PM Notifications by email Jul 17, 2011 | Rei
Long overdue, but finally you will receive an email notification ( generally within 5 minutes ) when a new PM is sent to you.
Stats Patches Jul 16, 2011 | Rei
Fixed a major error in how the stats server handled deletes.

If the parent node in the linked list was removed, the static reference to it was not updated. Which would cause some massive errors: seg faults and bus errors whenever other functions that expected the parent node to be non-null got called.
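Roughly the shape of the fix, in toy form ( not the actual server code, names made up for illustration ): walk the list through a pointer-to-pointer, so unlinking the very node that the static reference points at updates that reference too, and nothing is left pointing at freed memory.

```c
#include <stdlib.h>

typedef struct Node {
    int id;
    struct Node *next;
} Node;

/* The static reference that went stale in the bug: if the node it
 * pointed at was freed, later calls dereferenced dead memory. */
static Node *head = NULL;

static void push(int id) {
    Node *n = malloc(sizeof *n);
    n->id = id;
    n->next = head;
    head = n;
}

/* Walk via a pointer-to-pointer so that deleting the head node
 * rewrites the static `head` reference itself, not a stale copy. */
static void delete_node(int id) {
    Node **link = &head;
    while (*link) {
        if ((*link)->id == id) {
            Node *dead = *link;
            *link = dead->next;   /* relink past the removed node */
            free(dead);
            return;
        }
        link = &(*link)->next;
    }
}
```

The same pattern covers a "parent" pointer kept anywhere static: route every unlink through the address of the pointer, never through a cached value of it.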

It was nice watching the debug spew in my terminal, but the funny thing is, displaying that information is more computationally expensive than running the rest of the program! lol If you understand MySQL, you'd know it logs data to binary logs and relay logs, so not wasting cycles on those operations is yet another optimization on our part.

Made the debug spew conditional on a debug flag that can be set with my PHP admin interface.
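The idea, sketched ( names hypothetical, not the server's real API ): gate the logging call behind a runtime flag, so when debugging is off the expensive formatting and I/O never happen at all.

```c
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical flag, toggled at runtime ( e.g. from an admin interface ). */
static int debug_flag = 0;

/* Returns the number of characters written, or 0 when debugging is off.
 * The early return skips the formatting and terminal I/O entirely. */
static int debug_log(const char *fmt, ...) {
    if (!debug_flag)
        return 0;
    va_list ap;
    va_start(ap, fmt);
    int n = vfprintf(stderr, fmt, ap);
    va_end(ap);
    return n;
}
```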

Also wrote a quick stats function that prints to stdout a few counters ( connections, max id, num records ).
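Something along these lines ( counter names are my stand-ins, not the server's actual variables ):

```c
#include <stdio.h>

/* Hypothetical counters standing in for the server's real ones. */
static unsigned long num_connections = 0;
static unsigned long max_id = 0;
static unsigned long num_records = 0;

/* Format the counters into buf; returns the number of characters. */
static int format_stats(char *buf, size_t len) {
    return snprintf(buf, len,
                    "connections: %lu, max id: %lu, num records: %lu",
                    num_connections, max_id, num_records);
}

/* The quick stats function: dump the counters to stdout. */
static void print_stats(void) {
    char buf[128];
    format_stats(buf, sizeof buf);
    puts(buf);
}
```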

And rewrote the SELECT * WHERE id function to print the record to stdout.

I was kinda surprised that "SELECT * WHERE id" was made redundant. Being able to move client code into the server is... beautifully efficient.

Previously, on MySQL, we'd look up your record ID using keys, then fetch your record or create it, then update it if needed: three operations. I moved all of that into one operation; the server now handles the other two on its own.
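The consolidated round trip might look something like this ( a toy sketch, not the server's actual code ): the client sends just the ID, and the server does the fetch-or-create and the update in a single pass.

```c
#include <stdlib.h>

typedef struct Record {
    int id;
    int hits;               /* whatever per-user stat is being tracked */
    struct Record *next;
} Record;

static Record *records = NULL;

/* One server-side operation replacing the old three-step MySQL dance:
 * look up the record, create it if missing, and apply the update. */
static Record *upsert(int id) {
    for (Record *r = records; r; r = r->next) {
        if (r->id == id) {
            r->hits++;                  /* update in place */
            return r;
        }
    }
    Record *r = calloc(1, sizeof *r);   /* create if missing */
    r->id = id;
    r->hits = 1;
    r->next = records;
    records = r;
    return r;
}
```

The win is that the find/create/update steps never cross the client-server boundary more than once.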

The code and server look and feel pretty solid so far.

It'll need a full day of testing, plus I have yet to run the "Delete all records older than 24 hours" function ...

I'll know in a week ( unless something breaks before then ) if this project is ready to be marked complete.

Using auto-increment IDs as keys in a binary tree proved not possible ( monotonically increasing keys insert in sorted order and degenerate the tree into a linked list )... however, looking at these CRC32s, I might have an option with them!
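Why CRC32s could work where sequential IDs didn't: the hash scatters consecutive IDs across the 32-bit key space, so a plain unbalanced binary search tree keyed on the CRC stays reasonably bushy. A self-contained bitwise CRC32 ( standard IEEE 802.3 polynomial ) for illustration:

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC32 ( reflected, polynomial 0xEDB88320 ), no lookup table. */
static uint32_t crc32_ieee(const unsigned char *buf, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}
```

Hashing the record ID's bytes through this ( or zlib's table-driven `crc32()`, which is much faster ) gives tree keys with no sorted-insertion pathology.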

Oh! Almost forgot another awesome optimization.

That Users Online thing at the bottom?

I calculated it meant an O(n) traversal on every single page view. Since we're doing about 10+ pages a second, walking that list several times a second just seemed a stupid waste. I don't know how good MySQL's query cache is, but it's got nothing on this. I cache the time the query was made, hold the result for 3 seconds, and send the cached count. Thus, we traverse the list only once every 3 seconds instead of ~30 times, which works out to roughly a 3000% speedup on that query. Muahahah well i do suck at math. lol
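The whole cache is a couple of statics ( sketched here with a stand-in for the real traversal ):

```c
#include <time.h>

static int traversal_count = 0;   /* how many times the O(n) walk ran */

/* Stand-in for the real O(n) walk over the online-users list. */
static int count_online(void) {
    traversal_count++;
    return 42;                    /* pretend 42 users are online */
}

static int cached_count = -1;
static time_t cached_at = 0;

/* Serve a cached count, re-running the traversal at most once every
 * 3 seconds no matter how many page views come in. */
static int users_online(time_t now) {
    if (cached_count < 0 || now - cached_at >= 3) {
        cached_count = count_online();
        cached_at = now;
    }
    return cached_count;
}
```

At 10+ page views a second, only 1 in ~30 calls ever touches the list.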
Stats Server live beta Jul 16, 2011 | Rei
All the features needed to replace the MySQL version have been coded, and uploaded to the main server now.

I'll be keeping an eye on the performance today.

Since updates are O(n), it should slow down as more and more records are added ...

Should that become a big issue, I have a fairly simple way to optimize updates to O(1). ;-)
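The post doesn't say what that way is; one common route is a hash index over record IDs, so an update becomes a bucket lookup plus a short chain scan instead of a full list walk. Purely a hypothetical sketch:

```c
#include <stdlib.h>

#define NBUCKETS 1024            /* arbitrary; power of two for cheap masking */

typedef struct Entry {
    int id;
    int value;
    struct Entry *next;          /* chain for bucket collisions */
} Entry;

static Entry *buckets[NBUCKETS];

/* O(1) expected time: hash straight to the bucket, scan a short chain. */
static Entry *find_or_create(int id) {
    unsigned b = (unsigned)id & (NBUCKETS - 1);
    for (Entry *e = buckets[b]; e; e = e->next)
        if (e->id == id)
            return e;
    Entry *e = calloc(1, sizeof *e);
    e->id = id;
    e->next = buckets[b];
    buckets[b] = e;
    return e;
}
```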

Rebuilding the index at midnight is another operation I'll be curious about...

Will it lock for a fraction of a second?
For a second?
For seconds?