As expected, my code wasn't perfect. We had a few DoS attacks (defined as more than 20 page requests in 2 seconds from one individual) that caused an unexplained termination of the stats server.
The first time it happened, it took the whole site down with it. I recoded a few things so that if the stats server dies, the site continues to run without it.
Simplified several parts of the code, traced the program execution, and recreated the scenario. At first I didn't realize why it was exiting at a dprintf() call.
Uploaded patch ss-0.3, which ran until the next DoS attack. It crashed again, but this time the site stayed up.
Looked over the code again... and noticed dprintf() was being called twice when it should have been called only once. I added a return value to the function that might call dprintf() so it reports whether it did, and the caller can skip the second call.
Hopefully this is the last time I have to deal with it....
Still worried about the simplified while loops, though. They could go infinite. I'll know by tomorrow if the code is solid.
Long overdue, but you will finally receive an email notification (generally within 5 minutes) when a new PM is sent to you.
Fixed a major error in how it handled deletes.
If the parent node in the linked list was removed, the static reference to it was not updated, which caused massive errors: seg faults and bus errors when other functions that expected the parent node to be non-null were called.
It was nice watching the debug spew in my terminal, but the funny thing is, displaying that information is more computationally expensive than running the rest of the program! lol If you know MySQL, you know it writes data to binary logs and relay logs, so not wasting cycles on those operations is yet another optimization on our part.
Made the debug spew controllable with a Debug Flag that can be set from my PHP Admin Interface.
Also wrote a quick stats function that prints a few counters (connections, max id, num records) to stdout.
And rewrote the SELECT * WHERE id function to print the record to stdout.
Was kinda surprised that "SELECT * WHERE id" was made redundant. Being able to move client code into the server is.... beautifully efficient.
Previously, on MySQL, we'd look up your record ID using keys, then fetch your record or create it, then update it if needed: three operations. I moved all of that into one operation; the server now handles the other two on its own.
The code and server look and feel pretty solid so far.
It'll need a full day of testing, plus I have yet to run the "Delete all records older than 24 hours" function...
I'll know in a week (unless something breaks before then) if this project is ready to be marked complete.
Using auto-increment IDs in a binary tree proved not possible... they arrive in sorted order, so an unbalanced tree degenerates into a linked list. However, looking at these CRC32s, I might have an option with them! Hashed keys insert in effectively random order, which keeps the tree shallow.
Oh! Almost forgot another awesome optimization.
That Users Online thing at the bottom?
Calculated that it meant O(n) operations on every single page view. Since we're doing 10+ pages a second, traversing that list several times a second just seemed a stupid waste. I don't know how good MySQL's query cache is, but it's got nothing on this.
I cache the count along with the time the query was made, hold it for 3 seconds, and send the cached count until it expires. Thus, we traverse the list only once every 3.9999 seconds. Practically a 4000% increase in speed. Muahahah. Well, I do suck at math. lol