I've seen a few solutions in this space that use an RDBMS as a glorified spool file. So, append log entries to PG or MySQL or whatever over a REST endpoint (like the one Splunk exposes to writers), and then have a few workers (for fault tolerance) that read the 100K oldest entries in the table every few seconds, stick them into the "real-time" system, delete them from the DBMS, and commit.
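A minimal sketch of one such drain worker, using sqlite3 as a stand-in for PG/MySQL (the `log` table, `drain_batch` name, and `forward` callback are all illustrative, not from any specific product):

```python
import sqlite3

def drain_batch(conn, forward, batch_size=100_000):
    """Read the oldest batch_size entries, hand them to the downstream
    'real-time' system, then delete them and commit in one transaction."""
    rows = conn.execute(
        "SELECT id, payload FROM log ORDER BY id LIMIT ?", (batch_size,)
    ).fetchall()
    if not rows:
        return 0
    # Push the batch downstream; only delete once the handoff succeeds,
    # so a crashed worker just re-delivers the same batch (at-least-once).
    forward([payload for _, payload in rows])
    conn.executemany("DELETE FROM log WHERE id = ?", [(rid,) for rid, _ in rows])
    conn.commit()
    return len(rows)
```

With Postgres and several concurrent workers, the SELECT would typically be `SELECT ... FOR UPDATE SKIP LOCKED` so two workers never drain the same batch.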

I've never understood why this isn't just done better by the downstream product, though. It's not that hard to implement a performant write-ahead log from scratch.

(Note that you can scale out the above arbitrarily, since there's no reason to limit yourself to one worker or one DBMS.)
