Survivorship bias is real, but it misses an important piece of the story when it comes to software, which doesn't just survive but is also maintained. Sure, you may choose to discard or replace low-quality software and keep high-quality software in operation, which leads to survivorship bias, but the point here is that you also have a chance to find and fix issues in the software that survived, even if those issues weren't yet apparent in version 0.1. The author isn't saying that version 0.1 of 30-year-old software was of higher quality than version 0.1 of modern software -- they're saying that version 9 of 30-year-old software is better than version 0.1 of modern software.
In my experience, actively maintained but not heavily modified applications tend toward stability over time. It doesn't even matter whether the codebase is good or bad -- even a bad codebase will become less buggy over time if someone is working on bug fixes.
New code is the source of new bugs, whether that's an entirely new product, a new feature on an existing project, or a refactoring.
Well, yes, exactly. I'm not claiming that old code is more reliable just because it was written a long time ago; I'm claiming that old code is more reliable because of survivorship bias. If code was first written 20 years ago and is still in production, unchanged, I can be relatively certain there are no stop-the-world bugs in those lines. (This says nothing about how pretty the code is, though.)