Not many companies are willing to maintain a compiler... but LLMs will change that. An LLM can find bugs in the code if the "compiler guru" is out on vacation that day. And yes, you will still need a "compiler guru", but one who uses the LLM and works at a much higher level.
I'm desperately looking forward to, like, 5-10 years from now when all the "LLMs are going to change everything!!1!" comments have all but completely abated (not unlike the blockchain stuff of ~10 years ago).
No, LLMs are not going to replace compiler engineers. Compilers are probably one of the least likely areas to profit from extensive LLM usage in the way that you are thinking, because they are principally concerned with correctness, and LLMs cannot reason about whether something is correct — they only can predict whether their training data would be likely to claim that it is correct.
Additionally, each compiler differs significantly in the minute details. I simply wouldn't trust the output of an LLM to be correct, and the time wasted on determining whether it's correct is just not worth it.
Stop eating pre-chewed food. Think for yourself, and write your own code.
I bet you could use LLMs to turn stupid comments about LLMs into insightful comments that people want to read. I wonder if there’s a startup working on that?
A system outputting correct facts tells you nothing about that system's ability to prove the correctness of facts. You cannot assert that property of a system by treating it as a black box. If you are able to treat LLMs as a white box and prove correctness about their internal states, you should tell that to some very important people; that is an insight worth a lot of money.
As usual, my argument brought all the people out of the woodwork who have some obsession about an argument that's tangential. Sorry to touch your tangent, bud.
LLMs (or LLM-assisted coding), if successful, will more likely drive the number of compilers down, since LLMs are better with mainstream languages than with niche ones. Same effect as with frameworks. Fewer languages, fewer compilers needed.
First, LLMs should be happy to use made-up languages described in a couple thousand tokens without issue (you just need a good, LLM-friendly description and some examples), plus a compiler they can iterate with and get feedback from.
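To give a sense of what "a couple thousand tokens" buys you, here is a toy sketch of the kind of LLM-friendly description I mean (the language is invented purely for illustration): a short informal grammar plus a worked example, with the real compiler filling in the rest through feedback.

    TinyStack (hypothetical): a stack-based scripting language.
      push <int>   push an integer literal onto the stack
      add          pop two values, push their sum
      print        pop one value, write it to stdout
    Example, prints 5:
      push 2
      push 3
      add
      print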
Second, LLMs heavily reduce ecosystem advantage. Before LLMs, the presence of libraries for common use cases (to save myself time) was one of the main deciding factors in my choice of language.
Now? The LLM will happily implement any utility or API client library I want, given a description of the API. It may even end up more thoroughly tested than the average open-source library.
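For what it's worth, the "library" in question is usually just a thin typed wrapper over HTTP, which is exactly the kind of thing an LLM churns out reliably. A minimal Go sketch of what I mean (the endpoint and field names are invented for illustration, not a real service):

    package widgets

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // Widget mirrors the JSON payload of a hypothetical Widgets REST API.
    type Widget struct {
        ID   string `json:"id"`
        Name string `json:"name"`
    }

    // Client is a thin wrapper around the API; BaseURL is illustrative.
    type Client struct {
        BaseURL string
        HTTP    *http.Client
    }

    // GetWidget fetches a single widget from GET {BaseURL}/widgets/{id}.
    func (c *Client) GetWidget(id string) (*Widget, error) {
        resp, err := c.HTTP.Get(fmt.Sprintf("%s/widgets/%s", c.BaseURL, id))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("get widget %s: status %d", id, resp.StatusCode)
        }
        var w Widget
        if err := json.NewDecoder(resp.Body).Decode(&w); err != nil {
            return nil, err
        }
        return &w, nil
    }

Add a handful of tests against a stub server and it is already better exercised than many one-off client libraries.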
Have you tried having an LLM write significant amounts of, say, F#? Real language, lots of documentation, definitely in the pre-training corpus, but I've never had much luck with even mid-sized problems in languages like it -- ones where today's models absolutely wipe the floor in JavaScript or Python.
Even best-in-class LLMs like GPT-5 or Sonnet 4.5 do noticeably worse in languages like C#, which are pretty mainstream but not on the level of TypeScript and Python, to the degree that I don't think they can reliably output production-level code without a crazy level of oversight.
And this is for generic backend stuff, like a CRUD server with a REST API; the same thing with an Express/Node backend works with no trouble.
I’m doing Zig and it’s fine, though not significant amounts yet. I just had to have it synthesize the latest release changelog (0.15) into a short summary.
To be clear, I mean specifically using Claude Code, with preloaded sample context and giving it the ability to call the compiler and iterate on it.
I’m sure one-shot results (like asking Claude via the web UI and verifying after one iteration) will go much worse. But if it has the compiler available and writes tests, it shouldn’t be an issue. It’s possible this costs 2-3 more back-and-forths with the compiler, but that’s an extra couple of minutes, tops.
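Concretely, "has the compiler available" is nothing exotic: the agent shells out to build and test commands and reads the diagnostics back before editing again. A rough Go sketch of that loop (Go because it's what I usually work with; the project path is hypothetical, and the actual "model edits the code" step is elided):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runStep executes a command in dir and returns its combined output
    // plus whether it succeeded.
    func runStep(dir string, name string, args ...string) (string, bool) {
        cmd := exec.Command(name, args...)
        cmd.Dir = dir
        out, err := cmd.CombinedOutput()
        return string(out), err == nil
    }

    func main() {
        const dir = "./project" // hypothetical project path

        for attempt := 1; attempt <= 3; attempt++ { // the "2-3 more back-and-forths"
            if out, ok := runStep(dir, "go", "build", "./..."); !ok {
                fmt.Printf("attempt %d: build failed, feeding errors back:\n%s\n", attempt, out)
                continue // the model patches the code here, then we retry
            }
            if out, ok := runStep(dir, "go", "test", "./..."); !ok {
                fmt.Printf("attempt %d: tests failed, feeding output back:\n%s\n", attempt, out)
                continue
            }
            fmt.Println("build and tests green")
            return
        }
        fmt.Println("still failing after 3 attempts; needs human review")
    }

The same shape applies with the Zig toolchain, just with different build and test commands.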
In general, even if working with Go (what I usually do), I will start each Claude Code session with tens of thousands of tokens of context from the code base, so it follows the (somewhat peculiar) existing code style / patterns, and understands what’s where.
See, I'm coming from the understanding that language development is a dead-end in the real world. Can you name a single language made after Zig or Rust? And even those languages haven't taken over much of the professional world. So when I say companies will maintain compilers, I mean DSLs (like Starlark or RSpec), application-specific languages (like CUDA), variations on existing languages (maybe C++ with some in-house rules baked in), and customer-facing config languages for advanced systems and SaaS applications.
Yes, several, e.g., Gleam, Mojo, Hare, Carbon, C3, Koka, Jai, Kotlin, Reason ... and r/ProgrammingLanguages is chock full of people working on new languages that might or might not ever become more widely known ... it takes years and a lot of resources and commitment. Zig and Rust are well known because they've been through the gauntlet and are well marketed ... there are other languages in productive use that haven't fared well that way, e.g., D and Nim (the best of the bunch and highly underappreciated), Odin, V, ...
> even those languages haven't taken over much of the professional world.
Non sequitur goalpost moving ... this has nothing to do with whether language development is a dead-end "in the real world", which is a circular argument when we're talking about language development. The claim is simply false.
Bad take. People said the same about C/C++, and now Rust and Zig are considered potential rivals. The ramp-up is slow and there's never going to be a moment of viral adoption the way we're used to with SaaS, but change takes place.
For that, they would need to make LLMs not suck at easy programming tasks. Considering that with all the research and money poured into it they still suck at easy stuff, I'm not optimistic.