> Chris Lattner, inventor of the Swift programming language, recently took a look at a compiler written entirely by Claude AI. Lattner found nothing innovative in the AI-generated code [1]. And this is why humans will be needed to advance the state of the art.
I’ve recently taken a look at our codebase, written entirely by humans, and found nothing innovative there. On the contrary, I see such brainrot that it makes me curious what kind of biology was needed to produce this outcome.
So maybe Chris Lattner, inventor of the Swift programming language, is safe; the majority of so-called “software engineers” sure as hell are not. Just like the majority of people are NOT splitting atoms.
Also: if that one particular AI-produced compiler has nothing innovative, that only means that the human "director" behind the AI didn't ask it to produce anything innovative; what it does not mean is that AI can never produce anything innovative in a compiler.
> if that one particular AI-produced compiler has nothing innovative, that only means that the human "director" behind the AI didn't ask it to produce anything innovative
Couldn't it also be true that the AI didn't produce innovative output even though the human asked it to produce something innovative?
Otherwise you're saying an AI always produces innovative output if it is asked to produce something innovative. And I don't think that is a perfection that AI has achieved. Sometimes AI can't even produce correct output when non-innovative output is requested.
> Couldn't it also be true that the AI didn't produce innovative output even though the human asked it to produce something innovative?
It could have been, but unless said human in this case was lying, there is no indication that they asked for anything innovative. In fact, what they have said is that they steered it towards things that make for a very conventional compiler architecture at this point, such as telling it to use SSA.
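(For anyone unfamiliar: SSA, static single assignment, is the conventional backbone of modern compiler IRs, where every variable is defined exactly once. A minimal toy sketch for straight-line code, with hypothetical names throughout, just to show the flavour:)

```python
# Hypothetical toy example: rewriting straight-line three-address code
# into SSA form by giving every redefinition of a variable a fresh,
# numbered name, so each name is assigned exactly once.
def to_ssa(instructions):
    version = {}                       # latest SSA version of each variable
    def use(name):
        # Operands never assigned (e.g. inputs a, b, c) keep their names.
        return f"{name}{version[name]}" if name in version else name
    out = []
    for dest, op, a, b in instructions:
        a, b = use(a), use(b)          # rename operands first
        version[dest] = version.get(dest, 0) + 1
        out.append((f"{dest}{version[dest]}", op, a, b))
    return out

prog = [("x", "+", "a", "b"),   # x = a + b
        ("x", "*", "x", "c"),   # x = x * c  (redefines x)
        ("y", "-", "x", "a")]   # y = x - a
for dest, op, a, b in to_ssa(prog):
    print(f"{dest} = {a} {op} {b}")
# x1 = a + b
# x2 = x1 * c
# y1 = x2 - a
```

Real SSA also needs phi-nodes wherever control flow joins; the sketch above only covers the straight-line case, but it shows why this is exactly the kind of well-trodden architecture you'd steer a model towards.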
> Otherwise you're saying an AI always produces innovative output
They did not say that. They suggested that the AI output closely matches what the human asks for.
> And I don't think that is a perfection that AI has achieved.
I won't answer for the person you replied to, but while I think AI can innovate, I would still 100% agree with this. It is of course by no means perfect at it. Arguably often not even good.
> Sometimes AI can't even produce correct output when non-innovative output is requested.
Sometimes humans can't either. And that is true for innovation as well.
But on this subject, let me add that in one of my first chats with GPT 5.1, I think it was, I asked it a question on parallelised parsing. That in itself is not entirely new, but it came up with a particular scheme for parallelised (GPU-friendly) parsing and compiler transformations that I have not found in the literature (I wouldn't call myself an expert, but I have kept tabs on the field for ~30 years). I might have missed something, so I intend to do a further literature search. It's also not clear how practical it is, but it is interesting enough that when I have time, I'll set up a harness to let it explore it further and write it up; irrespective of whether it'd be applicable to a production compiler, the ideas are fascinating.
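For context, the boring baseline that any such scheme competes with is chunked lexing: split the input, tokenize the pieces independently, concatenate the results. A minimal sketch of that baseline (hypothetical code, emphatically not the scheme from the chat):

```python
# Chunked lexing baseline: cut the input at whitespace so no token
# straddles a chunk boundary, lex the chunks independently, concatenate.
# Purely illustrative (threads plus a regex; nothing GPU about it).
from concurrent.futures import ThreadPoolExecutor
import re

TOKEN = re.compile(r"\d+|[A-Za-z_]\w*|[+\-*/()]|\s+")

def lex(text):
    # Tokenize one chunk, dropping whitespace tokens.
    return [m.group() for m in TOKEN.finditer(text) if not m.group().isspace()]

def parallel_lex(text, chunks=4):
    size = max(1, len(text) // chunks)
    cuts, i = [0], size
    while i < len(text):
        while i < len(text) and not text[i].isspace():
            i += 1                     # slide forward to a safe cut point
        cuts.append(i)
        i += size
    cuts.append(len(text))
    parts = [text[a:b] for a, b in zip(cuts, cuts[1:])]
    with ThreadPoolExecutor() as pool:
        return [tok for toks in pool.map(lex, parts) for tok in toks]

print(parallel_lex("alpha + 42 * (beta - 7)"))
# ['alpha', '+', '42', '*', '(', 'beta', '-', '7', ')']
```

Genuinely GPU-friendly schemes are far more involved (the classic trick is composing DFA transition functions with a parallel scan); the interesting part of any new proposal is how it handles tokens and grammar structure that straddle chunk boundaries.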
I’ve built some live programming systems in the past that are innovative, but not very practical, and now I’m trying to figure out how to get a 1.5B model (a small language model) into the pipeline of a custom small programming language. That is human-driven innovation, but an LLM is definitely very useful.
Office lives matter! Do you know how much PTSD I have from waiting for my morning latte in our office coffeeshop while being late for standup? All of it!
That's quite a leap, from riflemen who obviously didn’t want to die (did you expect them to rush in Medal of Honor style?) to system features meant to model office work. The whole essay is an incoherent mess written by one of those lonesome “no-bullshitters” who gets the job done but is so pulled down by modern-day bureaucracy that even his clairvoyance can’t get through.
> Dostoevsky wrote _The Brothers Karamazov_ alone. The Apollo Guidance Computer came from a team at MIT small enough to have real ownership, hierarchical enough that Margaret Hamilton's name could go on the error-detection routines she personally designed
I have good news for you, my jaded friend! What is similar between those people and you? You’re an individual! Therefore you could write another masterpiece yourself; you can be the next Notch, the next copyparty guy, the next Stardew Valley guy, adding to a long list of creations by actually high-performing individuals, not some complainer who is oh so encumbered by stupid social dancing.
> That's quite a leap, from riflemen who obviously didn’t want to die
Yeah but you'd think not dying involves killing those who want to kill you, or at least shooting at them! Isn't it super interesting to learn that 80% of riflemen don't ever shoot?
In a gunfight, you usually have to expose yourself at least a little bit in order to aim and fire. And let's say that you know an enemy soldier is around some corner, unaware, and you can pop out and shoot them. If there is another soldier aiming at your position, unbeknownst to you, you are dead.
In WW2 most shooting was covering fire, not targeted shots. That means people were not aiming their shots, just firing in the general direction of the enemy. If the 80% had done that, the other 20% would have been much more effective, with the only downside being increased ammo consumption.
a) another comment in the thread disproved the claim
b) even if it were remotely true, context matters. Refusing to shoot someone point blank because of reasons is one thing; refusing to go up against a Tiger 2 is another.
Yes, but that's also why the claim isn't true and has been criticized for years. It is so much more instinctive to simply pull the trigger, even in a panic, than to sit there and do nothing.
You seem to ignore the mountains of evidence that the sense of responsibility drops in groups. The larger the group, the bigger the drop. This is not news, nor is it nonsense.
sorry, that is just taken from my experience, and perhaps I am considering reasoning to be a broader category than others might.
To be lenient, I will separate out bugs caused by insufficient knowledge as not being failures in reasoning. Do you have forms of bugs in mind that you think are more common, are not arguably failures in reasoning, and should be considered?
on edit: a bug caused by lacking knowledge that I might not expect a competent developer to have is not a failure in reasoning, but a bug caused by lacking knowledge that I would expect a competent developer in the problem space to have is a failure in reasoning, in my opinion.
> as if he wasn't the one who clutched at some mythical "the university"
Next you'll tell me that anything below the Ivy League isn't considered a university, and kids should've known better and used their damn heads™ really hard to be born into better families so they could get a real university™ education.
I sure hope those "university professor friends" exist and you're not self-distancing, because you really need help with a mindset like that. Students are not your enemies, and LLMs are not out to get you. Seek help.
> No it won't. It really, really won't. You clearly don't have any university professors amongst your friends or acquaintances.
Maybe some fancy professors in their cushy Ivy League ivory towers won't, but a lot of teachers who work for a minimal salary sure will.
> Because the reality is that LLMs are nothing more than a party trick, a stats-based algorithm that gives you answers within a Gaussian curve.
A lot of humans can't even do that.
> Some of the students even have the audacity to challenge the professor's marking saying "but the AI said it is right" in relation to some basic math formula that the student should know how to solve with their own brain.
Students challenge professors over some stupid assumption, more news at 11.
> Trying to cheat your way through university with an LLM is a waste of the students time, a waste of the professors time and a waste of the university's infrastructure.
Who even said anything about cheating? Witch-hunting much? For the majority of layman topics an LLM will be a far superior offering, precisely because LLMs have no ego and will reply to the best of their abilities instead of chastising students for, oh God forbid, HAVING THE AUDACITY to disagree over a topic.
Which includes hallucination, reward hacking, over-confident delivery of completely wrong answers, etc.
> HAVING THE AUDACITY to disagree over a topic.
When we are discussing a long-standing, centuries-old textbook mathematical formula that is internationally recognised, there is no disagreement to be had.
If an LLM hallucinates and tells a student that the textbook mathematical formula is wrong, and the student has the audacity to complain to the professor on that basis, I see no issue with the professor firmly challenging the student. University is there to foster learning and reasoning using your own brain, not outsourcing it to a hallucinating LLM.