Hacker News

This is an experiment to see the current limit of AI capabilities. The end result isn't useful, but the fact is established that in Feb 2026, you can spend $20k on AI to get an inefficient but working C compiler.


Of course it's impressive. I am just pointing out that these experiments with the million-line browser and now this C compiler seem to greatly extrapolate their conclusions. The researchers claim they prove you can scale agents horizontally for economic benefit. But the products both of these built are of questionable technical quality, and it isn't clear to me they are a stable enough foundation to build on top of. But everyone in the hype crowd just assumes this is true. At least this researcher has sort of promised to pursue this project, whereas Wilson already pretty much gave up on his browser. I hadn't seen a commit in that repo for weeks. Given that, I am not going to immediately assume these agents truly achieved anything of economic value relative to what a smaller set of agents could have achieved.


> inefficient but working

FWIW, an inefficient but working product is pretty much the definition of a startup MVP. People are getting hung up on the fact that it doesn't beat gcc and clang, and generalizing to the idea that such a thing can't possibly be useful.

But clearly it can be, and is. This builds and boots Linux. A putative MVP might launch someone's dreams. For $20k!

The reflexive Luddism is kinda scary, actually. We're beyond the "will it work" phase and the disruption is happening in front of us. I was a Luddite 10 months ago. I was wrong.


> FWIW, an inefficient but working product is pretty much the definition of a startup MVP

It depends on what kind of start-up we're talking about.

A compiler start-up probably should show some kind of efficiency gain even in an MVP. As in: we're insanely efficient in this part of the work, and we're still missing all the other functionality, but we have a clear path to implementing the rest.

This is more like: it's inefficient, and the code is such a mess that I have no idea how to improve it.

As per the blog, improvements were attempted, but that only started a game of whack-a-mole with new problems.

If, on the other hand, you're talking about Claude Teams for writing code as the MVP: the outcome is more like proof that the approach doesn't work and you need humans in the loop.


You are projecting and overreacting. My response is measured against the insane hype this is getting beyond what was demonstrated. I never said it wasn't impressive.

I'm not hung up on anything. Clearly the project isn't stable, because it can't be modified without regression. It can be an MVP, but if it needs someone to rewrite it, or to spend many man-months just to grok the code before adding to it, then it's conceivable it isn't an economic win in the long run. Also, they haven't compared this to what a smaller set of agents could accomplish on the same task, so I am still not fully sold on the economic viability of horizontally scaling agents at this time (well, at least not on the task that was tested).


> The end result isn't useful

Then, as your parent comment asked, is there value in it? $20K, which is more than the yearly minimum wage in several European countries, was spent recreating a worse version of something we already have, just to see if it was possible, using a system that increases inequality and worsens climate change, which is causing people to die.



