
Ok but this is extremely new tech, all of that stuff will get better over time, and the AI will require less and less intervention.



I don't think so. Ultimately there's not enough information in prompts to produce "correct" code, and any attempt to deliver more information will result in a worse programming language or, as it is now, more iterations.


Many high-quality human programmers could go off and make a very good program from a simple description/prompt. I see no reason an LLM couldn’t do the same.

On top of that, there’s no reason an AI couldn’t ask additional questions to clarify certain details, just like a human would. Also as this tech gets faster, the iteration process will get more rapid too, where a human can give small bits of feedback to modify the “finished product” and get the results in seconds.


English is a programming language now. That is what is being demonstrated here. Code is still being written; it just looks more like instructions given to a human programmer.

Eventually, human languages will be the only high-level programming languages. Everything else will be thought of the way we currently think of assembly code: a tool of last resort, used only in unusual circumstances when nothing else will do.

And it looks like "Eventually" means "In a year or two."


English is a programming language once you stop looking at or storing the output of the LLM. Like a binary. I'm not seeing anybody storing their prompts in a source repo and hooking them directly up to their build pipeline.
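(For concreteness, a minimal sketch of what that could even look like, treating the prompt file checked into the repo as the "source". call_llm is a stand-in for whatever completion API you'd use, and the file paths are made up.)

    # Hypothetical build step: regenerate a module from a prompt stored in the repo.
    import hashlib
    import pathlib

    def call_llm(prompt: str) -> str:
        # Stand-in for a real completion API call.
        raise NotImplementedError("wire up your model provider here")

    def build(prompt_path: str, out_path: str) -> None:
        prompt = pathlib.Path(prompt_path).read_text()
        code = call_llm(prompt)
        # Stamp the output with the prompt hash so you can tell which "source" built it.
        digest = hashlib.sha256(prompt.encode()).hexdigest()[:12]
        pathlib.Path(out_path).write_text(f"# generated from {prompt_path} ({digest})\n" + code)

    if __name__ == "__main__":
        build("prompts/parser.md", "src/parser.py")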


We'll be adding flaky code gen to our flaky tests, because someone will do this.


What programming language do your stakeholders use to communicate their ideas during planning meetings? Unfortunately, mine can only speak English…


The issue is that they speak English, think in English, and want X in English, but in reality they need Y.

ChatGPT will not help with that.


The point is that the roles are reversed, not that you give ChatGPT to the stakeholders. ChatGPT is a programmer you hire for $30/month, and you act as its manager or tech lead.

This is pointless to argue, though, since it’s apparent there are people for whom this just doesn’t fit into their workflow for whatever reason. It’s like arguing over whether to use an IDE.


Seems like if it can eventually test whether the output meets the criteria, then it will excel.


But when the code doesn't meet the requirements, the AI needs to know what's incorrect and what changes it needs to make, and that still requires a human. Unless you just put it into a loop and hope that it produces a working result eventually.
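(The naive version of that loop is easy to sketch, for what it's worth. call_llm is a placeholder for your model call, and I'm assuming the project's tests run with pytest.)

    # "Loop and hope": regenerate until the tests pass or we give up.
    import subprocess

    def call_llm(prompt: str) -> str:
        # Stand-in for a real completion API call.
        raise NotImplementedError("wire up your model provider here")

    def generate_until_green(prompt: str, out_file: str, max_attempts: int = 5) -> bool:
        feedback = ""
        for _ in range(max_attempts):
            code = call_llm(prompt + feedback)
            with open(out_file, "w") as f:
                f.write(code)
            result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
            if result.returncode == 0:
                return True
            # Feed the failing output back in and try again.
            feedback = "\n\nThe previous attempt failed these tests:\n" + result.stdout
        return False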


So what if you don't "just put it into a loop and hope" but actually build a complex AI agent with static code analysis capabilities, a graph DB, working memory, etc.?

I'm doing just that and it works surprisingly well. Currently it's about as good as a developer with 2-3 years of experience. Do you really believe it's not going to improve?
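(Roughly, the core loop looks like this sketch: generate, run a static analyzer, keep the findings in working memory, retry with that context. call_llm is a placeholder, pyflakes stands in for whatever analysis you plug in, and the graph DB piece is omitted here.)

    # Sketch of the generate -> analyze -> remember -> retry loop.
    import subprocess

    def call_llm(prompt: str) -> str:
        # Stand-in for a real completion API call.
        raise NotImplementedError("wire up your model provider here")

    def analyze(path: str) -> str:
        # Any static analyzer works here; pyflakes is just an example.
        result = subprocess.run(["pyflakes", path], capture_output=True, text=True)
        return result.stdout + result.stderr

    def agent_step(task: str, path: str, memory: list[str], max_rounds: int = 3) -> None:
        for _ in range(max_rounds):
            prompt = task + "\n\nKnown issues so far:\n" + "\n".join(memory)
            code = call_llm(prompt)
            with open(path, "w") as f:
                f.write(code)
            findings = analyze(path)
            if not findings:
                return
            memory.append(findings)  # working memory of past analysis results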

Now I'm making a virtual webcam so it has a face and you can talk to it on a Zoom meeting...


Do you have a presentable demo? An LLM augmented by static code analysis sounds very interesting.


I don't have GPT-4 API access yet... I'm using my ChatGPT Plus subscription so far. I'll make a release once I get API access.



