I don't think so. Ultimately there's not enough information in a prompt to produce "correct" code. And any attempt to deliver more information will result in a worse programming language or, as it is now, more iterations.
Many high-quality human programmers could go off and make a very good program from a simple description/prompt. I see no reason an LLM couldn’t do the same.
On top of that, there’s no reason an AI couldn’t ask additional questions to clarify certain details, just like a human would. Also, as this tech gets faster, the iteration process will get more rapid too: a human can give small bits of feedback to modify the “finished product” and get the results back in seconds.
English is a programming language now. That is what is being demonstrated here. Code is still being written; it just looks more like instructions given to a human programmer.
Eventually, human languages will be the only high-level programming languages. Everything else will be thought of the way we currently think of assembly code: a tool of last resort, used only in unusual circumstances when nothing else will do.
And it looks like "Eventually" means "In a year or two."
English is a programming language once you stop looking at or storing the output of the LLM. Like a binary. But I'm not seeing anybody store their prompts in a source repo and hook them directly up to their build pipeline.
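To be concrete, that's what "prompts as source" would actually mean: the prompt file is the checked-in artifact and the generated code is a build output, like a binary. A hypothetical build step might look like the sketch below; the model name and file paths are made up, and it assumes the OpenAI Python client, not anything anyone here actually runs.

    # Hypothetical "prompt as source" build step: the .prompt file is what
    # lives in the repo; the generated module is a disposable build artifact.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = Path("src/report_generator.prompt").read_text()
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system", "content": "Emit a single Python module, code only."},
            {"role": "user", "content": prompt},
        ],
    )
    Path("build/report_generator.py").write_text(resp.choices[0].message.content)

Until something like that is normal (and reproducible), the prompt isn't the source; the generated code is, and humans keep maintaining it.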
The point is that the roles are reversed, not that you hand ChatGPT to the stakeholders. ChatGPT is a programmer you hire for $30/month, and you act as its manager or tech lead.
This is pointless to argue, though, since it’s apparent there are people for whom this just doesn’t fit into their workflow, for whatever reason. It’s like arguing over whether to use an IDE.
But when the code doesn't meet the requirements, the AI needs to know what's incorrect and what changes to make, and that still requires a human. Unless you just put it into a loop and hope it eventually produces a working result.
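To spell out what "put it into a loop and hope" means, a naive version might look like this. Purely illustrative: it assumes the OpenAI Python client, an assumed model name, and a pytest suite that already encodes the requirements, which is exactly the part that requires a human.

    import subprocess
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    def generate_code(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # assumed model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    prompt = "Write candidate.py so that `pytest tests/` passes. Code only."
    for attempt in range(10):
        Path("candidate.py").write_text(generate_code(prompt))
        result = subprocess.run(["pytest", "tests/"], capture_output=True, text=True)
        if result.returncode == 0:
            break  # it worked, eventually
        # Feed the failure log back in and hope the next attempt is better.
        prompt += "\n\nThe previous attempt failed:\n" + result.stdout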
So what if you don't "just put it into a loop and hope" but actually build a complex AI agent with static code analysis capabilities, a graph DB, a working memory, etc.?
I'm doing just that and it works surprisingly well. Currently it's as good as people with 2-3 years of experience. Do you really believe it's not going to improve?
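As a rough illustration of the shape of that loop (nothing like my actual system: ast.parse stands in for real static analysis, a plain list stands in for the graph DB / working memory, and the OpenAI client and model name are just placeholders):

    import ast
    from openai import OpenAI

    client = OpenAI()
    memory: list[dict] = []  # toy stand-in for the graph DB / working memory

    def generate_code(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # assumed model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def static_errors(code: str) -> list[str]:
        # Minimal static check: does the candidate even parse?
        try:
            ast.parse(code)
        except SyntaxError as e:
            return [f"syntax error on line {e.lineno}: {e.msg}"]
        return []

    def agent_step(task: str, max_tries: int = 5) -> str:
        for _ in range(max_tries):
            # Retrieve lessons from earlier failed attempts at this task.
            lessons = "\n".join(m["lesson"] for m in memory if m["task"] == task)
            code = generate_code(task + "\n\nLessons from earlier attempts:\n" + lessons)
            errors = static_errors(code)
            if not errors:
                return code
            # Record what went wrong so the next attempt can avoid it.
            memory.append({"task": task, "lesson": "; ".join(errors)})
        raise RuntimeError("no statically clean candidate within the retry budget")

The point is that the analysis results feed back into the prompt as accumulated context, so it's a guided search rather than a blind retry.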
Now I'm making a virtual webcam so it has a face and you can talk to it on a Zoom meeting...