Rather than feeling scared, I think you can use this to productively guide you.
Look through the prompts he is using. Do they strike you as something a random person could produce? Not really. They show that Simon has an excellent understanding of SQLite and how to do benchmarks. All ChatGPT does for him is speed up the typing. My experience has been that the quality of the output is heavily correlated with the quality of the input. Good, clear prompts give good output. I postulate that if Simon had a worse understanding of benchmarks and SQLite, he would not have gotten output this good.
If you as a developer make your money by doing what ChatGPT is doing (turning clear instructions into working code), then you are going to be automated away. If you make your money by having a good understanding of the tools you use and by communicating that understanding clearly, then you'll simply have to type less in the future.
The silver lining is that even without GPT, a good understanding of your tools and clear communication were always the more important skills to learn. ChatGPT just solidifies the existing structure.
For the CRUD app writers - yeah, this is bad news. But it was always clear that that was a bubble. If you're actually solving new problems, it works more like code completion.
Unix was written in assembly with the ed editor. Even Notepad and any modern language probably represent a 100x increase in productivity compared to that - much more than ChatGPT can offer. The field of software engineering will only grow from this.
From what I've seen, most places have way more work than available programmers. Jira tickets stay unresolved for years. Maybe with the increased productivity we can actually clear our backlogs one day.
Not sure about you, but I also have a huge backlog of projects that don't exist yet and that I would like to create. But usually they're too complex to do over a weekend, and I don't have that much time to spare.
I already tried to approach one of those projects, but I quickly failed: GPT's knowledge was too outdated for the library I needed to use, and I didn't figure out how to "patch" its knowledge.
In this case I'm not very familiar with the stack I needed for that project (and I think future versions of LLMs could get better at bridging that gap), but for tasks that I'm more familiar with I noticed a significant increase in productivity.
I hope there's still room for improvement. I haven't been able to get ChatGPT to do anything coherent for stuff with > 5 functions or logic that is even slightly complex. It likewise frequently stumbles/hallucinates when trying to integrate revisions. This is all using GPT-4 as well.
What it's absolutely fantastic at is the small, one-off scripts that were tedious and time-consuming to write, integrating well-specified changes into existing code, tests, and boilerplate config stuff.
I know this is all "for now" talk, but it will be interesting to see how quickly, if at all, it can get to production-ready code in the absence of a pretty thorough review by someone who knows how stuff actually works. Natural language is a fantastic new interface, but (for now) you still have to know how to describe stuff that actually works.
My experience has been similar. Not to say it hasn't been helpful in 'larger' apps, but I've needed to scaffold/stub out the functions myself, and then verify that the GPT-4-generated code passes the appropriate tests.
I'm not scared. ChatGPT produces shitty and insecure code which nobody should just trust.
/edit: also, most of the stuff I do is so fringe that ChatGPT probably doesn't know about it. I'm currently upgrading our Spring Boot/Spring Security stack to 3/6 and ACL is broken. There isn't a single sample or SO question on the internet for that. So how can ChatGPT solve this? Answer: it won't.
> ChatGPT produces shitty and insecure code which nobody should just trust.
Today. What about a couple of years from now? We're seeing stuff that was sci-fi 5 years ago, who knows what we'll have in another 5 years? I think that pretty much everyone that's not very close to retirement and whose job is performed while sitting on a chair should be concerned about a not so distant future where their job either disappears or is transformed radically.
That's why I used the expression "who knows what we'll have in another 5 years". It's the possibility of AI continuing to advance at this pace that scares me.
I'm not so sure. I've been studying this stuff for a decade now, and we've seen multiple of these "AI gold rushes" after which an "AI winter" followed. Let's wait and see.
I don't think we'll see knowledge transfer or interpolation from non-existent data, which is what would not scare but excite me. Current machine learning just extrapolates from the data it has seen. Garbage in, garbage out. No data in, randomness out. So I'm not scared.
How do you define extrapolation at the scale LLMs operate at? Even if you work with software the model hasn't seen, it seems sufficient for it to "understand" the documentation and code examples to orient itself toward helpful context.
That's why I'm scared. Once embedding whole codebases becomes viable, I expect many opinions to change too.
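To make that concrete, here's roughly what I have in mind - a toy sketch of embedding code chunks and retrieving the relevant ones as prompt context. The model name, the fake chunks, and the chunk-per-function idea are all just illustrative assumptions, not how anyone's product actually does it:

    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # any text-embedding model would do

    # Pretend these are chunks pulled from a codebase (e.g. one function per chunk).
    chunks = [
        "def connect(dsn): ...  # opens a database connection",
        "def run_benchmark(queries, n=1000): ...  # times each query n times",
        "class AclEntry: ...  # maps a user to a permission on a domain object",
    ]

    chunk_vecs = model.encode(chunks, normalize_embeddings=True)

    def retrieve(question, k=2):
        """Return the k chunks most similar to the question (cosine similarity)."""
        q_vec = model.encode([question], normalize_embeddings=True)[0]
        scores = chunk_vecs @ q_vec  # dot product == cosine sim on normalized vectors
        return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

    # The retrieved chunks would then be pasted into the LLM prompt as context.
    print(retrieve("how do we time database queries?"))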
I invite you to play around with the code it generates. For example, yesterday I tasked it with generating a gpt-3.5-turbo API client. In the time I needed to get it running I could've written it myself. And that's < 100 lines. Don't even get me started on architecture, contextual decisions, clean code, etc.
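For reference, the kind of client I mean is roughly this - a bare-bones sketch against the HTTPS chat-completions endpoint, assuming an OPENAI_API_KEY environment variable and skipping error handling, retries, and streaming entirely:

    import os
    import requests

    API_URL = "https://api.openai.com/v1/chat/completions"

    def chat(prompt, model="gpt-3.5-turbo"):
        # Single-turn request: one user message, return the assistant's reply text.
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={"model": model, "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        print(chat("Say hello in one sentence."))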
Interesting that it falls so short for you - maybe the stuff you do is indeed novel and would require lots of instructing before it could help you.
Personally, my use cases involve quite standalone applications. Copy-pasted from another thread (using GPT-4):
My personal use of GPT-4 (also daily) is: correcting and rephrasing the spelling in my brain dumps, making Python plots (stylize, convert, add subplots and labels, handle indexing when things get inverted), making short shell scripts (generate 2FA codes, log into a VPN through the console using 2FA, disable the keyboard, etc.), and helping debug my code (my situation is this, here's some code, what do you suggest?).
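To give an idea of the scale of these tasks, here's the sort of plot script I mean - the data, labels, and filename are made up for illustration:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 2 * np.pi, 200)

    # Two side-by-side subplots with labels and a legend, then save to a file.
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharex=True)

    ax1.plot(x, np.sin(x), label="sin(x)")
    ax1.set_title("signal")
    ax1.set_xlabel("t [s]")
    ax1.set_ylabel("amplitude")
    ax1.legend()

    ax2.plot(x, np.cos(x), color="tab:orange", label="cos(x)")
    ax2.set_title("quadrature")
    ax2.set_xlabel("t [s]")
    ax2.legend()

    fig.tight_layout()
    fig.savefig("plots.png", dpi=150)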
I would agree with summarization and NLP-driven tasks, and I will actually add ChatGPT to my side gig for that. But code-wise it's not as big of a help as I'd like.
Good for you that it actually helps with coding! I do like Copilot a lot for auto-completion, though, but the rest of my work apparently is too complex, yes.
I wanna play around with "baby/teenageAGI" today, but I don't have high hopes. ChatGPT hallucinates stuff together, nothing more. It would be cool if it could solve small Jira tickets, but I don't think it will be helpful.
It clearly can - it's not like ChatGPT just refuses to answer if your prompt isn't, word for word, exactly the same as something in its training data. Otherwise it would just be a search engine.
"Can" is a bit of a stretch imho. It can produce something for every input. Humans are way more accurate in that regard. ChatGPT just feels correct but is mostly wrong.
It's more that its training objective doesn't require it to be correct, which is a completely separate thing from whether or not it can generalise to things it hasn't seen before.
I am. And I fear developers using Copilot, ChatGPT etc. are helping to make this happen.
I'm not a luddite - actually quite the opposite, I'm all for the machines doing all the work. The thing is, I fear this is going to happen so quickly that even if governments and institutions had the intention to do something about it, they wouldn't be able to do it in time. So whereas I wouldn't do anything to stop progress on this front, I also don't think I'd be willing to help get there (i.e. by using Copilot and such).
I also fear that even if I don't lose my job it will change drastically, like that story (from Reddit, I think) that made it to HN a few days ago, where a passionate 3D artist was now prompting Midjourney or whatever model and then doing some Photoshop work on top of it.
That last part is what I'm expecting: a lot of developers have been able to convince ourselves that our interests are aligned with upper management's due to above-average pay and treatment, but that view is pretty heavily skewed one way, and a lot of companies resent the labor cost and lack of obsequiousness. Those places are going to see this as an opportunity to lower wages by trying to turn the job into "just cleaning up what GPT creates".
Aren't ML models trained with a reward/punishment loop? In supervised learning it's a loss function minimized via backpropagation; in reinforcement learning it's an explicit reward signal. If we have an AI that can learn from its experience, then pain will be failure to reach its goals and pleasure will be the opposite. Animals work in exactly the same way (the goal being survival and reproduction). We just need to make sure that survival and reproduction are never the goal (either directly or indirectly) of an AI, and we should be safe.
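To be concrete about the analogy (and it is a loose one): in supervised training there's no explicit reward, just a loss ("punishment") that gradient descent keeps pushing toward zero - something like this toy sketch, where the model, data, and learning rate are all made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(100, 1))
    y = 3.0 * x + 1.0                      # the "goal" the model should reach

    w, b = 0.0, 0.0                        # a tiny linear model: pred = w*x + b
    lr = 0.1

    for step in range(200):
        pred = w * x + b
        loss = np.mean((pred - y) ** 2)        # failure to reach the goal = "pain"
        grad_w = np.mean(2 * (pred - y) * x)   # backprop reduces to these gradients here
        grad_b = np.mean(2 * (pred - y))
        w -= lr * grad_w                       # each update makes the "punishment" smaller
        b -= lr * grad_b

    print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")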