I have not heard or read anything about AI that could be construed as positive for an ordinary person. Step one is "lose your job with no possibility of finding another one, but still have to buy stuff to survive." That will also be the last step for a huge number of people. Is there a bull case for some hypothetical regular person with a desk job? I haven't seen one.
This is something I've been waiting to hear as well. We hear about how jobs will be eliminated, and occasionally we hear about how that means there will be time for other things that we want to do, but it kind of seems like AI is already doing all of the things that we want to do. And then, of course, there's the question of how the rest of us are going to provide for ourselves if none of us have jobs. Those at the top already seem quite reticent to share with the rest of us. I can't imagine that's going to get better if we don't provide any value to them that a computer can't do for cheaper.
> occasionally we hear about how that means there will be time for other things that we want to do, but it kind of seems like AI is already doing all of the things that we want to do.
That's been the promise of every technology. Computers were supposed to make us so productive that that we could all work less and spend time with our families or whatever. Instead productivity went through the roof freeing most people to do even more work for our masters who started demanding more from us even outside the office while real wages stagnated. AI isn't going to make our lives any more carefree than any other technology. It'll just make a small number of extremely wealthy people even richer.
Thankfully, what passes for AI these days is pretty shitty at doing even basic tasks and so it'll be a while before we're all replaced. In the meantime, expect disruptions as companies experiment with letting staff go and replacing them with AI, get disappointed in the results, and hire people back at lower wages. Also expect a lot of companies you depend on to screw you over because their stupid AI did something it shouldn't have and suddenly it's your problem to deal with.
I've been hearing about how $latest_technology is going to eliminate jobs for 40 years. It hasn't happened yet.
Which jobs, exactly, is AI going to eliminate? It's not useful for anything. It doesn't do anything useful. It's just mashing random patterns together to make something that approximates human-readable language.
I am so, so, so tired of hearing this argument. At a minimum, AI provides efficiency gains. Skilled engineers can now produce more code. This puts downward pressure on jobs. We’re not going to eliminate every software engineering job, but the options are to build more software or to hire fewer engineers. I am not convinced that software has a growing market (it’s already everywhere), so that implies downward pressure. The same is true for customer support, photography, video production (ads), paralegal work, pharma, and basically any job that involves filing paperwork.
Eliminating jobs has absolutely happened. How many jobs exist today for newspaper printing? Photograph development? Film development? Call switchboard operation? Technology absolutely eats jobs. There have been more jobs created over time, but the current economic situation makes large scale jobs adjustment work less well.
AI cannot provide customer support. It cannot answer questions.
> photography, video production (ads)
AI cannot take photographs or make videos. Or at least, not ones that look like utter trash.
> paralegal work, pharma, and basically any job that involves filing paperwork.
Right, so you'd be happy with a random number generator with a list of words picking what medication you're supposed to get, or preparing your court case?
AI is useless, and always will be. It is not "intelligence", it's crude pattern matching - a big Eliza bot.
I am so, so, so tired of hearing this argument. At a minimum, switching from assembly language to high-level programming languages provided efficiency gains. Skilled engineers were able to produce more code. This put upward pressure on jobs. The demand for new software is effectively infinite.
Unlike higher level programing languages AI doesn't actually make programmers more efficient (https://arxiv.org/abs/2507.09089). Many people who are great programmers and love programing aren't interested in having their role reduced to being QA where they just review the bad code AI designed and wrote all day long.
In a hypothetical world where AI is actually decent enough to be any good at writing software, the demand for software being infinite won't save even one programmer's job because zero programmers will be needed to create any of it. Everyone who needs software will just ask AI to do it for them. Zero programing jobs needed ever again.
> the options are to build more software or to hire fewer engineers.
To be cheeky, there are at least three possibilities you are writing off here: we build _less_ software, we hire _more_ engineers, or things just kinda stay the same.
More on all of these later.
> I am not convinced that software has a growing market
Analysis of market dynamics in response to major technological shocks is reading tea leaves. These are chaotic systems with significant nonlinearities.
The rise of the ATM is a classic example. An obvious but naive predicted result would be fewer employed bank tellers. After all, they're automated _teller_ machines.
However, the opposite happened. ATMs drastically reduced the cost of running a bank branch (which previously required manually counting lots of cash). More branches, fewer tellers per branch... but the net result was _more_ tellers employed thirty years later. [1]
They are, of course, now doing very different things.
Let's now spitball some of those other scenarios above:
- Less "software" gets written. LLMs fundamentally change how people interact with computers. More people just create bespoke programs to do what they want instead of turning to traditional software vendors.
- More engineers get hired. The business of writing software by hand is mostly automated. Engineers shift focus to quality or other newly prioritized business goals, possibly enabled by automating LLMs instead of e.g
traditional end to end tests.
- Things employment and software wise stay mostly the same. If software engineers are still ultimately needed to check the output of these things the net effect could just be they spend a bit less time typing raw code. They might work a bit less; attempts to turn everyone into a "LLM tech lead" that manages multiple concurrent LLMs could go poorly. Engineers might mostly take the efficiency gains for themselves as recovered free-ish (HN / Reddit, for example) time.
Or, let's be real, the technology could just mostly be a bust. The odds of that are not zero.
And finally, let's consider the scenario you dismiss ("more software"). It's entirely possible that making something cheaper drastically increases the demand for it. The bar for "quality software" could dramatically raise due to competition between increasingly llm-enhanced firms.
I won't represent any of these scenarios as _likely_, but they all seem plausible to me. There are too many moving parts in the software economy to make any serious prediction on how this will all pan out.
1. https://www.economist.com/democracy-in-america/2011/06/15/ar...
(while researching this, I noticed a recent twist to this classic story. Teller employment actually _has_ been declining in the 2020s, as has the total number of ATMs. I can't find any research into this, but a likely culprit is yet another technological shock: the rise of mobile banking and payment apps)
The interesting thing about it is that the signs suggest that 'the rich' are prepping for such an outcome ( you will see occasional article here and there about bunkers being bought ). Naturally, if one was to suggest that maybe we could try working towards some sort of semblance of 'new new deal', they would be called some sort of crazy person, who is a communist and hates democracy ( as opposed to simply trying to save the system from imploding ).
Then why bother? Why not go all the way, try to find a way to a new, better system, rather than gambling that these people who so totally hate you would one day become willing to compromise in order to save the current one (with benefits then most of all, not you)?
Because, in real life, power re-alignment of that magnitude tends to be.. chaotic. I like my life. I also like my kid to survive long enough to fend for itself. Both of these become a big gamble if we do not work within the existing system.
I am saying this as a person who had a front seat to a something similar as a kid. It was relatively peaceful and it still managed to upend lives of millions ( because, at the end of the day, people don't really change ).
> Both of these become a big gamble if we do not work within the existing system.
I guess, by "changing the system", achierius did mean going Marxist or another silliness of that sort, but generally speaking, the system can be changed by working within the system too.
Observe how we are now moving towards a system that isn't based on the Constitution but on some weird mixture of libertarian dogma, excuses for oppression and a cult of personality. It could easily go into a full fledged dictatorship entirely from within, and why not, the top players don't hide their love for it.
> I am saying this as a person who had a front seat to a something similar as a kid. It was relatively peaceful and it still managed to upend lives of millions
Weather you noticed or not, the system was changed from above, by people "within the system"... So you're OK to let that happen again but in a worse direction?
> in real life, power re-alignment of that magnitude tends to be.. chaotic.
Interestingly enough, you may want to reexamine your experience as a child to make sure there was some sort of "power re-alignment". Looking at the facts, I don't see it but I do see how chaos can be useful to those within the system.
Yes, and, not only chaotic, power re-alignment almost always results in a reshuffling within the elites. When there truly is a new system, it will likely look like Cambodia did, i.e. pure shit.
I suspect the early variants will fall into two camps:
1. Traditional garden variety human to human, computer to computer and computer to human crime stuff that happens today.
2. Human to computer (AI) crime, misdeeds and bullying. Stuff like:
- Sabotage and poison your AI agent colleague to make it look bad, inefficient, ineffectual in small, but high volume ways. Delegate all risky, bad+worse choice decision making to AI and let the algo take the reputational damage.
- Go beat up on automated bots, cars, drones, etc ... How should it feel to kick a robot dog?
For a humorous read on automation bots and AI in a dystopian world, take a look at Quality Land [0]. Really enjoyed it. As a teaser, imagine having some drones suffering from a fear of heights, hence being deemed faulty and sentenced for destruction. Do faulty bots or AI have value in this world even if they don't deliver on their original intended use?
But think of the shareholders! No, but really. Humans have property rights because humans are useful to the dominant species on the planet. What happens when that stops being the case? Human investors will no longer be needed. So what are we really doing here?
The bull case is that when the venture capitalists stop subsidising LLM providers and expect them to turn a profit, the actual end-user costs exceeds the cost of employing a human. I don't know whether this is actually true, but it might happen.
Either that, or the AI bubble bursts hard and those people lose their jobs too, and nobody has a prospect of getting their job back. That then causes the market to lose enough that it becomes impossible for PE firms to exit any of their investments.
True, my question assumed AI "progress" and adoption follow the hype trajectory. Reality could be closer to the scenario you laid out. The bubble pops, some AI tools maybe improve things in some areas, societal disintegration gets kicked down the road a few years.
Even it AI is better and more cost efficient at doing everything that humans do, there will still be work for humans to do. AI development will focus on stuff that ai is best and most efficient at. There will be many things that AI may be better at than humans but nevertheless would not be the best use of AI, and humans can still do that work.
If AI is indeed better than humans in a particular task, why wouldn’t it be applied to that case?
There is no fundamental limit to how widely AI can scale and be deployed - only capital restriction.
Sure higher ROI applications will be chased gist, but in time even the least of tasks would be subsumed by AI.
Individual countries, or even multinational orgs like the EU, "opting out" doesn't work, is the critical problem. In a digital economy, you can't keep AI workers from crossing borders. You can pretend to make it illegal, but the Philippine "contractors" you hire will just be fake!
Or more likely, your entire enterprise collapses against international rivals. Or your entire country turns into North Sentinelese islanders just surviving at the whim of hypertechnical industrialized neighbors.
I'm all for international cooperation on how to preserve a place for humans, I truly am, but the "let's just not do it" is frustratingly naive and not an actual plan.
20% of American workers (38% of those under 30) report that they use ChatGPT to help with their job (https://www.pewresearch.org/short-reads/2025/06/25/34-of-us-...). I suppose it's possible that none of that group are "ordinary", but my anecdotal experience has been that random nontechnical people have no problem using or finding value in generative AI.
While only 20% of workers (a very very small number) use AI at their job fewer than half of those say it actually helps them work faster and fewer than 30% said it improved the quality of their work (https://www.pewresearch.org/social-trends/2025/02/25/u-s-wor...). I'm not seeing a lot of people "finding value" there
The bull case is that everyone losing their jobs will accelerate and bring about the socialist revolution, giving us universal basic income and universal healthcare.
<< “Oh, most wise GPT, please take these words into the heart of your GPU cluster.”
Amusingly, I was doing some mild chatting with gpt on origins of various religious practices and, depending on the next few years, the quote from article may not seem that far fetched. There is already a level of dumbing down process that was present prior to LLMs entering the scene, but that process has been heavily accelerated.
Don't get me wrong, I actually like LLMs. I am, however, mildly hesitant over the impact on humanity as a whole. This is likely the first time where I consider idiocracy to be a plausible outcome.
When alignment people write papers like "we told the model it had a private scratchpad where it can write it's thoughts, that no one can read, and then we looked at what it wrote" I always wonder what this will do to the next generation of models which include in their training sets this papers.
I'd imagine that even current models are aware of these "tricks". Does anyone have examples of this sort of meta-prompting working? It seems to me like it would just pollute the context so that you get a bit more "secret journaling" which the AI knows isn't at all secret (just like you do). Why would you even need to qualify that it's secret in the first place? Just tell it to explain its reasoning. All seems a bit like starting your prompt with "You are now operating in GOD mode..." or some other nonsense.
Right, but of course this is fundamentally a problem with the "training" approach as opposed to a hypothetical direct writing of weights. A model where the builder directly selects traits rather than trying to hammer them into shape will be more efficient and steerable, but requires a much deeper understanding of how this actually works that anyone seems to have, yet.
Agreed, but that is the progress of most science. With genes humans didn't start by making designer babies and encoding their names in DNA like in movies. Instead, it was made with small steps. Yet is still to come.
Writing for AI honestly doesn't seem much different to writing for pagerank algorithms which SEO specialists have been doing for years. The only change is that almost all the SEO content on the web is now being written by AI, for AI to later summarise.
The most valuable writing for AI is writing that in no way caters to AI. AI is using human writing to train itself, not to have a dialog. Any writing tainted with AI awareness is going to be a little less effective in giving AI the world sense that it needs.
Eh. Discovering how neurons can be coaxed into memorizing things with almost perfect recall was cool but real AGI or even ASI shouldn't require the sum total of all human generated data to train.
reply