It is not "an agent" in the sense you are implying here, it does not will, want, plan, none of those words apply meaningfully. It doesn't reason, or think, either.
I'll be excited if that changes, but there is absolutely no sign of it changing. I mean, explicitly: the possibility of thinking machines is where it was before this whole thing started - maybe slightly higher, but more so because a lot of money is being pumped into research.
LLMs might still replace some software workers, or lead to some reorganising of tech roles, but for a whole host of reasons, none of which are related to machine sentience.
As one example - software quality matters less and less as users get locked in. If some juniors get replaced by LLMs and code quality plummets, causing major headaches and higher workloads for senior devs, then as long as sales don't dip, managers will be skipping around happily.
I didn't mean to imply AI was sentient or approaching sentience. Agency seems to be the key distinction between it and other technologies. You can have agency, apparently, without the traits you claim I imply.
Ah, ok, you must be using agency in some new way I'm not aware of.
Can you clarify what exactly you mean then when you say that "AI" (presumably you mean LLMs) has agency, and that this sets it apart from all other technologies? If this agency as you define it makes it different from all other technologies, presumably it must mean something pretty serious.
This is not my idea. Yuval Noah Harari discusses it in Nexus. Gemini (partially) summarizes it like this:
Harari argues that AI is fundamentally different from previous technologies. It's not just a tool that follows instructions, but an "agent" capable of learning, making decisions, and even generating new ideas independently.
> If this agency as you define it makes it different from all other technologies, presumably it must mean something pretty serious.
Yes, AI does seem different and pretty serious. Please keep in mind the thread I was responding to said we should think of AI as we would a hammer. We can think of AI like a tool, but limiting our conception like that basically omits what is interesting and concerning (even in the context of the original blog post).
That's the thing, you don't have to say it, sort of by definition. It's always implied, as is its opposite. Imagine a video or article or whatever, titled:
"How to get a Mediocre Job and live an Unremarkable Life"
Where the content of the article or video was, simply:
"Be born a human. You'll have a 99.99% chance of succeeding!"
It'd be pretty grim, and it'd last about 15 seconds to read or view, so no-one makes that video. Instead, they make videos about becoming a pilot at 21, or owning your own house at 23, etc.
It's hard to accept these percentages as real, because they're roughly the opposite of what is presented on the content-farm platforms. That's partly, as I said, because the story is too grim, but also because the "unremarkable" people are not the ones producing "content".
I like Graham's writing, and defend it elsewhere in this thread, but that has such an obsequious and somehow macho smack to it, wow. One imagines Hercules chiseling his abs. If that's what his writing does for you, fair enough, but it sure is intense.
The commenter did not say Paul Graham writes quickly, so I'm not sure why you keep fixating on that point.
> I pity you and the likes of you who are coming here to shit all over as if there aren't any better things to do during the day.
Good lord. They said they like his writing, but found the particular tweet you shared pretentious. Your response to that light criticism is so disproportionate it reads as sycophantic. This is a thread about good writing; I think criticizing anything is fair game.
Successful people outgrowing their jodhpurs and losing their reason is a thing, sure, but that does not apply in this specific case. Tech writing is still writing, my friend.
Have you read ANSI Common Lisp? Or even the introduction to it?
I have criticisms of Mr. Graham, but the man can write, and consistently. Some of the essays can be a tad too terse for me at times, but when he gets it right, his stuff can be exquisite.
Another example that comes immediately crashing to mind is Donald Knuth - have you read any of his tech writing? It's glorious.
Anyone who wants to claim there's a hard line between writing worthy of "literary merit" and tech writing is going to have their work cut out for them with those two already.
You mean, the CEO is only pretending to make the decisions, while secretly passing every decision through their LLM?
If so, the danger there would be... Companies plodding along similarly? Everyone knows CEOs are the least capable people in business, which is why they have the most underlings to do the actual work. Having an LLM there to decide for the CEO might mean the CEO causes less damage by ensuring consistent mediocrity at all times, in a smooth fashion, rather than mostly mediocre but with unpredictable fluctuations either way.
All hail our LLM CEOs, ensuring mediocrity.
Or you might mean that an LLM could have illicitly gained control of a corporation, pulling the strings without anyone's knowledge, acting on its own accord. If you find the idea of inscrutable yes-men with an endless capacity to spout drivel running the world unpalatable, I've good news and bad news for you.
I have never seen anything resembling emergent behaviour, as you call it, in my own or anyone else's use. It occasionally appears emergent to people with a poor conception of how intelligence, or computers, or creativity, or a particular domain, works, sure.
But I must push back: there really seem to have been no instances where something like emergent behaviour has been observed. They're able to generate text fluently, but are dumb and unaware at the same time, and have been from day one. If someone really thinks they have solid evidence of anything other than this, please show us.
This is coming from someone who has watched commentary on quite a sizeable number of Stockfish TCEC chess games over the last five years, marvelling at the wonders of this chess super-intelligence. I am not against appreciating amazing intelligences; in fact I'm all for it. But here, while the tool is narrowly useful, I think there's zero intelligence, and nothing of that kind has "emerged".
It might be the ultimate irony if we were training them. But we aren't, at least not in the sense that we train dogs. Dogs learn, and exhibit some form of intelligence. LLMs do not.
It's one of many unfortunate anthropomorphic buzzwords that conveniently win hearts and minds (of investors) over to the notion that we're tickling the gods, rather than the more mundane fact that we're training tools for synthesising and summarising very, very large data sets.
I don't know how the verb "to train" became the technical shorthand for running gradient descent on a large neural network. But that's orthogonal to the fact that these stories are very, very likely part of the training dataset and thus something that the network is optimized to approximate. So no matter how technical you want to be in wording it, the fundamental irony of cautionary tales (and the bad behavior in them) being used as optimization targets remains.
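For what it's worth, "training" really is just an optimisation loop. Here is a minimal sketch, assuming PyTorch and a toy next-token model (purely illustrative, not any lab's actual pipeline): the weights are repeatedly nudged so the network's predictions better match the text in the dataset, whatever that text happens to contain - cautionary tales included.

    import torch
    import torch.nn as nn

    vocab_size, dim = 1000, 64

    # A toy next-token predictor standing in for "a large neural network".
    model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    # Toy "corpus": random token ids standing in for whatever text was scraped
    # into the dataset (sci-fi stories about rogue AIs and all).
    tokens = torch.randint(0, vocab_size, (1, 33))
    inputs, targets = tokens[:, :-1], tokens[:, 1:]

    for step in range(100):
        logits = model(inputs)  # predicted next-token distributions
        loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
        optimizer.zero_grad()
        loss.backward()         # gradients of the prediction error
        optimizer.step()        # nudge weights toward reproducing the corpus

Whatever word you prefer for that loop, the point stands: the corpus is the optimisation target, so stories warning about bad behaviour are also part of what the network is pushed to approximate.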
Have you considered throwing your thoughts down in longer form essays on the subject somewhere? With all the slop and hype, we need all the eloquence we can get.
You had me at "probabilistic regurgitation of tropes", and then you went for the whole "grainy facsimile" bit. Sheesh.
I wonder if we'll look back on this period in a couple of years and feel a nostalgic fondness as we think of the fateful moment when people working in software were forced to pull the wool from their eyes and look at the fact that businesses really, really, really dislike losing huge amounts of money paying people to make the software their businesses completely depend on.
I mean, I'm guessing that's true. It'd make a lot of sense if they vehemently disliked that. It's hard to make sense of it all otherwise, really.
Non-technical business owners have always had deep anxiety about software development. They don’t understand it, it’s very expensive, timelines can explode, and a hack or leak can materially damage their business.
A reasonably smart CEO can pretty much understand, in depth, every aspect of their business. But when it comes to tech, which is often the most essential part, they are left grasping, and must rely on the expertise of other people, and thus their destiny is not really in their control, other than by hiring the best they can and throwing money at R&D.
AI, and the hype around it, plays into those anxieties, and also makes them feel like they have control over the situation.
I have noticed that in biotech startups, the Chief Scientific Officer (CSO) is often given much more authority than the CTO is in tech startups.
> A reasonably smart CEO can pretty much understand, in depth, every aspect of their business. But when it comes to tech, which is often the most essential part, they are left grasping, and must rely on the expertise of other people
I honestly really don't understand why this would be the case. Software isn't more complicated than any of the other aspects of the business. I think a "reasonably smart" CEO could just ... learn how it works, if it's really so critical to their business?
It's been a long time since I worked for a CEO who didn't understand software.
If you run a trucking company, or a retail business, or a food company, etc. I believe you can understand to a fairly detailed level the logistics and “secret sauce” involved that makes the business tick, even if you are not the core employees operating with the skills and expertise.
But if you are a non-technical CEO and your core business is, say, enterprise SaaS software, you don’t fundamentally understand what the heck is going on, and if you have a key deadline and blow it, don’t really understand why. So if a new VP says they can cut your costs dramatically by offshoring everything to India, etc., or replace half these expensive engineers with AI, it seems as plausible as anything else. Especially given the fawning press and hype, and salesmen pitching you all day.
The part of this argument that doesn’t make sense to me is that you’d think any CEO would have a reasonably decent bullshit detector, but maybe since they have to shovel it out so much, they forget how to detect it in others.
> But if you are a non-technical CEO and your core business is, say, enterprise SaaS software, you don’t fundamentally understand what the heck is going on
Why the hell not? It's not that hard! I think this sounds like laziness, honestly.
It is no harder to understand how your enterprise SaaS software works than it is to understand what makes your retail business or trucking company tick. If you're running a company, I really think you should understand how the important pieces of your business work... Is that really so controversial?
Every CEO I have worked for - and I am an executive - doesn’t know shit about software engineering. I mean, that’s what I’m paid a lot of money for. I just explain things in metaphors, but the non-tech execs are clueless.
CTO can be a funny position at companies. It sometimes does mean head of engineering and responsible for technical direction at a pretty granular level. But it often also can mean being sort of the public face for the company's technology vision. I've definitely seen companies where the two are largely one and the same. I've also seen companies where the CTO was more the outward-facing vision person.
If senior management feels their destiny is not in their control, they're doing it wrong. The best management don't always have the most expertise - but the good ones have an uncanny ability to know when they should defer, when/who to consult with, who to trust, and what to delegate.
Businesses really, really, really would like to just have profit without any expenses. Ideally all of the money in the world as profit, please. And no taxes either. Just skip straight to splitting all the money in the world between the shareholders.
If you think running the output of an LLM as a serverless function in some cloud is a good way to differentiate your business, build a moat, and make a profit, good luck!