I have very little knowledge of how transistors shuffle ones and zeros out of registers. That doesn't prevent me from using them to solve a problem.
Computing is always abstractions. We moved from plugboards to assembly, then to C, then to languages that manage memory for you -- how on earth can you understand what the compiler should be doing, or what it is actually doing, if you don't deal with explicit pointers on a day-to-day basis?
We bring in libraries when we need code. We don't build our own database; we grab an existing one, and we just do "apt-get install mysql" -- but then we moved on to "docker run", or perhaps we invoke it via the AWS CLI. Who knows what Terraform actually does when we declare that we want a resource?
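To make the layering concrete, here is the same outcome -- a running MySQL -- at three levels of abstraction (a sketch; the names, passwords, and instance sizes are placeholders):

    # On the host itself, via the package manager
    sudo apt-get install mysql-server

    # One level up: the host only needs Docker
    docker run -d --name db -e MYSQL_ROOT_PASSWORD=example mysql:8

    # Another level up: ask AWS to run it on hardware you never see
    aws rds create-db-instance \
        --db-instance-identifier example-db \
        --engine mysql \
        --db-instance-class db.t3.micro \
        --allocated-storage 20 \
        --master-username admin \
        --master-user-password example

Each step down that list demands less knowledge of what is actually underneath.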
I was thinking the other day about how abstractions like AWS or Docker are similar to LLMs. With AWS you click a couple of buttons and you have a data store; you don't know how to build a database from scratch, and you don't need to. Of course, "to build a database from scratch you must first create the universe".
Some people still hand-craft assembly code to great benefit, but the vast majority don't need to in order to solve problems, and most couldn't anyway.
This musing was in the context of what we do if/when AWS data centres are not available. Our staff are generally incapable of working in a non-AWS environment -- something we have deliberately cultivated for years. AWS Outposts are one option; or perhaps we should run a non-AWS stack that we fully own and control.
Is relying on LLMs fundamentally any different from relying on AWS, or apt, or Java? Is it different from outsourcing? You concentrate on your core competency -- understanding the problem and delivering a solution -- not managing memory or running databases. This comes with risk, as all outsourcing does; and if outsourcing to a single supplier you don't and can't understand is an acceptable risk, then why isn't relying on LLMs?
There's never been a case in my long programming career where knowing the low-level details has not benefited me. How much it helps varies, but it is always positive.
When you use LLMs to write all your code, you will lose (or never learn) those details, and your decision-making will not be as good.
I think there is a big difference. You can and should have both kinds of knowledge, whether you're a lowly programmer or a CEO. Knowing the details will always help you make better decisions.
That’s the credo I’ve lived my life by, but I’ve come to believe it’s not entirely true: knowing the details can lead you down ratholes and blur the lines between requirements, solutions, and so on. Some of the best execs I’ve met are good precisely because they focus on the business layer and delegate the details to others.
I can’t do that. But I’m coming around to the value in it.
I've seen cases in my career where knowing the low-level details was actually a hindrance.
People start to fight the system, hand-optimising for an extra 2% of performance while adding 100% extra maintenance cost, because nobody else understands their hand-crafted assembler or C code.
There will always be a place for people who do that, but in the modern world it's usually cheaper to throw money at hardware than to spend time optimising -- if you control the hardware.
If things run on customers' devices, though, you need the low-level gurus again.
I think it's a lot like outsourcing. And, expected quality of outsourcing aside, more importantly: I don't see outsourcing as the next step up the ladder of programming abstraction. It's having someone else do the programming for you, at the same abstraction level.