> Something strange is happening in the world of software: It’s slowly getting worse. Not all software, but a lot of it. It’s becoming more sluggish, less responsive, and subtly less reliable than it was a few years ago.
> What baffles about these software moans is that Apple’s hardware is ever-more refined. While far from flawless, the entire lineup is now (finally) largely free from these “foundational” issues you see in software.
The answer to this is very simple (at some level): It is impossible to produce mechanical designs that are the equivalent of the software engineering abominations the world is stuck with today.
Going back to the iPhone 3G days, I remember coding a genetic solver in Objective-C that was an absolute dog. I optimized as much as I could but could only squeeze so much performance out of it.
I finally gave up and re-coded the entire thing in clean C++. The result was somewhere in the order of 290 times faster. Objective-C was an absolute dog because, at a minimum, every data type you had to use was an object-oriented mess. This was absolutely unnecessary, the proof being that my C++ equivalent performed exactly the same function and lacked nothing. In fact, it was so fast that it allowed the genetic solver to run in real time in the context of this app. Objective-C, er, objectively speaking, had no reason to exist. Yet it did, and lots of people believed it had to exist, and likely many still do today. They are wrong.
Another way to look at this is that the clean solution used somewhere in the order of 200 to 300 times less energy. This is something hardware engineers are keenly aware of. Software consumes energy, and badly written software consumes more. Think of it this way: a bit transition costs energy. Inefficient code requires more bit transitions per unit time, and therefore more energy and more power dissipation.
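The bit-transition point is essentially the standard first-order model for dynamic power in CMOS logic. A rough sketch (symbols are the conventional ones, not from the post above):

```latex
% First-order dynamic power of CMOS logic:
%   \alpha : activity factor (fraction of nodes switching per cycle)
%   C      : switched capacitance
%   V      : supply voltage
%   f      : clock frequency
P_{\text{dynamic}} \approx \alpha \, C \, V^{2} f
```

Code that does more switching per unit of useful work raises the effective activity factor, which shows up directly as power.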
Imagine the mechanical engineering equivalent. Some monstrosity where, instead of using a simple screw and a nut, one ends up using a complex fastener that has layers upon layers of mechanisms and unnecessary complexity. A screw with the ability to grow and shrink to any length and diameter, including all the wrenches, tools, nuts, washers and devices needed to install, tighten and test them. A simple product that could mechanically fit in the palm of your hand would quickly become the size of a car.
And so, in this way, mechanical and industrial design is always "low level", like always coding in assembler (not proposing we do that). Sure, materials and manufacturing techniques improve, yet, at the most fundamental level, excellent mechanical and industrial design is clean, uncomplicated, easy to understand and easy to manufacture. It's machine language, or C. Not Objective-C.
Software engineers who are not exposed to this reality, through no fault of their own, are not aware of these issues. I don't blame them. If most of what someone sees in school amounts to Python, their view of reality will be skewed.
My son is studying Computer Science at one of our top universities and has less than a year to graduate. He is home now for the summer and because of the virus. I've been working on several embedded projects and showed him a few tricks to improve performance. He was amazed to see how you can optimize code for execution speed (with sometimes dramatic results) by making simple high-level choices. For example, a down-counter in a "for" loop ("i=10; i!=0; i--") is generally faster on simple processors than the typical up-counter ("i=0; i<10; i++"). This is because many processors have an instruction like "DJNZ" (Decrement and Jump if Not Zero) that decrements and branches in a single step, without loading a comparison value, engaging the ALU for a separate compare, or fetching that bound over and over again from memory.
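To make the two loop shapes concrete, here's a minimal sketch in C. The function names are mine, and whether the down-counter actually wins depends entirely on the target processor and compiler, so measure on your hardware:

```c
#include <stdint.h>

/* Sum an array with a down-counting loop. On small cores (e.g. the
 * 8051's DJNZ, or an ARM subs/bne pair), comparing against zero can
 * save an instruction per iteration versus comparing against an
 * upper bound held in a register or memory. */
uint32_t sum_down(const uint32_t *a, uint32_t n)
{
    uint32_t total = 0;
    for (uint32_t i = n; i != 0; i--)
        total += a[i - 1];
    return total;
}

/* The conventional up-counter, for comparison. */
uint32_t sum_up(const uint32_t *a, uint32_t n)
{
    uint32_t total = 0;
    for (uint32_t i = 0; i < n; i++)
        total += a[i];
    return total;
}
```

Both produce identical results; only the loop-control instructions differ.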
Software engineering doesn't have the same physical constraints found in mechanical engineering unless someone is coding for an application and a domain where excess really matters. One example of this is writing code for space hardware, where, aside from reliability, you have to be aware of the fact that every bit you are flipping will cost a quantum of energy and you might not have enough to be careless about it. Energy quickly translates to mass in the form of large cooling radiating surfaces that must be launched into space along with the hardware.
It's an interesting problem, this issue of bad or bloated software. I'm not sure there's a solution. There won't be until there's a forcing function that requires a change in perspective and approach.
EDIT:
To be clear, my point isn't to pick on a specific language or languages, but rather to use easy examples to highlight one of the problems that has been building up since the introduction of object-oriented programming. I remember the early days of OO. It was a mess. Much like the early days of desktop publishing, when people loaded up documents with every font they could find.
Objective-C will always be an absolute dog for high-performance work because of how everything has to go through dynamic dispatch and indirection steps. It's why even Apple is trying to replace it with Swift (that name is not coincidental!). And Mozilla's Rust delivers even better performance than Swift while supporting a great set of principled, higher-level language features if you want them.
The average overhead of a message send using the runtime Apple ships these days is insanely low: on the order of two nanoseconds per call. With some minor insight you can bring that down to well under a nanosecond, literally single-digit clock cycles, just by testing whether the indirection is necessary. (For reference, this is about on par with, or on days I'm feeling a bit confident even better than, a C++ virtual method call.) There is no way Objective-C is hundreds of times slower than C++ unless there is something else going on.
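A rough C sketch of the "test if indirection is necessary" idea, using a plain function pointer as a stand-in for a dynamic method lookup. The `lookup` function and all names here are hypothetical illustrations, not Apple's runtime; the real analogue is caching an Objective-C IMP before a hot loop:

```c
typedef double (*op_fn)(double);

static double square(double x) { return x * x; }

/* Stand-in for a dynamic lookup (a real runtime would search a
 * method table; this sketch just returns a fixed function). */
static op_fn lookup(const char *selector)
{
    (void)selector;
    return square;
}

/* Re-resolves the target on every iteration. */
double apply_slow(const char *sel, const double *v, int n)
{
    double acc = 0.0;
    for (int i = 0; i < n; i++)
        acc += lookup(sel)(v[i]);
    return acc;
}

/* Hoists the lookup once, then calls directly: the "test whether
 * the indirection is necessary" pattern. */
double apply_fast(const char *sel, const double *v, int n)
{
    op_fn f = lookup(sel);
    double acc = 0.0;
    for (int i = 0; i < n; i++)
        acc += f(v[i]);
    return acc;
}
```

Both compute the same result; the fast version simply pays the lookup cost once instead of per call.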
You have to test. You also have to be intimately familiar with what your compiler is doing. I can't give you an example right now, but the resulting compiler-optimized code can change from build to build as the source code changes. If you want repeatable and reliable performance you have to drive the optimization yourself rather than hoping the compiler does the work for you.
I very much doubt an optimizing compiler is going to take an up-counting for loop and convert it to a down-counting loop. One thing is certain: if such a thing exists, there is no way I would trust it to be consistently true across compilers and architectures.
There are also fundamental choices that can have a serious effect on performance and reliability.
What's faster, using a struct or a bunch of individual variables?
Say you have three different copies of the same data corresponding to three different items. What's the fastest way to represent, store and process the data?
Hypothetical example: You have three robots generating telemetry that you must track with a real-time embedded system in order to coordinate their actions and react to the environment. Each robot generates one hundred variables you must track.
Is it better to define a struct and then create and populate three instances, one per robot, or is it better to create three hundred distinct variables, one hundred per robot (i.e., robot01_temperature_01, robot01_voltage_01, etc.)?
Which one results in code that executes faster, with lower resources and consumes less power?
These are not decisions an optimizing compiler is going to make.
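For what it's worth, here is a minimal sketch of the struct-based answer to the robot question. Field names and counts are illustrative, not from any real system:

```c
#define NUM_ROBOTS   3
#define NUM_SENSORS 100

/* Grouping each robot's variables in one struct keeps related data
 * contiguous in memory (friendlier to the cache) and lets one
 * routine process any robot by pointer, instead of duplicating
 * logic across hundreds of individually named globals. */
struct robot_telemetry {
    float sensor[NUM_SENSORS];   /* temperatures, voltages, ... */
};

static struct robot_telemetry robots[NUM_ROBOTS];

/* One routine serves every robot; with distinct variables like
 * robot01_temperature_01 this would need a per-robot copy. */
float robot_max_reading(const struct robot_telemetry *r)
{
    float max = r->sensor[0];
    for (int i = 1; i < NUM_SENSORS; i++)
        if (r->sensor[i] > max)
            max = r->sensor[i];
    return max;
}
```

With three hundred loose variables, every access is a distinct symbol the compiler must handle individually, and iterating over "all sensors of robot N" becomes impossible without generated code.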
Optimizing compilers will occasionally make these decisions, but often they don't have to, because the processor makes them itself. Modern Intel processors already do macro-fusion for the common compare-and-branch pattern, so micro-optimizing by changing the direction your loop runs in is generally not a good idea unless you really know what you are doing, have tested the code, and intend to never change your compiler or processor ever again. (And it still depends on how your CPU is feeling that day.) In general, the kinds of things you are mentioning produce, in the best case, a minor win over the optimizing compiler; if you're not checking whether they actually work, they often make no difference or do much worse. And they usually come at a huge readability/maintainability cost, so you really have to ask yourself whether these superstitious micro-optimizations are worth it.
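The "have tested the code" part can be sketched as a crude harness. This is illustrative only: a careful benchmark needs warm-up runs, repetition, and steps to keep the compiler from eliminating the loop entirely (the `volatile` sink below is one such step).

```c
#include <time.h>

#define N 10000000u

static volatile unsigned sink;   /* volatile defeats dead-code elimination */

/* Time one pass of either loop direction, in seconds. */
double time_loop(int down)
{
    clock_t t0 = clock();
    unsigned total = 0;
    if (down) {
        for (unsigned i = N; i != 0; i--)
            total += i;
    } else {
        for (unsigned i = 0; i < N; i++)
            total += i;
    }
    sink = total;                /* force the work to be observable */
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}
```

Run each variant several times at your real optimization level and compare. On a modern desktop CPU the difference is usually lost in the noise; on a small microcontroller the down-counter may measurably win.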