Brilliant Hardware in the Valley of the Software Slump (craigmod.com)
183 points by mpweiher on June 12, 2020 | 97 comments



The problem is that both hardware and software are garbage.

Spectre/Meltdown & friends are just the tip of an iceberg. We have layers & layers of indirection/abstraction everywhere. We have hardware that lies and tells you it has certain properties when in reality it doesn't (examples: sector sizes in hard drives/NVMe devices, processors still pretending that they behave like a PDP-11), and we have hardware that is flat out broken. We try to fix those issues in software.

But in the software, we have another dump of workarounds, dependencies and abstractions, with a sprinkle of backward compatibility. We are now creating "minimalist" applications with a fraction of the functionality of software from 30 years ago, but using so many layers that the total amount of code needed to make them work is many orders of magnitude larger than what we had back then.

I know that most programmers never worked with systems where it's very, very easy to debug the whole stack and which you can learn in a short period, but it's amazing when you have knowledge of EVERY part of the system in your head.

There are some good things going on (like the push to replace C with something that has similar performance characteristics but without its flaws), but it's not enough.

Here are two things worth watching:

https://www.youtube.com/watch?v=pW-SOdj4Kkk - Jonathan Blow - Preventing the Collapse of Civilization

https://www.youtube.com/watch?v=t9MjGziRw-c - Computers Barely Work - Interview with Linux Legend Greg Kroah-Hartman


> The problem is that both hardware and software are garbage.

I think it's incredible that, in my lifetime, computers went from giant mainframes with dedicated hard-line terminals to always-connected supercomputers in everyone's pocket, worldwide. Furthermore, anyone can use the internet to learn how to program.

Maybe that's garbage compared to some mythical ideal but in terms of impact on the world it's incredible.

> I know that most programmers never worked with systems where it's very, very easy to debug the whole stack and which you can learn in a short period, but it's amazing when you have knowledge of EVERY part of the system in your head.

Well, you can tell from the above that I was around then. I started programming with a single manual and the Beagle Brothers "Peeks, Pokes and Pointers" cheatsheet[1].

People forget that the software itself had to do much less than it does today. Here's just one angle to judge it by: security. We did not have a worldwide interconnected network with so many people trying to steal data from it. We all used rsh and packets were flying around in cleartext, no problem. But today, virtually all software has to incorporate TLS.

And far fewer people built that simpler software. EA's first titles were written by one or two people. Now a typical EA title has hundreds of people working on it.

Things will get better than where they are today. In the future, the industry will have to invest more money in "10X" productivity and reliability improvements. Eventually, I think that will happen as productivity continues to slow on large codebases.

[1] - https://downloads.reactivemicro.com/Apple%20II%20Items/Docum...


And yet, even doing something as basic as a smart doorbell requires a backend server somewhere (essentially a mainframe) and all kinds of NAT hole-punching, UPnP and proprietary push notifications despite the supercomputer in your pocket technically being able to listen for incoming TCP or UDP packets from the doorbell directly.

The "supercomputer" processing power is also being wasted on all kinds of malicious and defective-by-design endeavors such as ads, analytics, etc (install any mainstream app and look at the network traffic, 90% of it will be for analytics and can be blocked with no ill effects).

Despite these supercomputers being 10x faster than the early ones (back in the iPhone 3G days), we somehow lost the ability to render a non-stuttering 60fps UI, even though modern UIs are less rich and consist mostly of whitespace.

> anyone can use the internet to learn how to program

I think there used to be a "golden age" of this, when the resources were all available for free and at the same time the stacks were manageable (think basic PHP deployed on shared hosting, or Ruby, or Django), whereas nowadays it is considered "wrong" if you don't use Kubernetes, 10 microservices and 100+ NPM packages just to serve a "Hello world" page to a browser.


It requires all of those things because users want a doorbell that will easily connect to WiFi and a nice phone app, that includes not only movement alerts but also a miniature social network. And the startup making the doorbell wanted to make a quick proof of concept to get funding and then build organically using a flock of less expensive younger developers.

Alternatively, a company could invest money into writing something that looks beautiful to software developers and that you could SSH into. The architecture would be sound because several gray-bearded devs would have talked through the relative merits of different approaches. It could offer so much more functionality if the user were willing to acquire minimal knowledge of Linux. The only problem is that the only people interested in it would be other software developers.

We're building stuff for people, not other geeks. Businesses invest in software as a means to an end, and if the backend is ugly but accomplishes that end, then it's successful.


> We're building stuff for people, not other geeks.

Of course, but everyone, including the end user, would greatly benefit from a less messy and more elegant, consistent "computing stack". The programmers would be (much) happier, the code would be smaller and better, the product would be better and cheaper and the freed resources could be allocated elsewhere.

These improvements (and IMHO there is huge potential for such improvements on all the layers) would bring the same kind of benefits that better automation has brought.


The end user does not care about the "computing stack" at all. "Does it work" and "is it cheap enough" are the main considerations.

And "the product would be better and cheaper" is really wishful thinking. I've worked in this industry for ~4 decades, so I really remember vividly working on those "less messy and more elegant" systems, and no, building a remote-controlled doorbell with life video on one of them would neither be better nor cheaper.

The sheer idea of doing this on a 128B machine (Olivetti Programma 101, my first machine) is ludicrous. The idea of processing any kind of video on a DG Nova Eclipse (my second) is... ambitious.

The first time we had machines with the processing power to do useful things with video at an at least semi-reasonable price was sometime around the introduction of the Pentium, with dedicated hardware to process the incoming camera stream. I happen to know because I moved a video package that required a cluster of SGI machines to said P90.

Yes, the code was small. The video processing core was also the result of 6 rather sleepless weeks, squeezing every single cycle out of the CPU. I couldn't just grab OpenCV and make it happen. It also involved lots of spectacular hacks and shortcuts that happened to work in the environment in question, but would break down for a general purpose video processing system.

Around that same time the world also just barely had figured out the basic ideas behind making WiFi work usefully. But let's assume we would've just hard-wired that camera. If you wanted to do that with an embedded system, you wrote your own networking stack. That takes about 18 months to make work in anything that's not an extremely controlled environment - it turns out people's ability to misread networking specs and produce barely non-compliant systems is spectacular. (Source: Been there, done that ;)

So now we have the most basic system working. We have, however, no way of storing the video stream without more dedicated hardware, because realtime compression is not possible. (And if you think that dedicated hardware was less messy and more elegant... /lolsob)

Was the total codebase smaller? Yes. Were the programmers happier? F# NO. Would it have had anywhere close to the functionality that something like Ring has? Yeah, really, no. Its size would've been roughly a large desktop tower. Its cost would've been ~$5000.

The "good old days" really weren't.


A remote controlled doorbell with live video doesn't need video processing. It doesn't need to be digital. It doesn't need to be high resolution unless you're working on sensitive materials--for most people, low-quality analogue would be good enough.


At this point you're saying "if we built something that didn't have most of the features of a Ring camera, it would be much simpler".

That's a tautology.

It's also ignoring market realities. We had doorbell systems with crappy analogue cameras, and nobody wanted them because they were overpriced for what they did, and what they did wasn't particularly useful. (The "good enough" low-res analog cams made for a great guessing game of who's at the door, but they didn't actually suffice for recognizing who was at the door without them plastering their face right in front of the camera.)

And this is true for other consumer electronics too - of course you can build something inferior with less effort. But nobody wants that. (I mean, it's not like modern systems get built because somebody just wants to lose some money)

This leaves the conclusion that you're mostly arguing for it because you liked the developer "simplicity", and... it wasn't simple. It became simpler once we introduced computers. It became simpler as we built abstraction layers on top.

Yes, current systems are harder to understand top-to-bottom, and we paid for ease of development with increased systems complexity. But the rose-colored glasses of a "better past" are just that. Yes, we could understand the things we built more easily, but we couldn't build what we build today in that past.


These arguments have been made, mutatis mutandis, roughly since the beginning of computing. At least, certainly since the 70s/80s.

Gabriel articulated some of these in "Worse is Better" (89?) but it wasn't new then.


...what? The doorbell can't talk to the phone because the supercomputer in our pockets is not really ours - its functionality is gimped by the various software and operating system providers that control it from their respective motherships.


This is accurate, but is not the whole story. I’m running a terminal on iOS right now to ssh into my Linux server.[1] There are terminal emulators on Android too.

I wish there were more GUI apps centered around hybrid cloud/shell use cases. I would like to be able to make GUI widgets to do things in a ssh session on my server. I’m not sure how important it would be to run on the device; it could be a webapp I run on the server itself to trigger scripts. It’s a UI/UX that centers around touchscreen input, is reconfigurable, and can perform arbitrary commands or events server-side, which I find lacking. Anyone know of tools that scratch this itch?

[1] https://github.com/ish-app/ish


I ain't sure how "I can run a terminal on a phone" has much to do with "I have full control over the physical machine I paid for and ostensibly own". Unless you're sideloading via jailbreaking (which has a score of problems, not the least of which being that Apple is hell-bent on "fixing" jailbreaks, resulting in a cat-and-mouse game with no end in sight), your ability to run that terminal is exclusively the result of your device's manufacturer explicitly deeming that particular terminal (and the programs it executes) worthy.

Android is slightly better in this regard in the sense that it's (usually) easier to sideload applications, and sometimes even possible to modify or outright replace (most of) the operating system, but this, too, is subject to the whims of manufacturers rather than being something the user (and the user alone) controls.

----

On another note, I, too, would be interested in scratching that itch. It seems like it'd be straightforward to throw together.


My understanding is that Android is going to start disallowing execution of downloaded binaries; they'll need to be in an app's lib folder. See the Termux issues on GitHub for a discussion of this.


> It requires all of those things because users want a doorbell that will easily connect to WiFi and a nice phone app, that includes not only movement alerts but also a miniature social network.

You’re missing the point. The issue is that the underlying layer is more complex than it needs to be, not that the company needs to use that underlying layer to solve business requirements.

This is analogous to a freeway that’s too small.


> users want a doorbell that will easily connect to WiFi and a nice phone app, that includes not only movement alerts but also a miniature social network

I think "users" is a stretch here. Having worked at a couple $MEGACORPs and Unicorns, the bar for a feature being successful is very low. Most of the time, features are driven by Product Managers on product teams that come up with ideas. Validation of the success of these ideas (unless they are highly visible and core to the business), in my experience has been minimal (was there a regression? no, then great!), and don't even get me started on the numerous methodological issues.

> We're building stuff for people, not other geeks

I think computing is unique in how much we (the royal "we" here, and I have nothing more than anecdata to back these observations up, so take them with a grain of salt) focus on trying to hide the internals, almost as if it stems from an embarrassment with what computing is, as something only for "geeks". How often do you hear of musicians receiving censure for not making music that "other people" listen to, or artists receiving criticism for art that "regular people" consume? Obviously, business needs necessitate a tradeoff between beauty and functionality in any field, but despite the ubiquity of tech, it feels to be one that is uniquely embarrassed by the art and technique behind it. Maybe this is just an outgrowth of the "nerds are weird" culture of the '80s and '90s?

I think the reason that users put up with such bad software is twofold:

1. Computing is new, and the general population doesn't really understand what it means to be good at it yet. The general population has enough education with things like woodwork to understand what a shoddy table is, but not yet what shoddy software is. That said, I know several non-technical users that prefer using the iOS ecosystem because it has lower perceived input latency and higher perceived stability (much like the OP of the article), so users are certainly not blind to the problems of software.

2. Software as a field is deeply embarrassed about its technical internals. The fact of the matter is, we don't need to be worried about "our grandparents" being able to use our software anymore; the vast majority of young folk in industrialized countries have spent their whole lives growing up using technology. Yet, we are still obsessed with the creation of a dichotomy between "graybeards" and "normal people", or "geeks" and "average users". We need to stop treating users as these opposed forces, hidden from the innards of the technical beast, and instead embrace them as part of what it means to create a successful product.


> How often do you hear of musicians receiving censure for not making music that "other people" listen to, or artists receiving criticism for art that "regular people" consume?

All the time, when those artists have outside investors who are investing to make a profit off of their work. It's a recurring theme in pretty much every music documentary I've watched. Accessibility is frequently at odds with artistic development. At some point, people just want you to crank out the same boring (widgets/songs/art/software) that you've done a million times, and do it cheaply.

Indie rock artists who are just playing in bars to their friends don't have that economic pressure, and you're always welcome to build your own software for your friends along with maybe a few hardcore enthusiasts.


> Indie rock artists who are just playing in bars to their friends don't have that economic pressure, and you're always welcome to build your own software for your friends along with maybe a few hardcore enthusiasts.

I don't think it's as simple as "indie artists" and "artists with investors". There's room for innovation while making profit, and there's room to make catchy songs when you're an indie artist. In software, there seems to be a large divide between hobbyist code and industrial software. How many folks really write software for themselves and friends? I'd love to see a world where people grow up tinkering with software, so they can have an informed stake as a user instead of being a blind consumer.


> All the time, when those artists have outside investors who are investing to make a profit off of their work.

Prioritizing commercial gain over intellectual and creative stimulation and enrichment is, on that note, one of the biggest issues I have with the modern music industry. My views don't change much if you replace "music" with "film" or "television" or "journalism" or, to the point, "hardware" or "software".


I agree that software has gotten overly complex for the benefit, and Kubernetes is a good example. But it will improve again.

You mentioned Ruby and Django... the popularity of those in the Aughts was a swing back towards simplicity from overly complex Enterprise Java. Remember?


I wish smaller shops had some smaller role model to follow instead of your Facebooks or Googles. No, we don't need the same tools they use, because they use those tools to deal with deep organizational and technical complexity. If you don't have deep organizational complexity, and especially if you don't have separate data stores, you don't need microservices.


It’s exactly my philosophy in the small shop I founded; it’s a real struggle to educate both developers and clients.


It's easy to do, too. Just make boring software. C# is fine. SQL server is fine. A monolith is fine. That's it. (Replace with your favorite language.)


Making boring software is easy. Getting other people to pay for it is harder in a world where everything needs to be Kubernetes, React, AI or Blockchain.


Are you an agency? I'd love to hear your experience with this problem. As a contractor I find it very hard to find clients with "sane" stacks to work on. Even early stage companies nowadays (who would typically get contractors to build their MVP) seem to already settle on an overcomplicated mess despite their scale not warranting it. Maybe I'm just looking in the wrong place.


Yes, we are an agency, so we are fortunately able to choose the stack, but we still have to convince clients (hopefully it gives us an edge on pricing). The reality is that clients are looking for buzzwords, because that's what they hear and they want the industry standard; you cannot blame them. From what I see in our niche, what we are choosing to do is the exception, so indeed I imagine it must be hard as a contractor to find simple stacks to work on.


You don't need a backend server to make a smart doorbell that pings your phone over WLAN. You only need one if you want to add loads of non-trivial functionality for non-technical users who might not be on the same network. You're welcome to build your own device with a WiFi-capable microcontroller and a simple phone app directly accessing the socket interface.
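
For instance, here is a minimal sketch of that direct path, assuming both devices sit on the same LAN and the phone listens on a fixed UDP port (the address, port and payload below are placeholders):

  /* Doorbell side: fire a single UDP datagram at the phone, no backend. */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <sys/types.h>
  #include <unistd.h>

  int ring_doorbell(void) {
      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      if (fd < 0)
          return -1;

      struct sockaddr_in phone = {0};
      phone.sin_family = AF_INET;
      phone.sin_port = htons(49152);                        /* placeholder port */
      inet_pton(AF_INET, "192.168.1.50", &phone.sin_addr);  /* phone's LAN IP (placeholder) */

      const char msg[] = "DING";
      ssize_t n = sendto(fd, msg, sizeof msg, 0,
                         (struct sockaddr *)&phone, sizeof phone);
      close(fd);
      return n < 0 ? -1 : 0;
  }

The phone side would just bind the same port and block on recvfrom(), or whatever the mobile OS exposes as its equivalent.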


Nothing wrong with being excited by the progress made but I think engineers (well, especially myself) tend to be critical of things because it is in our nature. We see what they could be, not what they are.

I think vocalizing how bad the current situation feels is the only path to improvement.

You seem convinced that things will get better. I don't think all of us are that optimistic (and I am generally an optimist!).


> we have hardware that is flat out broken

Reading Linux driver code is very informative. Sometimes hardware just doesn't do what is expected and the driver must try and fix it up so that user space can have a reasonable interface.

A simple example:

  /* The ITE8595 always reports 0 as value for the rfkill button. Luckily
   * it is the only button in its report, and it sends a report on
   * release only, so receiving a report means the button was pressed.
   */
  if (usage->hid == HID_GD_RFKILL_BTN) {
      input_event(input, EV_KEY, KEY_RFKILL, 1);
      input_sync(input);
      input_event(input, EV_KEY, KEY_RFKILL, 0);
      input_sync(input);
      return 1;
  }


It's even more interesting when you can infer why it's broken. Let's take the NVMe controller in the MacBook Pro 2016 and later, for example: that controller is not properly detected by Linux and needs a quirk [1] to be identified by its PCI device ID instead.

Why is that? Well, Linux usually detects NVMe devices based on their PCI class. The class for NVMe devices is 0x010802. So guess what the Apple controller provides as its class ID: 0x018002. If you have to compare the IDs twice to notice what's different, you're not alone. My guess is that this subtle difference is just a human error made by an Apple engineer which wasn't caught during QA, and that macOS simply works around it as well or doesn't use the PCI class anyway.
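
For the curious, here is roughly what such an ID-based quirk entry looks like in the driver's PCI ID table (a condensed sketch; the exact device ID and quirk flags are in [1]):

  /* Condensed sketch of the NVMe driver's PCI ID table: well-behaved
   * controllers are matched by the NVMe class (0x010802); the Apple
   * controller has to be matched by vendor/device ID because it reports
   * 0x018002. The device ID shown here is illustrative, see [1]. */
  #include <linux/module.h>
  #include <linux/pci.h>

  static const struct pci_device_id nvme_id_table[] = {
      { PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2001) },                /* the quirk */
      { PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },  /* the normal case */
      { 0, }
  };
  MODULE_DEVICE_TABLE(pci, nvme_id_table);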

So for the same reason software isn't perfect, hardware (or the firmware powering that hardware) is neither.

[1]: https://github.com/torvalds/linux/blob/b791d1bdf9212d944d749...

[2]: https://lists.infradead.org/pipermail/linux-nvme/2017-Februa...


Simplicity and good design take lots of time and money. Our culture is not truly ready to make these kinds of investments in the manner required. Why would they? There is a whole universe of FOSS out there upon which anyone can cheaply create "working" software. If your goals are short term (quarterly earnings, looking only a year or two down the road) this is "good enough." Worse, that FOSS foundation is typically filled to the brim with complexity. We have created a computing culture that is premised on pushing the extremes of the teletype model of computing, and tacking what customers think they want on top of it.

We have good alternate examples from the past (Oberon, Lisp machines, Hypercard, Smalltalk systems, etc). How often does the new generation of computing people get exposed to these ideas?


I don't think FOSS is the problem either. I think much complexity is required because everything is expected to live on the web and thereby is expected to be client/server. As soon as one wants to do more than just display a document on the web, one discovers that its architecture is not very suitable for doing anything else besides displaying a document. A badly formatted document, at that. The expectation of client-server communication is also a big driver of complexity: as soon as one has that, we have network communication, serialization and so on, i.e., stuff that is on the large side of things to write oneself. Of course, with the web one more or less has a client-server architecture by default.


No, FOSS/proprietary is orthogonal to that. You can find simple and performant software. The problem is in something else.


> Worse, that FOSS foundation is typically filled to the brim with complexity.

Really? In my experience, FOSS tends to be a lot simpler and more streamlined than non-free software with comparable functionality.


I agree with 'de_watcher that FOSS/encumbered is an orthogonal axis to complexity/simplicity.

Lots of FOSS software is excessively complex (the systemd ecosystem of shovelware comes immediately to mind) and lots of FOSS is simple. Similarly there are untold thousands of overcomplicated/overengineered proprietary suites and of course it's hard for a graphical application to get simpler than notepad.exe.


And what software developers consider "simplicity and good design" often comes across to other people as "I have no idea what to do and looking at this literally gives me anxiety to the point that I want to avoid it."


> And what software developers consider "simplicity and good design" often comes across to other people as "I have no idea what to do and looking at this literally gives me anxiety to the point that I want to avoid it."

Software UX is a garbage fire [1], but good software UX doesn't necessarily mean building things an untrained user can easily figure out how to use. That's just the orthodoxy we've happened to take with most software, which may also limit its potential. See https://99percentinvisible.org/episode/of-mice-and-men/.


It would seem there may be a lack of appreciation for how powerful a modern x86 CPU actually is. Even when you apply every side-channel mitigation in the book, these processors are incredibly powerful when used appropriately. Somehow x86 is being branded as this inferior compute facility, always at odds with the GPU/ARM/et al., and I feel it mostly boils down to shitty software engineering more than anything else.

I believe a lot of this can be attributed to rampant, unmitigated abuse of the NUMA architecture exposed by the x86 stack, as well as a neglect for the parallelism offered by CPUs of the last 3-4 generations. Most developers are not following principles which align with what the hardware can actually do, rather, they are following principles which align with what academia/cult-wisdom says you should do. These two worlds are vastly at odds when it comes to keeping your L1/2/3 caches as hot as possible and keeping all of those cores busy with useful work.

For example, most software engineers would tell you that busy waiting is a horrible practice, but when you step back and pay attention to the hardware reality, you have 32+ threads to burn, why not put your ultra-high-performance timer as a high-priority process on one of those threads and let it check trigger conditions in a tight loop? You can get timing errors measured in 10s of nanoseconds with this approach, and it's dead simple. Now you only have 31 threads remaining. 30 if you really want to make sure that timer runs smooth. Running at 3+ ghz, that one thread could service an incredible number of events per second. There are new ways to think about how we build things given newer hardware capabilities.
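
A minimal sketch of that dedicated spinning-timer thread (POSIX, with a made-up period and core number; the trigger action is left as a placeholder comment):

  /* Sketch of the "burn one core as a timer" idea: one pinned thread spins
   * on clock_gettime() and fires whenever the deadline passes. */
  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>
  #include <time.h>

  static atomic_bool running = true;

  static uint64_t now_ns(void) {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
  }

  static void *spin_timer(void *arg) {
      (void)arg;
      cpu_set_t set;                 /* pin to one core so the spinner never migrates */
      CPU_ZERO(&set);
      CPU_SET(0, &set);
      pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

      uint64_t next = now_ns() + 1000000;        /* 1 ms period (placeholder) */
      while (atomic_load(&running)) {
          if (now_ns() >= next) {                /* trigger condition */
              /* fire the event here */
              next += 1000000;
          }
          /* deliberately no sleep: trading one core for low jitter */
      }
      return NULL;
  }

  int main(void) {
      pthread_t t;
      pthread_create(&t, NULL, spin_timer, NULL);
      /* ... the remaining threads do the real work ... */
      atomic_store(&running, false);
      pthread_join(t, NULL);
  }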

I feel a lot of the high-performance software revolution is going to come out of some ideas that have been floating around in fintech. Frameworks like the LMAX Disruptor (and the ideology behind it) can serve as the foundation for a UI framework capable of transacting tens of millions of aggregate user events per second and with peak latencies measured in microseconds. I have personally started to dabble in this area, and the results after just a few weekends have been very encouraging. You would be surprised what even high level languages (Java/C#) are capable of when their use is well-aligned with the hardware.
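
To give a flavor of that ideology (this is not the LMAX Disruptor itself, which is a Java library with batching, multiple consumers and pluggable wait strategies), here is a bare single-producer/single-consumer ring in C: pre-allocated slots, two sequence counters, no locks.

  #include <stdatomic.h>
  #include <stdint.h>

  #define RING_SIZE 1024                       /* power of two */
  #define RING_MASK (RING_SIZE - 1)

  struct event { uint64_t payload; };

  static struct event ring[RING_SIZE];
  static atomic_uint_fast64_t head;            /* next slot to publish */
  static atomic_uint_fast64_t tail;            /* next slot to consume */

  /* Producer: claim a slot, write it, then publish by bumping head. */
  static int publish(uint64_t payload) {
      uint64_t h = atomic_load_explicit(&head, memory_order_relaxed);
      if (h - atomic_load_explicit(&tail, memory_order_acquire) == RING_SIZE)
          return 0;                            /* ring full */
      ring[h & RING_MASK].payload = payload;
      atomic_store_explicit(&head, h + 1, memory_order_release);
      return 1;
  }

  /* Consumer: spin (or yield) until an event is visible, then consume. */
  static int consume(uint64_t *out) {
      uint64_t t = atomic_load_explicit(&tail, memory_order_relaxed);
      if (t == atomic_load_explicit(&head, memory_order_acquire))
          return 0;                            /* ring empty */
      *out = ring[t & RING_MASK].payload;
      atomic_store_explicit(&tail, t + 1, memory_order_release);
      return 1;
  }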


Jonathan Blow: "Software has been freeriding on hardware."

True or false? If I make the code that does a particular task smaller, faster, less resource-intensive, then I am not freeriding.

He says people do not reference the "five nines" anymore. True? I do not work in the industry anymore. I had no idea this has disappeared. That is really sad.

"Developer time" versus "user time". What is more important? Are they equally important?

Recently someone posted a video of a recent talk from Margo Seltzer. She said users, e.g., in the Physics Department, do not care about computer languages and these things that computer scientists and programmers think are so important. They care about how fast the program runs. That's all. "Make it go faster".

The incentives seem backwards. We pay programmers more today to do less than they did in the past. There is a lot of "busy work" going on.


You might enjoy this 2016 article by Chuck Moore (the Forth guy). His position is pretty extreme, as he dismisses static typing as needless complexity, thinks even C is too elaborate, and he doesn't touch on web technologies, but still worth a read. [0]

Also, mandatory link to the Software Disenchantment article. [1]

[0] https://web.archive.org/web/20160311002141/http://colorforth...

[1] https://tonsky.me/blog/disenchantment/


In spite of all this "garbage" I carry around a $500 machine which fits in my pants pocket and gives me directions to anywhere I can think of, a live video call (!) with my friends or family, a vast trove of knowledge about millions of different topics, and a camera that basically matches my $3000 DSLR from six years ago. And these are available whenever and wherever I want, so I can do this video call at 2am on a mountaintop if I want. So... I love this garbage.


I too sometimes have the same feeling that the software stack we have today is just a tower of cards waiting to crumble.

How long do you think it would take to get back to where we are if everything was scrapped and we restarted with the initial binary -> assembly jump?


This is kind of like someone living during the Crusades yelling metallurgy is shiiiiit.


> https://www.youtube.com/watch?v=pW-SOdj4Kkk - Jonathan Blow - Preventing the Collapse of Civilization

Priceless, thanks for sharing.


This doesn't seem to have anything to do with the article.

Am I wrong to wish that the top voted comment was by someone who read the article before they posted?


Well, is the solution more tech (like improving on the C language), or is it making sure we have robust non-technical fallbacks as a civilization?


I find it interesting that your two links seem to contradict each other.


Semi-related to this and the recent Haiku R1/beta2 announcement: just for giggles, I installed that image on a USB3 external SSD and fired it up under KVM, and was suddenly greeted with a faster, more polished desktop environment than Windows 10 or Ubuntu offer out of the box. Consistent UI patterns, applications that open in less than the 250ms perception threshold, and some truly useful utilities installed (a fast POSIX terminal, programmer's editor, media viewers, basic but serviceable WebKit-based browser, etc.). All of it happy to run in 1-2 GB of RAM and two cores of my six-core laptop, even under virtualization.

Running on an Atom-based SBC I had on the workbench, it's even more responsive.

Yes, building native apps requires at least a basic knowledge of C++. No, it won't seamlessly run the latest React SPAs as well as Chrome. The driver situation isn't as good as (say) Linux or FreeBSD.

And yet, the focus on providing a productive environment for normal computing tasks instead of endless up-selling to an app store, countless background updaters, and vendor-bundled crapware is like a breath of fresh air.

This is what we've lost in the move to impose ever more layers of services, unique per-app GUIs, and ubiquitous (even gratuitous) packaging of "webapps" in lieu of targeted, native apps.

To be clear: this isn't a unique property of Haiku. A nice clean install of FreeBSD or Debian has many of the same properties, and I have reason to believe that a Windows 10 "distro" based on something like Server Core could be similarly light and responsive.

Unfortunately, the major platform providers seem deeply uninterested in building systems that don't push you aggressively towards newer hardware, loads of subscription services, and "strategic" software bundles no one asked for.


Yeah, I feel like as soon as Haiku has support for encrypted partitions it'll be my daily driver (even if it's a performance hit, I'll take that over my data being in the clear should someone snag my laptop). Implementing support for it is something I'd love to try my hand at as soon as I can figure out how I'd go about doing so (namely: how to make an encrypted partition look like a normal partition to the overall system while still providing some mechanism to prompt for a passphrase or key).


Haiku already has support for encrypted partitions via "DriveEncryption" ... but no support for encrypted boot drives, yet, indeed.


Whoa! How long has that been around?

Even without boot partition support, if Haiku's able to keep my user files on a DriveEncryption'd partition and automatically decrypt it on boot (i.e. with a password prompt at some point in the boot process or immediately after), then that's good enough for me.


I think DriveEncryption needs a few tweaks now that Beta2 is out to work again, but it's been around for quite a long time. It was made by one of the foremost kernel developers who certainly knew what he was doing. I think it's based on TrueCrypt, though he may have updated it to VeraCrypt at some point...

Here's the GitHub, anyway: https://github.com/axeld/driveencryption


Definitely looks interesting, and the advertised support for decrypt-on-login is exactly what I'd need. Doesn't look like it's working on x86-64; I wonder how hard that'd be to fix...

Whatever the case, thanks! I left enough room on my disk to add an encrypted partition later (and move the stuff currently in ~ over to it), so hopefully I can get up and running with this.


Probably not that hard to fix; there are a few potential issues that come to mind, but all have simple solutions if you know where to look (and if you don't, ask in #haiku on Freenode).


I think it's the real problem of the industry. It used to be that Intel had a strong brand because the new computer you'd buy in three years would knock your socks off. It isn't that way anymore, and from the top to the bottom of the stack we should be addressing perceived performance.

My #1 frustration as a Windows user is that every so often an application or the whole system freezes up for a few seconds. I use macOS sometimes and also Linux, and I don't think either one is much better.

For instance when I launch a command prompt often it is up in <1s, but sometimes it takes 10s. That's a very simple program, but often there are security checks involved; even ssh logins to a linux server on a LAN have a long-tail distribution in time-to-prompt because maybe some daemon was swapped out.

I would be satisfied if I had some visual indication that progress was happening (ALWAYS, AND WITH <100ms LATENCY) and some insight into the process (e.g. real progress bars... if Hadoop can do it, why can't pandas?).

We don't want to trouble people with details, but what's the harm in telling the user we are waiting on a 30 second timeout?


> For instance when I launch a command prompt often it is up in <1s, but sometimes it takes 10s.

On Windows, a reason for this might be virus scanners. Get rid of all third party virus scanners and tune the settings on the built in ones.

I personally am of the opinion that virus scanners are a waste of computing resources, in that if you reach the point where a binary you run has been infected you are probably compromised in ways a virus scanner cannot fix anyways.


What's more, virus scanners often run with SYSTEM privileges and consist of hundreds of thousands of often shitty LOC, presenting a huge attack surface. These days you might get infected because you're running "anti-virus" software.


People need to try some old systems: DOS, Windows 3, or even XP. Responsiveness does matter, a lot.


A modern-day, lightweight Linux system is just as responsive as those, in my experience. Often more so, because the systems you mention were often limited by slow hard-disk or even floppy-disk access, whereas modern Linux tries its best to cache absolutely everything in RAM.


Yeah, if you notice your laptop is getting slower and slower, it's not the hardware. Try installing Ubuntu and it will feel new again.


If you have other devices though, you may perceive it as getting slower. An older laptop with an HDD absolutely chugs compared to an SSD. That's been my experience with older devices, my newer ones spoil me on any OS


Running old software on new hardware is by far the best user experience that I've had. Everything is blazing fast to the point of feeling instantaneous.

Unfortunately it's getting harder and harder to keep things running with various licensing schemes and whatnot. Hell, Microsoft even keeps breaking my old Office install with Windows 10 updates, forcing me to constantly fix it, because they want me to move to their Office 365 offering. All that said, both the hardware and software stacks are getting a little silly even at the low level: you can send a keypress around the world as a data packet faster than you can send it to your monitor nowadays.


Running up-to-date FLOSS software is an even better experience, in my view. It doesn't even get all that much slower or more RAM-intensive over time - certainly nowhere near as much as windows 10 or Office does! Of course you need to choose your software stack wisely if you're looking for that "lightning-fast" experience.


Even "heavyweight" DEs can pull this off. I've been running Manjaro with KDE on my Pinebook Pro and it's been overall a snappier experience than I had even hoped to get out of what is essentially a four year old phone (with only 4GB of RAM) in a laptop form factor. Faster than Android or Windows 10, for sure


I've recently been noticing that G Suite apps in Firefox tabs are routinely grabbing more than 500MB of RAM; it's often more than 1GB just to have a Gmail window open. Just a few years ago this wasn't the case, but nothing has improved as far as functionality goes. Anyone have insight into how we got to a place where a Google Sheet with 4 entries can use more memory than the entire OS? I used to routinely perform identical tasks on much lesser hardware with greater speed, so how do these seemingly broad regressive changes happen at the ground level?


I blame the Software as a Service model. Clearly, you should just downgrade to the old version of google docs (or whatever) that used 10% as many resources, and had a better UI.

Since you don’t get to control which version you use, teams don’t have to compete with last year’s version. Therefore their management doesn’t need to worry about extreme regressions in functionality, so they don’t take action to avoid them.

Since the service provider is the development team, you’re locked in, and won’t switch.

Usage metrics look good, management gets credit for shipping ${feature}, and gets their promotion. This happens a thousand times, and everyone wishes the irreparably broken ${megacorp} would just go out of business already.

This pattern doesn’t play out to the same extent with hardware, where people have to pay big lump sums to upgrade, and can hold on to older models / switch vendors instead.


Brilliant post.

I think this trend will continue, sadly, until a crafty company is able to steal $1B of revenue from a flaky incumbent by building a reliable and fast version of a core utility.

I am reminded of the early google.com homepage. I can remember setting it as my default homepage in IE because it got to the point that Yahoo would take MINUTES to load, if not just crash the browser.

Or maybe we need hardware to just suck again, to force the issue.


make a law requiring software developers to use 15-year-old hardware when doing development


Electron apps are easy to develop, but since each one is essentially a browser, the performance is pretty bad compared to native. The performance of all Electron apps I have used (Slack, Discord, Spotify, Twitch, ...) is always worse than native apps, and not just a little bit. You will notice it! I understand it is easy to develop an Electron app when you come from a web background, but it still seems wrong to me.


Co-founder at Missive, an email client built entirely in HTML/JS; Electron on desktop; Cordova on mobile.

You can try the app; it's way faster than Mail from Apple. Apple even featured the app multiple times on the mobile App Store; again, all HTML/JS.

Making fast software in HTML/JS is definitely possible.

An email client is mostly HTML/Style rendering so our use case might be more aligned.

You can learn more about how we made it fast by listening to:

https://syntax.fm/show/184/desktop-and-mobile-apps-with-a-si...


Building fast applications on most platforms is usually possible, but you must play to those platforms' strengths. I used to mess with my fellow devs by making toy VB6 apps that worked very quickly (not my favorite environment to work in). It usually meant throwing out tons of junk. I could never pull that trick off with p-code VB applications; that p-code interpreter was just too slow. I would also do this with Win32/C++ programs: you can make surprisingly light applications if you know what your platform does. The mistake many people fall into is just tossing whatever in there and hoping for the best. That works at first when you are spinning up, but usually you end up having to do something about it.


I haven't tried Missive, but how are memory and battery usage? I've seen fast electron apps, so I know they exist. Programmers basically treat memory as if there's enough of it to not care about, but Laptops are still sold with 8GB of RAM and the upgrade markups on current devices are insane.


Is it not insane to anyone else that we talk about only having 8 GB of ram with regards to the performance of mail and chat applications?


Seriously... 8GB is an ocean. People were reading and editing HTML email in the days of 16MB.


HTML has changed since then, though. HTML email might basically be stuck with whatever old version of HTML, but nobody writes an HTML renderer just for email--they embed a fully functioning browser engine in the mail client (or write the client itself to run in the browser).


My grandfather used to say, "The people who sit up front catch all the bugs in their teeth"

If you're trying to move your full desktop workflow to a platform that only added the option to work that way recently and as an afterthought compared to its original use-case, you're going to be catching bugs for quite a long time. If you're unlucky, your particular use-case for that hardware will never catch on enough for the value of working out those kinks to outweigh the cost.

I know techies like most of us on here love to be early adopters, but you have to draw a line between using tech as a toy and using it as a tool. If you're using something that you don't expect to be stable 99.9% of the time, odds are that it's a toy and you should hold your expectations a hell of a lot lower for any sort of productivity.


The problem is that in some cases you are forced to upgrade your tool and turn it into a toy. Windows 10 is an example. macOS Catalina (required for latest Xcode which itself is required to target latest iOS) is another one and I've been bitten by it.


Economics, economics, economics. Companies only put the amount of effort into their software that's necessary to optimize profits.

Startups are incentivized to move fast and break things, and then to keep adding to their broken prototype instead of rebuilding the product, because it's more affordable.

OS vendors benefit from lock-in and hardware is fast enough that the vast majority of consumers don't notice. If something breaks, you just take it to the Apple store and they reset it. It's cheaper for everybody involved.

Online ad vendors have no incentive to create a less-than-terrible web experience because it's not their site that's being trashed.

On top of it all, there's no regulatory or institutional quality standard. It's left to be a race to the bottom.

I don't know what the fix is, or whether there is one at all, but we should at least stop being surprised. We shouldn't really be blaming it on "kids these days" either, which is an all-too-common refrain. There simply isn't a business incentive to invest in quality.


Of course, the hardware receiving a lot of praise has a lot of software (firmware) behind it too. The line between great hardware and software isn't really that distinct in my mind. Lots of things that may once have been mechanical in nature now have software behind them instead, because software's amorphousness brings great flexibility.


I feel that we as software engineers need a Hippocratic oath of software development. One of the commitments should be to write efficient software that is responsive to user input.

I feel both angered and embarrassed at software inefficiency. Trivial Electron "apps" require 1-2GB+ of RAM. Seconds of input latency when typing on the keyboard or pressing a button. Not to mention the gigawatts of electricity and gigatons of CO2 being wasted server-side on poor code, logging, encode-decode & other nonsense.

My desktop experience peaked on Windows 2k and has been declining rapidly for the last 10 years.


I don't know why you are being down-voted. I think you make a good point here.

If we were real engineers, not just by pretension, maybe we would have to abide by a minimal set of rules that respect our users and their wallets.


Agreed. Think how powerful modern smartphones are, yet the apps still drag. Many apps still sync the UI with the server, making the UI hang with any radio hiccup, and even take seconds to respond in good conditions.


> Something strange is happening in the world of software: It’s slowly getting worse. Not all software, but a lot of it. It’s becoming more sluggish, less responsive, and subtly less reliable than it was a few years ago.

> What baffles about these software moans is that Apple’s hardware is ever-more refined. While far from flawless, the entire lineup is now (finally) largely free from these “foundational” issues you see in software.

The answer to this is very simple (at some level): It is impossible to produce mechanical designs that are the equivalent of the software engineering abominations the world is stuck with today.

Going back to iPhone 3 days, I remember coding a genetic solver in Objective-C that was an absolute dog. I optimized as much as I could and could only squeeze so much performance out of it.

I finally gave up and re-coded the entire thing in clean C++. The code was somewhere in the order of 290 times faster. Objective-C was an absolute dog because, at a minimum, every data type you had to use was an object-oriented mess. This was absolutely unnecessary, the proof being that my C++ equivalent performed exactly the same function and lacked nothing. In fact, it was so fast that it allowed the use of this genetic solver in real time in the context of this app. Objective-C, er, objectively speaking, had no reason to exist. Yet it did, and lots of people believed it had to exist, and likely many still do today. They are wrong.

Another way to look at this is that the clean solution used somewhere in the order of 200~300 times less energy. This is something hardware engineers are keenly aware of. Software consumes energy and badly written software is even worse. Think of it this way: A bit transition costs energy. Inefficient code requires more bit transitions per unit time, therefore, more energy and more power dissipation.

Imagine the mechanical engineering equivalent. Some monstrosity where, instead of using a simple screw and a nut one ends up using a complex fastener that has layers upon layers of mechanisms and unnecessary complexity. A screw with the ability to grow and shrink to any length and diameter, including all the wrenches, tools, nuts, washers and devices needed to install, tighten and test them. Very quickly a simple product that could mechanically fit in the palm of your hand would quickly become the size of a car.

And so, in this way, mechanical and industrial design is always "low level", like always coding in assembler (not proposing we do that). Sure, materials and manufacturing techniques improve, yet, at the most fundamental level, excellent mechanical and industrial design is clean, uncomplicated, easy to understand and easy to manufacture. It's machine language, or C. Not Objective-C.

Software engineers who are not exposed to this reality, through no fault of their own, are not aware of these issues. I don't blame them. If most of what someone sees in school amounts to Python, their view of reality will be skewed.

My son is studying Computer Science at one of our top universities and has less than a year to graduate. He is home now, both for the summer and due to the virus. I've been working on several embedded projects and showed him a few tricks to improve performance. He was amazed to see how you could optimize code for execution speed (sometimes with dramatic results) by making simple high-level choices. For example, a down-counter in a "for" loop ("i=10; i!=0; i--") is generally faster than the typical up-counter ("i=0; i<10; i++"). This is because processors have instructions like "DJZ" or "DJNZ" (Decrement and Jump if Zero / Decrement and Jump if Not Zero) that don't require loading a comparison value and engaging the ALU, and sometimes don't even require fetching that value over and over again from memory.
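
For reference, the two loop shapes being compared look like this (a sketch; on small embedded cores the second form can map onto a DJNZ-style instruction, while desktop compilers at -O2 often generate equivalent code for both, as a reply below demonstrates):

  #include <stdint.h>

  /* The typical up-counter: compares against a limit every iteration. */
  void up_counter(volatile uint8_t *port) {
      for (int i = 0; i < 10; i++)
          *port = (uint8_t)i;
  }

  /* The down-counter: tests against zero, which decrement-and-branch
   * instructions on small cores can do in a single operation. */
  void down_counter(volatile uint8_t *port) {
      for (int i = 10; i != 0; i--)
          *port = (uint8_t)i;
  }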

Software engineering doesn't have the same physical constraints found in mechanical engineering unless someone is coding for an application and a domain where excess really matters. One example of this is writing code for space hardware, where, aside from reliability, you have to be aware of the fact that every bit you are flipping will cost a quantum of energy and you might not have enough to be careless about it. Energy quickly translates to mass in the form of large cooling radiating surfaces that must be launched into space along with the hardware.

It's an interesting problem this issue of bad or bloated software. Not sure if there's a solution. There won't be until there's a forcing function that requires a change in perspective and approach.

EDIT: To be clear, my point isn't to pick on a specific language or languages, but rather to use easy examples to highlight one of the problems that has been building up since the introduction of object-oriented programming. I remember the early days of OO. It was a mess. Much like the early days of desktop publishing, where people loaded up documents with every font they could find.


Objective-C will always be an absolute dog for high-performance work because of how everything has to go through dynamic dispatch and indirection steps. It's why even Apple is trying to replace it with Swift - that name is not coincidental! And the Rust language from Mozilla offers even better performance than Swift while supporting a great set of principled, higher-level language features if you want them.


The average overhead of a message send using the runtime Apple ships these days is insanely low; it’s on the order of approximately two nanoseconds a call. With some minor insight you can bring that down to well under a nanosecond, literally single-digit clock cycles, just by testing if indirection is necessary. (For reference, this is about on par with, or on days I’m feeling a bit confident even better than, a C++ virtual method call.) There is no way that Objective-C is hundreds of times slower than C++ unless there is something else going on.


Agreed. See my edit.


I thought this trivial stuff like loops doesn't really matter anymore due to compiler optimizations. Isn't that the case?


You have to test. You also have to be intimately familiar with what your compiler is doing. I can't give you an example right now, but it is possible for the resulting compiler-optimized code to change from run to run based on changes in the source code. If you want repeatable and reliable performance, you have to drive the optimization rather than hoping for the compiler to do the work for you.

I very much doubt an optimizing compiler is going to take an up-counting for loop and convert it to a down-counting loop. One thing is certain, if such a thing exists, there is no way I would trust it to be consistently true across compilers and architectures.

There are also fundamental choices that can have a serious effect on performance and reliability.

What's faster, using a struct or a bunch of individual variables?

Say you have three different copies of the same data corresponding to three different items. What's the fastest way to represent, store and process the data?

Hypothetical example: You have three robots generating telemetry you must track using a real time embedded system in order to coordinate their actions and react to the environment. Each robot generates one hundred variables you must track.

Is it better to define a struct and then create and populate three instances, one per robot, or is it better to create one hundred distinct variables (i.e.: robot01_temperature_01, robot01_voltage_01, etc.)?

Which one results in code that executes faster, with lower resources and consumes less power?

These are not decisions an optimizing compiler is going to make.
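
For concreteness, a sketch of the struct-per-robot layout described above (field names invented); keeping each robot's telemetry contiguous is what lets updates and scans walk memory linearly and stay cache-friendly:

  #include <stdint.h>

  /* One contiguous block of telemetry per robot (field names invented). */
  struct robot_telemetry {
      float    temperature[10];
      float    voltage[10];
      /* ... the rest of the ~100 tracked values ... */
      uint32_t status_flags;
  };

  static struct robot_telemetry robots[3];   /* one instance per robot */

  /* Scanning one robot's sensors touches a small, contiguous region of
   * memory, unlike a hundred scattered robotNN_* globals. */
  static float hottest_sensor(const struct robot_telemetry *r) {
      float max = r->temperature[0];
      for (int i = 1; i < 10; i++)
          if (r->temperature[i] > max)
              max = r->temperature[i];
      return max;
  }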


Optimizing compilers will occasionally make these decisions, but often they don't have to, because the processor will make them itself. Modern Intel processors already do macro fusion for the common pattern of "test this and jump on the result"; micro-optimizing by changing the direction your loop runs in is generally not a good idea unless you really know what you are doing, have tested the code, and intend to never change your compiler or processor ever again. (And it still depends on how your CPU is feeling that day.) In general, the kinds of things you are mentioning produce, in their best case, a minor win over the optimizing compiler, and if you're not checking whether they actually work, they often make no difference or do much worse. And they often come at a huge readability/maintainability cost, so really you have to ask yourself if it's worth making these superstitious micro-optimizations.


The claim is demonstrably false, at least with recent compilers and a standard (-O2) optimization level:

https://godbolt.org/z/rNxpQy


You know what’s impressed me software wise has been Microsoft on Mac.

For home office I pretty much use their software constantly: Office 365, Teams, SharePoint, VSCode, and it’s not just rock solid, but pretty enjoyable as it is rock solidly integrated together.

I would never have thought I'd describe MSFT as having rock-solid anything, never mind on a Mac, but credit where it's due.


An Apple keyboard is a funny example to open with. Weren't we just recently celebrating the end of Apple's ill-advised MacBook keyboard experiment?


The end of that failed experiment was marked by the release of a much better keyboard - which is what is shipped in the device featured.

If anything, it further illustrates the author's point that hardware has to respond to customer complaints, but software can seemingly get as awful as it wants and customers can't really vote with their feet, ironically.


The complexity of software has far exceeded the ability of traditional strategies to manage it. We know how to reliably scale our software: the principled application of formal methods. More advanced type systems, proofs of correctness, principled math-based strategies for software construction, etc. Yes, all of these are locally more expensive than the slipshod strategies most companies are using now, but I am confident that if the cost of buggy software was correctly internalized, it would become clear that improving the processes we use to create software would leave us a lot better off in the end.


It could have to do with talent shortage. If you're a company that wants to hire a team of people like that, how much are you going to have to pay for it?

And if you're making, say, a chat app, are you sure you're not going to be beaten by a bunch of 20 year olds who slap together JS and Electron, ultimately winning because they get that people want funny reaction gifs built in?

Maybe it freezes for a half second every so often and takes 10 seconds to load, but no one cares because 1) it gets to market three times faster and 2) it has funny reaction gifs.


> how much are you going to have to pay for it

Right, this is the part where you first have to internalize the true cost of bugs. You can't keep hiring $12k/yr third world sweatshop programmers if you want reliable software.

> no one cares because 1) it gets to market three times faster and 2) it has funny reaction gifs.

This is absolutely true, but I maintain this is mostly because businesses have an artificially high time preference due to government subsidies of debt, and in a less distorted market people would probably care a lot more about quality.


The biggest issue with this mindset is that attempting to deploy self-discipline at scale is a losing strategy, both on the individual project level (if they're large enough) and at the corporate level. That's why the most successful large projects always have a 'dictator' acting as the standard-bearer for quality.

Often it can be quite advantageous for individual instances, no doubt, but expecting humans to be disciplined en-masse without serious external oppression is a bit like expecting cats to neatly line up for treats.


That’s why you use the compiler to force discipline on people! No willpower needed.




