
> The problem is that both hardware and software are garbage.

I think it's incredible that, in my lifetime, computers went from giant mainframes with dedicated hard-line terminals to always-connected supercomputers in everyone's pocket, worldwide. Furthermore, anyone can use the internet to learn how to program.

Maybe that's garbage compared to some mythical ideal but in terms of impact on the world it's incredible.

> I know that most of the programmers did not work with systems where it's very, very easy to debug the whole stack and you can learn it in a short period but it's amazing when you have knowledge about EVERY part of the system in your head.

Well, you can tell from the above that I was around then. I started programming with a single manual and the Beagle Brothers "Peeks, Pokes and Pointers" cheatsheet[1].

People forget that the software itself had to do much less than it does today. Here's just one angle to judge: security. We did not have a worldwide interconnected network with so many people trying to steal data from it. We all used rsh and packets were flying around in cleartext, no problem. But today, all software will have to incorporate TLS.

And far fewer people built that simpler software. EA's first titles were written by one or two people. Now a typical EA title has hundreds of people working on it.

Things will get better than they are today. In the future, the industry will have to invest more money in "10X" productivity and reliability improvements. Eventually, I think that will happen as productivity continues to slow on large codebases.

[1] - https://downloads.reactivemicro.com/Apple%20II%20Items/Docum...




And yet, even doing something as basic as a smart doorbell requires a backend server somewhere (essentially a mainframe) and all kinds of NAT hole-punching, UPnP and proprietary push notifications despite the supercomputer in your pocket technically being able to listen for incoming TCP or UDP packets from the doorbell directly.

The "supercomputer" processing power is also being wasted on all kinds of malicious and defective-by-design endeavors such as ads, analytics, etc (install any mainstream app and look at the network traffic, 90% of it will be for analytics and can be blocked with no ill effects).

Despite the supercomputers being 10x faster than the early ones (back in the iPhone 3G days), we somehow lost the ability to render a non-stuttering 60fps UI, even though modern UIs are less rich and consist mostly of whitespace.

> anyone can use the internet to learn how to program

I think there used to be a "golden age" of this, when the resources were all available for free and at the same time the stacks were manageable (think basic PHP deployed on shared hosting, or Ruby, or Django), whereas nowadays it is considered "wrong" if you don't use Kubernetes, 10 microservices and 100+ NPM packages just to serve a "Hello world" page to a browser.


It requires all of those things because users want a doorbell that will easily connect to WiFi and a nice phone app, that includes not only movement alerts but also a miniature social network. And the startup making the doorbell wanted to make a quick proof of concept to get funding and then build organically using a flock of less expensive younger developers.

Alternatively, a company could invest money into writing something that looks beautiful to software developers, something you could SSH into. The architecture would be sound because several gray-bearded devs would talk about the relative merits of different approaches. It could offer so much more functionality if the user is willing to acquire minimal knowledge of Linux. The only problem is that the only people interested in it would be other software developers.

We're building stuff for people, not other geeks. Businesses invest in software as a means to an end, and if the backend is ugly but accomplishes that end, then it's successful.


> We're building stuff for people, not other geeks.

Of course, but everyone, including the end user, would greatly benefit from a less messy and more elegant, consistent "computing stack". The programmers would be (much) happier, the code would be smaller and better, the product would be better and cheaper and the freed resources could be allocated elsewhere.

These improvements (and IMHO there is huge potential for such improvements on all the layers) would bring the same kind of benefits that better automation has brought.


The end user does not care about the "computing stack" at all. "Does it work" and "is it cheap enough" are the main considerations.

And "the product would be better and cheaper" is really wishful thinking. I've worked in this industry for ~4 decades, so I really remember vividly working on those "less messy and more elegant" systems, and no, building a remote-controlled doorbell with life video on one of them would neither be better nor cheaper.

The sheer idea of doing this on a 128B machine (Olivetti Programma 101, my first machine) is ludicrous. The idea of processing any kind of video on a DG Nova Eclipse (my second one) is... ambitious.

The first time we had machines with the processing power to do useful things with video for an at least semi-reasonable price was sometime around the introduction of the Pentium, with dedicated hardware to process the incoming camera stream. I happen to know because I moved a video package that required a cluster of SGI machines to said P90.

Yes, the code was small. The video processing core was also the result of 6 rather sleepless weeks, squeezing every single cycle out of the CPU. I couldn't just grab OpenCV and make it happen. It also involved lots of spectacular hacks and shortcuts that happened to work in the environment in question, but would break down for a general purpose video processing system.

Around that same time the world also just barely had figured out the basic ideas behind making WiFi work usefully. But let's assume we would've just hard-wired that camera. If you wanted to do that with an embedded system, you wrote your own networking stack. That takes about 18 months to make work in anything that's not an extremely controlled environment - it turns out people's ability to misread networking specs and produce barely non-compliant systems is spectacular. (Source: Been there, done that ;)

So now we have the most basic system working. We have, however, no way of storing the video stream without more dedicated hardware, because realtime compression is not possible. (And if you think that dedicated hardware was less messy and more elegant... /lolsob)

Was the total codebase smaller? Yes. Were the programmers happier? F# NO. Would it have had anywhere close to the functionality that something like Ring has? Yeah, really, no. Its size would've been roughly a large desktop tower. Its cost would've been ~$5000.

The "good old days" really weren't.


A remote controlled doorbell with live video doesn't need video processing. It doesn't need to be digital. It doesn't need to be high resolution unless you're working on sensitive materials--for most people, low-quality analogue would be good enough.


At this point you're saying "if we built something that didn't have most of the features of a Ring camera, it would be much simpler".

That's a tautology.

It's also ignoring market realities. We had doorbell systems with crappy analogue cameras, and nobody wanted them because they were overpriced for what they did, and what they did wasn't particularly useful. (The "good enough" low-res analog cams made for a great guessing game of who's at the door, but they didn't actually suffice for recognizing who was at the door without them plastering their face right in front of the camera.)

And this is true for other consumer electronics too - of course you can build something inferior with less effort. But nobody wants that. (I mean, it's not like modern systems get built because somebody just wants to lose some money)

This leaves the conclusion that you're mostly arguing for it because you liked the developer "simplicity", and... it wasn't simple. It became simpler once we introduced computers. It became simpler as we built abstraction layers on top.

Yes, current systems are harder to understand top-to-bottom, and we paid for ease of development with increased systems complexity. But the rose-colored glasses of a "better past" are just that. Yes, we could understand the things we built more easily, but we couldn't build what we build today in that past.


These arguments have been made, mutatis mutandis, roughly since the beginning of computing. At least, certainly since the 70s/80s.

Gabriel articulated some of these in "Worse is Better" (89?) but it wasn't new then.


...what? The doorbell can't talk to the phone because the supercomputer in our pockets is not really ours - its functionality is gimped by the various software and operating system providers that control it from their respective motherships.


This is accurate, but is not the whole story. I’m running a terminal on iOS right now to ssh into my Linux server.[1] There are terminal emulators on Android too.

I wish there were more GUI apps centered around hybrid cloud/shell use cases. I would like to be able to make GUI widgets that do things in an ssh session on my server. I’m not sure how important it would be to run on the device; it could be a webapp I run on the server itself to trigger scripts. What I find lacking is a UI/UX that centers on touchscreen input, is reconfigurable, and can trigger arbitrary commands or events server-side. Anyone know of tools that scratch this itch?
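
I haven’t found anything off the shelf, so here’s a rough sketch of the shape I mean, assuming a tiny Flask app served from the server itself (the action names and script paths are made-up placeholders, not a real tool):

    # Rough sketch: a tiny Flask app running on the server itself, exposing
    # big touch-friendly buttons that each trigger a local script.
    # The action names and script paths below are made-up placeholders.
    import subprocess
    from flask import Flask

    app = Flask(__name__)

    ACTIONS = {
        "backup": ["/home/me/bin/backup.sh"],
        "restart-app": ["systemctl", "--user", "restart", "myapp"],
    }

    @app.route("/")
    def index():
        # One large button per action, sized for a touchscreen.
        buttons = "".join(
            f'<form method="post" action="/run/{name}">'
            f'<button style="font-size:2em;margin:0.5em">{name}</button></form>'
            for name in ACTIONS
        )
        return f"<html><body>{buttons}</body></html>"

    @app.route("/run/<name>", methods=["POST"])
    def run_action(name):
        if name not in ACTIONS:
            return "unknown action", 404
        result = subprocess.run(ACTIONS[name], capture_output=True, text=True)
        return f"<pre>{result.stdout or result.stderr}</pre>"

    if __name__ == "__main__":
        # Bind to localhost and reach it through an SSH tunnel, or put it
        # behind real auth before exposing it anywhere.
        app.run(host="127.0.0.1", port=8000)

Reaching it over an SSH tunnel from the phone’s browser keeps the "run arbitrary commands" part off the open internet.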

[1] https://github.com/ish-app/ish


I ain't sure how "I can run a terminal on a phone" has much to do with "I have full control over the physical machine I paid for and ostensibly own". Unless you're sideloading via jailbreaking (which has a score of problems, not the least of which being that Apple is hell-bent on "fixing" jailbreaks, resulting in a cat-and-mouse game with no end in sight), your ability to run that terminal is exclusively the result of your device's manufacturer explicitly deeming that particular terminal (and the programs it executes) worthy.

Android is slightly better in this regard in the sense that it's (usually) easier to sideload applications, and sometimes even possible to modify or outright replace (most of) the operating system, but this, too, is subject to the whims of manufacturers rather than being something the user (and the user alone) controls.

----

On another note, I, too, would be interested in scratching that itch. It seems like it'd be straightforward to throw together.


My understanding is that Android is going to start disallowing execution of downloaded binaries, and they'll need to be in an app's lib folder. See the termux issues on GitHub for a discussion of this.


> It requires all of those things because users want a doorbell that will easily connect to WiFi and a nice phone app, that includes not only movement alerts but also a miniature social network.

You’re missing the point. The issue is that the underlying layer is more complex than it needs to be, not that the company needs to use that underlying layer to solve business requirements.

This is analogous to a freeway that’s too small.


> users want a doorbell that will easily connect to WiFi and a nice phone app, that includes not only movement alerts but also a miniature social network

I think "users" is a stretch here. Having worked at a couple $MEGACORPs and Unicorns, the bar for a feature being successful is very low. Most of the time, features are driven by Product Managers on product teams that come up with ideas. Validation of the success of these ideas (unless they are highly visible and core to the business), in my experience has been minimal (was there a regression? no, then great!), and don't even get me started on the numerous methodological issues.

> We're building stuff for people, not other geeks

I think computing is unique in how much we (the royal "we" here, and I have nothing more than anecdata to back these observations up, so take them with a grain of salt) focus on trying to hide the internals, almost as if it stems from an embarrassment with what computing is, as something only for "geeks". How often do you hear of musicians receiving censure for not making music that "other people" listen to, or artists receiving criticism for art that "regular people" consume? Obviously, business needs necessitate a tradeoff between beauty and functionality in any field, but despite the ubiquity of tech, it feels like a field that is uniquely embarrassed by the art and technique behind it. Maybe this is just an outgrowth of the "nerds are weird" culture of the '80s and '90s?

I think the reason that users put up with such bad software is twofold:

1. Computing is new, and the general population doesn't really understand what it means to be good at it yet. The general population has enough education with things like woodwork to understand what a shoddy table is, but not yet what shoddy software is. That said, I know several non-technical users that prefer using the iOS ecosystem because it has lower perceived input latency and higher perceived stability (much like the OP of the article), so users are certainly not blind to the problems of software.

2. Software as a field is deeply embarrassed about its technical internals. The fact of the matter is, we don't need to be worried about "our grandparents" being able to use our software anymore; the vast majority of young folk in industrialized countries have spent their whole lives growing up using technology. Yet, we are still obsessed with the creation of a dichotomy between "graybeards" and "normal people", or "geeks" and "average users". We need to stop treating users as these opposed forces, hidden from the innards of the technical beast, and instead embrace them as part of what it means to create a successful product.


> How often do you hear of musicians receiving censure for not making music that "other people" listen to, or artists receiving criticism for art that "regular people" consume?

All the time, when those artists have outside investors who are investing to make a profit off of their work. It's a recurring theme in pretty much every music documentary I've watched. Accessibility is frequently at odds with artistic development. At some point, people just want you to crank out the same boring (widgets/songs/art/software) that you've done a million times, and do it cheaply.

Indie rock artists who are just playing in bars to their friends don't have that economic pressure, and you're always welcome to build your own software for your friends along with maybe a few hardcore enthusiasts.


> Indie rock artists who are just playing in bars to their friends don't have that economic pressure, and you're always welcome to build your own software for your friends along with maybe a few hardcore enthusiasts.

I don't think it's as simple as "indie artists" and "artists with investors". There's room for innovation while making profit, and there's room to make catchy songs when you're an indie artist. In software, there seems to be a large divide between hobbyist code and industrial software. How many folks really write software for themselves and friends? I'd love to see a world where people grow up tinkering with software, so they can have an informed stake as a user instead of being a blind consumer.


> All the time, when those artists have outside investors who are investing to make a profit off of their work.

Prioritizing commercial gain over intellectual and creative stimulation and enrichment is, on that note, one of the biggest issues I have with the modern music industry. My views don't change much if you replace "music" with "film" or "television" or "journalism" or, to the point, "hardware" or "software".


I agree that software has gotten overly complex relative to the benefit it delivers, and Kubernetes is a good example. But it will improve again.

You mentioned Ruby and Django... the popularity of those in the Aughts was a swing back towards simplicity from overly complex Enterprise Java. Remember?


I wish smaller shops had some smaller role model to follow instead of your Facebooks or Googles. No, we don't need the same tools they use, because they use those tools to deal with deep organizational and technical complexity. If you don't have deep organizational complexity, and especially if you don't have separate data stores, you don't need microservices.


That’s exactly my philosophy in the small shop I founded; it’s a real struggle to educate both developers and clients.


It's easy to do, too. Just make boring software. C# is fine. SQL server is fine. A monolith is fine. That's it. (Replace with your favorite language.)


Making boring software is easy. Getting other people to pay for it is harder in a world where everything needs to be Kubernetes, React, AI or Blockchain.


Are you an agency? I'd love to hear your experience with this problem. As a contractor I find it very hard to find clients with "sane" stacks to work on. Even early stage companies nowadays (who would typically get contractors to build their MVP) seem to already settle on an overcomplicated mess despite their scale not warranting it. Maybe I'm just looking in the wrong place.


Yes, we are an agency, so we are fortunately able to choose the stack, but we still have to convince clients (hopefully it gives us an edge on pricing). The reality is that clients look for buzzwords, because that’s what they hear and they want the industry standard; can’t blame them. From what I see in our niche, what we are choosing to do is the exception, so indeed I imagine it must be hard as a contractor to find simple stacks to work on.


You don't need a backend server to make a smart doorbell that pings your phone over WLAN. You only need one if you want to add loads of non-trivial functionality for non-technical users who might not be on the same network. You're welcome to build your own device with a wifi-capable microcontroller and a simple phone app directly accessing the socket interface.
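
As a rough sketch of the doorbell side, assuming an ESP32 running MicroPython and a phone app listening on a known UDP port on the same LAN (the SSID, password, phone address and GPIO pin are placeholders):

    # Minimal sketch, assuming an ESP32 running MicroPython and a phone app
    # listening on UDP port 5005 on the same LAN. SSID, password, the phone's
    # address and the GPIO pin are all placeholders.
    import network
    import socket
    import time
    from machine import Pin

    PHONE_ADDR = ("192.168.1.50", 5005)   # assumed known/static phone address
    button = Pin(4, Pin.IN, Pin.PULL_UP)  # doorbell button on GPIO 4

    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    wlan.connect("my-ssid", "my-password")
    while not wlan.isconnected():
        time.sleep(0.1)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        if button.value() == 0:            # active-low: button pressed
            sock.sendto(b"ding-dong", PHONE_ADDR)
            time.sleep(1)                  # crude debounce
        time.sleep(0.05)

The phone side is just a UDP listener. All the backend-server machinery only becomes necessary once you want it to work across NAT, from anywhere, for non-technical users.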


Nothing wrong with being excited by the progress made but I think engineers (well, especially myself) tend to be critical of things because it is in our nature. We see what they could be, not what they are.

I think vocalizing how bad the current situation feels is the only path to improvement.

You seem convinced that things will get better. I don't think all of us are that optimistic (and I am generally an optimist!).



