If I had a time-traveling magic wand, the one Software Thing I would wish for would be that native, cross-platform toolkits had won the war, rather than the whole industry punting and declaring The Web Browser (or these super-high-level game engines) to be the target platform. For decades I hopelessly thought the OS and hardware gaps would eventually be bridged by something like Qt or SDL or wxWidgets, and we'd all one day be happily programming cross-platform apps using plain old native languages and SDKs instead of Electron or the HTML/CSS/JS triad of pain. As the years go on, and OS vendors move even further toward their own proprietary, incompatible native APIs, this dream seems less and less likely.
Platform innovation requires control over your own API, because you want to expose the features and architectures that make your platform excel and that weren’t accounted for in abstracted tools. There will always be incompatible native APIs.
Meanwhile, a ton of apps and games are completely agnostic to those cutting edge platform differences and are going to thrive in least common denominator sandboxes. And making those sandboxes easy to use for some specific style/genre/skill-level is always going to be the competitive difference between them. So the big high-level things are always going to exist too.
But… so are the near-metal abstractions that let you cut through and interleave cross-platform and platform-specific code even in high-performance paths.
You wanted the last group to “win”, but the ecosystem inevitably involves all three. There will always be something like Metal, there will always be something like Unity, and there will always be something like SDL. Winning isn’t necessary.
> Platform innovation requires control over your own API, because you want to expose the features and architectures that make your platform excel and that weren’t accounted for in abstracted tools.
Yeah, I’m not buying that. It’s the story they tell you of course, but I think that’s a marketing lie.
Let’s be clear first that hardware is the platform. Your comment seems to agree with that. Note that for quite a long time, the Windows and Mac worlds used the same hardware (same CPUs, same GPUs), and therefore the same platform. They could have gotten together and specified a common API that worked on both macOS and Windows, and they could both have exposed everything the hardware has to offer. Heck, if they really wanted to expose all the goodness the hardware has to offer, they would give us the actual data sheets. They don’t, for various reasons that are generally tied to "IP".
They tell us sweet words about innovation, but let’s be honest they just want to lock us in.
I was trying to respond to "Platform innovation requires control over your own API". The short answer is "no, it does not": look at CPUs; we just need their ISA to take advantage of any improvement.
In fact, the best way to expose any hardware improvements is to give us the data sheet. Gate keeping direct access to the hardware with an API effectively reduces user access to innovation.
One could criticise how I conflate hardware and platform. I’ll just note that all the goodness we’ve seen over the past 40 years was made possible by hardware. Personally I’ve seen precious little innovation coming from software specifically. So even if a platform is more than just hardware, actual innovation mostly comes from hardware anyway.
What would make such a platform less of a compromise than a web browser?
How would programming in C++ be less pain than programming in JS / HTML / CSS? At the very least, JS code won't write past array bounds, or smash the stack.
From relevant olden times, Lisp and Smalltalk environments were closest to the ideal. They were expensive though, and nobody distributed them for free, as Netscape did with the browser. They also notably lacked any protections against untrusted code. But worst of all, they'd likely run even more poorly on consumer PCs circa 1995.
So, enjoy Typescript, V8, flexbox, canvas, web workers, etc. You could end up having a worse deal.
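The bounds-checking point made above can be shown concretely. This is a toy sketch using Python as a stand-in for any bounds-checked managed runtime (JS behaves analogously with its own error semantics); the helper name is mine, not from the thread:

```python
# Toy illustration of the memory-safety argument above: in a managed
# language, an out-of-bounds access raises a catchable error instead of
# silently overwriting adjacent memory as an unchecked C/C++ write can.
def read_past_end(buf, index):
    try:
        return buf[index]
    except IndexError:
        # The runtime bounds-checked the access for us.
        return None

data = [10, 20, 30]
assert read_past_end(data, 1) == 20
assert read_past_end(data, 99) is None  # no smashed stack, just an error
```

The equivalent unchecked read in C is undefined behavior; here the worst case is a well-defined exception.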
> How would programming in C++ be less pain than programming in JS / HTML / CSS? At the very least, JS code won't write past array bounds, or smash the stack.
A native ABI doesn't mean you have to use C++ though. I can use Qt from Python if I like, or even from the JVM (slightly fiddlier, but doable). I can't do that with the browser.
> nobody distributed them for free, as Netscape did with the browser. They also notably lacked any protections against untrusted code.
The JVM avoids both those problems though - it had a robust security model and was distributed for free. What killed it was that corporations refused to install Java Web Start on their computers because it's a scary "application runtime". But they would happily install web browsers because that's just a "document viewer". Even though they both do the same thing!
I believe that aforementioned “robust security model” was removed several years ago due to issues with its actual robustness https://openjdk.org/jeps/411
If you read your link, it was removed largely because the things that used it (such as Java Web Start) had been removed, which was more because they failed in the market than because its actual security record was particularly poor.
(Yes, there were occasional sandbox escapes, but there are occasional sandbox escapes in web browsers too. Few security mechanisms are perfect)
> A native ABI doesn't mean you have to use C++ though. I can use Qt from Python if I like, or even from the JVM (slightly fiddlier, but doable). I can't do that with the browser.
WASM is that ABI for the browser. Yes, it would make everything a bit slower, but I'm fine with that given the added security.
If we reach the point where a WASM-only app is a first-class citizen and I can write an app that doesn't have to touch HTML/CSS/JS (doing the UI with canvas or whatever), I'll be happy. We're not there yet though.
I think people really underrate browsers. The browser standards are open and have multiple open source implementations. People associate browsers too much with annoying trashy ad-based and other questionable websites to see how good they are themselves.
Electron has an annoyingly heavy download size but it's not the only option for native releases of web-based apps. Windows and some other OSes have built-in browser widgets that can be used with Tauri.
> The browser standards are open and have multiple open source implementations.
The browser standards are open only in name. The sad fact is, implementing those standards is flat-out impossible if you’re not a megacorp. They’re just too damn big: I recall someone counted more than a hundred million words.
Now, using those standards is easy; you can implement a subset. But the number of browser engines that actually support enough of those standards will only decrease.
I'm not really sure how long we will have multiple implementations. And we certainly won't get any new ones: we're stuck with the three we have, and can only hope the two non-Chrome ones will survive.
That's exactly what people said about IE over 20 years ago. History has proven this reasoning untrue. Web isn't going anywhere. If there's an opportunity to build something 10x better than Chrome, it'll be shipped.
> If there's an opportunity to build something 10x better than Chrome, it'll be shipped.
There won’t be. Since IE6 the standards have grown to inhuman proportions, and implementing a new browser engine is even more difficult than it was then.
I'm sure that with enough dedicated and enthusiastic people, something better than Chrome can be implemented. Though whether it would survive at all is another question. Chrome has an effective stranglehold on the market, so for anything else to succeed it will take political will rather than development effort.
Browsers are absurdly well-optimized for performance. If you know how to tap it, you can make screaming-fast apps of various kinds, with top-notch graphics, font rendering, accessibility support, audio, video, etc. They also have really solid networking capabilities, as long as you don't need raw TCP or UDP. In particular, HTTP/2, HTTP/3, WebSockets, and WebRTC allow for a lot of advanced things.
By now, you also have WebGL and WASM, if JS's JIT is not fast enough for you.
I love how people use hyperbole such as "screaming fast" and yet a native application that's not even all that optimized will tend to run absolute circles around these "screaming fast" solutions.
How are we supposed to describe these native apps? What's faster than screaming fast? ear shatteringly quick?
> I love how people use hyperbole such as "screaming fast" and yet a native application that's not even all that optimized will tend to run absolute circles around these "screaming fast" solutions.
And you of course have nontrivial examples to prove that? Or as always source: trust me, bro?
This is like asking to prove the sky is blue on a sunny day. Walk outside and you'll see it. I was talking to friends about how we've forgotten how fast computers are, because all we see are web pages and Electron applications. People don't even remember the wonder of native applications.
Try building an operating system or browser engine in Javascript and you'll see what the parent is saying. I'm just giving you these examples because these are some of the last remaining native applications everybody still uses, but pretty much any native application will be much faster than the Javascript version. The reason browser wins is that we got to a point where the performance is "good enough" and the development cost is significantly lower.
I've definitely seen some really well optimized web targets. Unfortunately that is not the common case in my experience currently.
That said the WebGL/WASM stuff is generally very nice in my experience and is very much changing my opinion. I'm interested to see what comes in the future!
For that to happen, OS vendors would have actually had to care about sandboxing and security, to enable local execution of completely untrusted code without any gatekeeper. It's their complete security failure, still continuing today, that forces everyone to the web.
The other, slightly less important thing is petty rejection of cross-platform APIs (e.g. Apple's refusal to allow Vulkan support in macOS). It's fine to additionally have platform-specific APIs, but there should be a least common denominator cross-platform standard. But middleware can smooth over this problem, while the security problem is something only OS vendors could fix.
Unfortunately, the position of gatekeeper turned out to be so profitable that vendors don't actually want to improve their security to the point where it's unnecessary. And they're also incentivized to prevent the web from improving to the point where it would threaten their gatekeeper status.
Even without the time traveling, I would be happy if there was just a single stable, non-bloated, reliable, portable platform that could be used for when you just want to Write Once and then know that it will Run Everywhere _forever_ (* insert disclaimer about nothing literally lasting forever). Not something that rolls out breaking changes every six months. Or six years for that matter. Would not even have to be an entire API, just a clear declaration that a subset of some APIs will never change, and some tool to verify that my code did not accidentally use any of the other parts of the API.
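The "tool to verify that my code did not accidentally use any of the other parts of the API" wished for above is quite buildable. A hedged sketch, assuming a hypothetical frozen allowlist of names (the `STABLE_API` contents here are invented for illustration, not any real frozen subset):

```python
# Sketch of a frozen-subset checker: walk a module's AST and flag any
# direct function call whose name is outside the declared stable subset.
import ast

STABLE_API = {"open", "print", "len"}  # hypothetical "never changes" subset

def unstable_calls(source: str) -> list[str]:
    """Return names of called functions that fall outside the frozen subset."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id not in STABLE_API:
                flagged.append(node.func.id)
    return flagged

assert unstable_calls("print(len('hi'))") == []
assert unstable_calls("exec('x = 1')") == ["exec"]
```

A real checker would also need to handle attribute access, imports, and dynamic dispatch, which is where such tools get hard; the static-call case is the easy 80%.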
Unfortunately running things in a browser is no guarantee, even for those that would otherwise consider that a good option.
Web browsers are remarkably backwards compatible. 20 year old websites continue to work fine.
The things you linked are only advised against for new code:
> These features are likely stable because removing them will cause backward compatibility issues and break legacy websites. (JavaScript has the design goal of "don't break the web".) Still, they are not cross-platform portable and may not be supported by all analysis tools, so you are advised to not use them [...]
They are also typically browser-specific extensions that were never cross-platform in the first place, features added based on proposals that were not in the end accepted (such as the Object.observe/unobserve API), or features from the Old Times™ before the specs were fully defined (and therefore typically also not cross-platform).
You've also got a bunch of deprecations for things that were in the spec, will almost certainly be supported forever, but are now seen as bad API design for one reason or another - usually because they don't handle edge cases correctly for historical reasons, or the name doesn't reflect what the function actually does. Unless any of these features actively leads to a security issue, they're very unlikely to be removed.
20 year old browsers don’t work at all, though. You can’t browse any of the top 100 sites, and you won’t be able to download an old release of firefox with your old version of internet explorer, because SSL.
You can get closer by limiting the depth and breadth of the API, for example by using VT100 and the I/O operations from the C standard library.
That is ‘a bit’ minimalistic, but it is “just a single stable, non-bloated, reliable, portable platform that could be used for when you just want to Write Once and then know that it will Run Everywhere _forever_ […] Would not even have to be an entire API”, and it could run on hardware that has no chance to run Win32.
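The VT100-plus-stdlib idea above is concrete enough to sketch (in Python here rather than C, purely for consistency with the rest of the thread's examples): these escape sequences were standardized decades ago and still work in essentially every terminal emulator, using nothing beyond ordinary character output.

```python
# Minimal VT100/ANSI terminal control using only plain character output.
CSI = "\x1b["          # Control Sequence Introducer

def move_cursor(row: int, col: int) -> str:
    return f"{CSI}{row};{col}H"          # CUP: position cursor at row;col

def colored(text: str, code: int) -> str:
    return f"{CSI}{code}m{text}{CSI}0m"  # SGR: set color, then reset

# Draw "hello" in green (SGR 32) at row 3, column 5 of the terminal:
print(move_cursor(3, 5) + colored("hello", 32), end="")
```

Code written against this "platform" in 1985 still runs unmodified today, which is exactly the stability guarantee the comment above is pointing at.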
I don't have experience with .NET Core, but working on a cross-platform Java desktop application has me pretty convinced that "write once, run everywhere" is a pipe dream even before you add on "forever". It's maybe more or less fine for small, simple applications, but eventually you'll run into something that doesn't work the same on Windows as on Mac, and you'll start having to write platform-specific workarounds. At some point you'll find yourself reflecting into platform-specific JVM internals to work around some bug or another. Then an OS update will break something. Then a JVM update will break something.
I must admit I never looked at that ecosystem. Does it just happen to have been quite stable or is it a serious design decision they made and are sticking to?
From a quick search I do not get the impression that .NET has been deprecating parts of the API, including Core APIs, in the past, e.g.:
.NET was pretty stable. I remember porting old .NET Framework 4.x MVC web app to .NET Core 2, and then to .NET 5. Both times it took less than an hour to port. Old directory structure and APIs still work even if they are not the new hot way to do things. Microsoft is known for backward compatibility.
Open source, powered by Skia, backed by JetBrains, and quite battle-tested at this point for small to medium-sized apps. In theory perfectly capable for enterprise as well, since it's basically a spiritual successor to WPF, which has been an industry standard for about 15 years.
They're diving into mobile and WASM as well, but that's more of a recent effort and I haven't tested it yet.
Electron would be great if it weren't for the performance, security, configuration, and packaging issues. The latter two seem to be what OP suffered the most.
HTML/CSS/JS (and the frameworks on top of them) seem like a pretty low bar for building games and business logic for a variety of apps which, despite huge efforts from OP, could run on pretty much any modern platform.
Native is the only way to go. The problem with the mythology of cross-platform (pseudo-)development is having to know both the underlying platform AND the abstraction layer. This explains why RubyMotion, PhoneGap/Cordova, and Appcelerator/Titanium were flops, and why Microsoft's MAUI will be one too.
There is nothing for free. Abstractions cost performance and confuse troubleshooting.
Flutter is actually pretty close to this right now. I'm building an app that targets Windows, Mac, iOS and Android and so far it's working really well on all of them with more than 90% code reuse.
If Google doesn't give up on it I think it's going to be a much better stack for cross platform applications than the browser is.
Flutter has a multitude of problems, and Google's inability to support anything long-term is probably the smallest.
Flutter apps don't look or work like native apps, and the only people who will put up with that are people who have to do so because their enterprise mandates it. Flutter apps have horrible battery performance. Flutter apps are always at least six months behind what is possible with native toolkits and SDKs. Flutter apps use a language that literally no one other than Sass or Flutter developers actually want to use and that offers exactly no benefit over the dozens of other possible languages out there.
Flutter is Java Swing, but worse in pretty much every way.
No, so far I've had a great experience with it. But keeping up with new improvements and changes to the underlying platforms is going to require ongoing investment. Hopefully Google continues to think it's worth it.
If you think any of these will save you from the issues of cross platform development and platform specifics, in a different way from what is described by the post, you are wrong.
You will still suffer with notarization and appleness, Android stuff being pressured by Play Store policies changing constantly, and every platform/store specifics, adapting controls, form factors, gestures...
You misunderstand me. I think Java's great. Java applets were quite bloated, more so than Electron. You can still make Java applets if you want, but people have moved on.
There's way more than enough room for both given how many million UIs get made. I think more time should be spent wondering why cross-platform toolkits aren't good enough. It's kind of lazy to point at the incumbent and say it's their fault for some reason.
Or framed this way: your dream exists and it's called Qt and can be used to make some absolutely fantastic applications[1]. What's deficient about it and why?