> Cynics might of course argue that we have come full circle, from thick client to butt back to thick client, but that would miss the point of what "the edge" is all about: In a world of increasingly ubiquitous computing power we are well advised to reflect on where our computation happens and how we can make the most efficient use of the resources at our disposal.
Which is exactly why we cynics say we've come full circle.
Also, while the cyclical nature of client/server design seems to be a real thing, unfortunately the underlying ownership is not cyclical. For instance, a cycle or two ago, the "edge" computer meant software I bought, running on my machine. This cycle, it means software I have no stake in, running on a machine I lease and have little control over.
This is the part of the trend that's worrying to me. Cycles of thin/thick clients are irrelevant. Disenfranchising end users is a problem.
In case anybody else is confused like I was: there is a browser plug-in that turns every mention of “cloud” into “butt”. Parent apparently has this plugin installed, so the “to butt” part is really “to cloud”. (Users of said plugin will see me just saying butt a lot here :P )
I know I have it installed, which is why I never notice whether the extension is on (I have it in Chrome, but I spend half of my browsing time in Firefox these days).
Indeed, edge computing is for some people the answer to "privacy". Nowadays a native or mobile app, although running locally, often won't even launch without internet access because of the subscription licensing model, and one has no reason to believe the support and marketing claim that "all data stays local", unless one has debugged the network traffic with Wireshark or a similar tool... and it might change with any version upgrade.
Flash/Silverlight/HTML5/WASM - this, to me, is a story of a technology being useful, then growing to give too much power to web publishers - who tend to abuse any power they're given - then reverting to a weaker technology in order to unscrew the web. Rinse, repeat - HTML5 is already rapidly approaching peak Flash, and the introduction of WASM isn't going to help here.
SOAP/REST/JSON API - you could call it a story of simplification, but I don't get how it even came to be. That is, how XML gained so much popularity, given that simpler and better tools for almost all of its uses were already available and well known.
I suspect the reasons that XML ultimately lost are:
1. You could represent JSON as a nested structure of your language's standard lists and maps, whose APIs you already know and can operate on directly. For dynamic languages, this makes it much faster to stick a prototype together, even if it bites you in the ass eventually. But by that stage, you've already chosen JSON.
2. It was fewer characters to type by hand.
3. Particular uses of XML were extremely verbose. The S in SOAP stood for Simple, which looks ironic in retrospect.
4. In a time when payloads didn't routinely use compression, the closing tags alone could be a noticeable increase in size (see the quick size comparison at the end of this comment).
5. The vast majority of communication that now uses JSON didn't benefit significantly from XPath (people prefer to navigate data structures using their language features, not a generic API), namespaces, DTDs, XML Schema etc.
In about that order.
You could argue that much of this is superficial, and it is, but the industry has shown time and again that lowering the barrier to entry, even in ways that make little difference in the long run, usually wins out.
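A back-of-the-envelope sketch of #2 and #4, in Python (the record and its field names are made up, but the ratio is typical of uncompressed payloads):

    import json

    record = {"id": 42, "name": "Ada", "email": "ada@example.com"}

    as_json = json.dumps(record)
    as_xml = ("<record><id>42</id><name>Ada</name>"
              "<email>ada@example.com</email></record>")

    print(len(as_json))  # 53 characters
    print(len(as_xml))   # 74 characters, ~40% larger, mostly closing tags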
The sad thing about #5 is that XPath is epic, and trying to simulate its queries in almost any programming language without just implementing it is tons of grunt work :(. When I am trying to pick through some complex data structure, I still to this day find myself getting tired after writing a bunch of nested for loops, saying "screw this", converting the data to XML, and making short work of the problem with XPath (which only got better and better in subsequent releases of XSLT).
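To show the contrast, here's a minimal sketch using Python's stdlib ElementTree, which supports a small XPath subset (the data is made up):

    import xml.etree.ElementTree as ET

    # The nested-for-loop version over plain dicts and lists:
    data = {"orders": [
        {"status": "shipped", "items": [{"sku": "A1", "qty": 2}]},
        {"status": "pending", "items": [{"sku": "B2", "qty": 1}]},
    ]}
    for order in data["orders"]:
        if order["status"] == "shipped":
            for item in order["items"]:
                print(item["sku"], item["qty"])

    # The same query as a single XPath expression:
    doc = ET.fromstring(
        '<orders>'
        '<order status="shipped"><item sku="A1" qty="2"/></order>'
        '<order status="pending"><item sku="B2" qty="1"/></order>'
        '</orders>')
    for item in doc.findall("./order[@status='shipped']/item"):
        print(item.get("sku"), item.get("qty"))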
6. Too easy to accidentally reinvent Lisp via XML.
I thought it was funny the first time I ran into an instance of it happening, which quickly turned into horror. Not only was the logic split, but it meant mentally parsing something like this (a made-up but representative example):
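    <!-- hypothetical Ant-style control flow, for illustration -->
    <if>
      <equals arg1="${env}" arg2="prod"/>
      <then>
        <deploy target="prod"/>
      </then>
      <else>
        <deploy target="staging"/>
      </else>
    </if>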
Basically, it's a markup language, not a data transfer language, so it was far from ideal to use it as one. See for example the ambiguity between what should go into tag attributes vs. tag content, etc. The eXtensibility also makes things too complicated for many use-cases.
You don't have to care about any of this with JSON.
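For instance (a contrived example), the same record has at least two equally defensible XML shapes:

    <user id="42"><name>Ada</name></user>

    <user>
      <id>42</id>
      <name>Ada</name>
    </user>

whereas JSON has exactly one obvious shape:

    {"id": 42, "name": "Ada"}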
The X in AJAX is indeed XML. It quickly became apparent, though, that XML was too heavyweight and redundant for the browser (the XML DOM!) and even for the internet speeds of the time, while JSON is as simple as JS's object literal. Web 2.0 adopting JSON formed a critical mass, so that later (almost) everyone and everything went (almost) full JSON.
There's a word replacer extension I used a few years ago to implement XKCD's entire list of suggested replacements, to my great amusement. Due to my predisposition I consider great amusement to be of tremendous benefit to my well-being.