There is a huge assertion in there - that Windows isn't as adaptable to multiple different environments. I know that empirically this looks to be the case - we don't see the NT kernel running on gadgets - but is there actually any real technical reason stopping this? By that, I mean is there any barrier that a team of Microsoft engineers couldn't overcome in, say, a year?
It would be interesting to know what those limitations are if that is the case - there would no doubt be lessons to learn for all of us.
You know, NT was originally developed on the i860, a RISC chip from Intel that ended up mainly in embedded roles such as printer controllers, and once ran on AXP, PPC and MIPS in addition to x86. There was even a SPARC version. The decision to target only x86, ia64 and x64 now is purely a commercial one - people just didn't buy NT on anything else. And ISVs weren't interested in cross-compiling their software.
I actually used NT on AXP in the mid-90s. Kick-ass platform for SQL Server. But ultimately the momentum wasn't there.
Yes, but I believe those were all desktop systems or servers, weren't they? I was thinking more about limitations that make it difficult to adapt in resource-constrained environments such as we find in today's smartphones. We do see Windows used in places such as ATMs (I got a great photo of one rebooting on me the other day, Windows icon and all!). But even Microsoft don't seem to want to use it in their portable gadgets. I for one do not have a good understanding of the issues that make this type of adaptation difficult on Windows, or even whether it's a technical question at all, rather than a political one...
There is no technical reason that you couldn't run NT on anything you could run Unix on. It was architected from day 1 with a hardware abstraction layer (HAL) specifically for that. In theory, you port the HAL (assembly language), then the rest is just a recompile. There's more to it than that (the SPARC version suffered severe performance issues due to endianness), but that's the general idea.
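To make that concrete, here's a minimal sketch in C of what a HAL boils down to. The names here are invented purely for illustration - the real NT HAL interface is far larger - but the shape is the same: a table of platform-specific entry points, behind which the rest of the kernel is portable C.

    /* hal.h -- hypothetical, heavily simplified HAL interface.
       Invented names; the real NT HAL also covers interrupt
       routing, DMA, firmware calls, and much more. */
    typedef struct hal_ops {
        void          (*init_interrupts)(void); /* program the interrupt controller */
        void          (*mask_irq)(int irq);     /* silence one interrupt line */
        unsigned long (*read_timer)(void);      /* monotonic tick counter */
        void          (*halt)(void);            /* idle the CPU until the next interrupt */
    } hal_ops;

    /* Each port supplies exactly one of these, written in assembly
       plus a little C: hal_x86.c, hal_alpha.c, hal_mips.c, ... */
    extern const hal_ops *hal;

    /* The portable part of the kernel only ever calls through the
       table and never touches the hardware directly. */
    static void scheduler_idle(void) { hal->halt(); }

Endianness is the one thing that doesn't hide behind a table like this, which is presumably why the SPARC story was so much harder.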
I expect Dave Cutler remembered that VMS (at the time) could only run on VAX; the OS and CPU were developed in tandem, and VMS relied on some features not present on other CPUs (I don't recall exactly what offhand). It was a huge engineering effort to get it onto Itanic.
The main problem was that a single Alpha could replace an awful lot of VAX hardware, so to claw the profit back, DEC set license fees such that a VMS Alpha was about 3x the price of the same machine running NT.
We used to buy NT Alphas and install Linux - to compete with Sun Sparcs.
Wasn't the NT core the starting point for the Xbox team? And the original Xbox had fewer resources than today's mobile gadgets.
The only reason Windows hasn't scaled well into more segments is that Microsoft's bureaucracy doesn't allow it to be stripped to its core the way the Xbox team did. They insist on layering on the cruft, as with Windows CE.
I ran NT3.5 on a 16MB Pentium machine at one point and it ran comfortably, true.
I also ran System/390 on 32MB of RAM a couple of years before that. That's no indication that System/390 is appropriate in any way or manner for the embedded world.
Windows 7 is not easily portable to a phone; that's why Windows Phone 7, a CE derivative, exists: an entirely different codebase that MS has been maintaining for the last 15 years.
Cruft and legacy support. Windows machines are generally expected to run anything from the DOS era to the modern day without a hitch. Windows APIs, up until the .NET era, were very leaky abstractions that required a lot of legwork to properly sandbox.
Unix systems, on the other hand, have infrastructure for recompiling and reconfiguring applications for new architectures with very little effort from the end user.
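For example - a toy sketch, where HAVE_STRLCPY stands in for the sort of macro a ./configure-style probe would write into config.h - the usual Unix pattern is to test the target at build time and compile a fallback for whatever is missing:

    /* portable_copy.c -- illustrative only. HAVE_STRLCPY is the kind
       of macro an autoconf-style probe would define in config.h. */
    #include <stdio.h>
    #include <string.h>

    #ifdef HAVE_STRLCPY
    #define xstrlcpy strlcpy          /* libc already provides it */
    #else
    /* Fallback for platforms whose libc lacks strlcpy(). */
    static size_t xstrlcpy(char *dst, const char *src, size_t size)
    {
        size_t len = strlen(src);
        if (size) {
            size_t n = len < size - 1 ? len : size - 1;
            memcpy(dst, src, n);
            dst[n] = '\0';
        }
        return len;
    }
    #endif

    int main(void)
    {
        char buf[8];
        xstrlcpy(buf, "portability", sizeof buf);
        puts(buf); /* prints "portabi" -- same result on any architecture */
        return 0;
    }

The end user never sees any of this; ./configure && make does the probing for them.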
While the NT kernel was designed with support for multiple architectures in mind (hence the HAL), it doesn't really matter in the end. And I'm not saying that MS engineers are incapable of releasing the NT kernel for other architectures in a reasonable time. The real problem here is the userspace. For political/business/cultural reasons, there is almost nobody taking other architectures into account when developing their applications. Most vendors face difficulties even when porting programs to the 64-bit version of the same architecture. So why would anybody sane port the NT kernel to other architectures when all the userspace programs are practically unportable? Compare that with the world of Unix clones: there, portability isn't just theoretically possible, it's part of the culture.
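To illustrate the 64-bit point (a hypothetical snippet, but the bug pattern is the classic one): code written with 32-bit assumptions baked in breaks differently on LP64 Unix and LLP64 Windows, and a codebase full of this is what makes even a same-vendor port painful.

    /* Typical 32-bit-era assumptions that a 64-bit port trips over. */
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int x = 42;

        /* Legacy code cast pointers straight into ints; fine when both
           were 32 bits, but it silently truncates the pointer on a
           64-bit target. */
        unsigned int addr = (unsigned int)(uintptr_t)&x;

        /* 'long' is 64 bits on LP64 Unix but stays 32 bits on 64-bit
           Windows (LLP64), so code assuming either size breaks
           somewhere. */
        printf("sizeof(long) = %zu, truncated address = %#x\n",
               sizeof(long), addr);

        /* The portable spelling: an integer type sized for pointers. */
        uintptr_t full = (uintptr_t)&x;
        printf("full address = %#" PRIxPTR "\n", full);
        return 0;
    }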
I suspect the main advantage that the Unix family has in targeting these smaller environments is more the user-space than the kernel - init, the shell, the contents of /bin and so on.
It's pretty easy to configure a minimal but useful unix system without needing a GUI, graphics drivers, GUI configuration tools and user interface toolkits - look at one of the Linux-based router firmwares like DD-WRT for example. I don't think the existing windows user-space can be shrunk to fit on these small platforms, and if Microsoft built a new user-space to run on an NT kernel they would lose the developer familiarity that is probably the main selling point in the first place.
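For a sense of scale, here's a toy sketch of the kind of PID 1 such a firmware effectively boots into - illustrative only, not DD-WRT's actual init, and with error handling omitted:

    /* tiny_init.c -- toy PID-1: mount /proc, run a shell, respawn it,
       and reap orphans. Illustrative; real inits do rather more. */
    #include <sys/mount.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        mount("proc", "/proc", "proc", 0, NULL); /* kernel interface */

        for (;;) {
            pid_t shell = fork();
            if (shell == 0) {
                /* Child: the entire user interface of the system. */
                execl("/bin/sh", "sh", (char *)NULL);
                _exit(1); /* exec failed */
            }
            /* PID 1's other duty: reap zombies from orphaned daemons.
               Loop until the shell itself exits, then respawn it. */
            while (wait(NULL) != shell)
                ;
        }
    }

That, plus something like BusyBox, is most of the userland on those routers. There's no Windows equivalent of a userspace that small.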
"I don't think the existing windows user-space can be shrunk to fit on these small platforms"
You should take a look at Windows Server Core Edition. It has enough of a GUI to run Notepad and CMD.EXE and that's it. Everything else is done remotely.
This entire article is ridiculous. Even if it were true, it wouldn't be interesting, as very few smartphone users know what OS kernel their phones run, and fewer still care.
And even as a developer you won't get very far trying to develop applications for iOS, Android or RIM's QNX-based devices using the original Unix API.
It would be much truer to say that the rise of smartphones represents the "revenge" of ARM over Intel.
You won't get very far developing GUI applications for modern Unixes using just the original Unix APIs, either. And most Windows programmers today aren't using Win16 APIs. I think you're missing the point.
I've written a few iPhone apps. My background in Unix, and the Unix libraries I've been able to use without modification, were both extremely helpful.
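Illustratively (a hypothetical snippet, but representative): library-style code written against plain POSIX builds unchanged for an iPhone target and on Linux, which is exactly the reuse I mean.

    /* posix_everywhere.c -- plain POSIX threads; compiles unchanged
       on iOS, macOS, Linux and the BSDs. Illustrative sketch. */
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        printf("hello from thread %s\n", (const char *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, "one");
        pthread_join(t, NULL);
        return 0;
    }

cc -pthread posix_everywhere.c builds it on either side; no porting step at all.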