Linux: Seg-fault in the gcc toolchain. After casting about for several days, we couldn't even find someone willing to take our money to fix it. Digging into the code (ld), it's so cryptic it may as well be written in undocumented assembler. Responses from mailing lists and forums: "Go back to Windows", "Restructure your project to avoid that problem", "Why are you complaining, fix it yourself, it's open source", etc. We couldn't submit a test case, because it would have required making public a lot of code containing trade secrets.
Solaris: Seg-fault in the Solaris compiler toolchain. We called Sun, and were talking directly to dev tools engineers within minutes. They put the case on 24 hour support and had a patch for us to test a day later. We could submit a reproducible case because Sun accepted the code under NDA.
Not the kind of thing that shows up in feature bullet point comparisons, but when you have 30 developers held up on one side and the clients need the code to calculate their bid for an eleven-figure project (as in this case), it makes a difference.
In other words, Sun is at a disadvantage, since one of their main selling points requires an understanding of IT risks that most people, unfortunately, do not have. IT guys may think linux is 'cheaper' but they don't have their end-users' acute desire to minimise the already-remote chances of some random problem costing them millions in commission. Edit - it is also an unfortunate reality that being able to pin the problem on Sun (or Microsoft, etc) can save your career in a situation like this. There's no-one to blame if you are using open source and it goes wrong. The buck stops with you not just for your code, but for the code of any free tools you are using. I wish that wasn't how it worked, but there it is.
Plus, if you stay 'on piste' doing stuff that has been proven to work (you could hardly argue against Linux + apache for example), you have a relatively small chance of stepping on these engineering landmines.
If you pay for support, you get support. I doubt that there would have been much difference in your experience if you were paying for a support contract with one of the larger Linux distributors, like Redhat or Novell. Linux doesn't mean you're stuck without vendor support, although that's a common misconception.
I don't know how Sun's support compares to the various commercial distros, but comparing an unsupported Linux install to a supported Solaris install isn't an apples-to-apples comparison.
What would be interesting, however, would be a comparison of a supported Linux with a supported Solaris.
Then again, once you've paid for Linux support, what is the price difference between it and Solaris? What is the price difference between a supported Solaris install and a supported Red Hat install? Is Red Hat significantly cheaper? Does its less restrictive licensing bring any advantage? Does it have features that compare with Solaris?
In addition to Red Hat, Novell and supported Ubuntu are also worth talking about. Ubuntu may be free, but support licenses sure aren't.
You could also then compare supported Linux distros to unsupported Linux distros, but that's a whole other box of worms.
Indeed, you can buy Linux support from Red Hat and other companies. But this rings hollow - they do not provide the level of support that Sun or Microsoft does - try getting them or anyone else to fix an obscure bug in the GNU linker, especially if you're not using their distro (even if you want to pay). If you think you're going to convince the likes of prickly GNU toolchain developers to prioritise the development or release of a patch your company depends on, guess again.
And those support contracts cost money, just like Sun/MS support contracts. And as soon as you start relying on software that isn't in their distro, you rapidly run into uncharted territory. The Linux meme of 'you can get support' is, I think, promoted mostly by people who haven't worked on critical IT systems, or have done so without understanding their responsibility.
That's not to say Sun's engineering is better than OSS (after all, we got their compiler to segfault as well as gcc, and my BSD/Apache shared hosting has had better uptime than my landline and TV for the last few years, and I'm sure many people want to set fire to the source code for NFS). Linux and other OSS provide the bulk of features that most people use. And you're probably better off with an unsupported apache server than some of the hideous commercial webservers that used to be around.
The bottom line is Sun charges a lot of money to mitigate somewhat nebulous risks that are hard to understand and communicate, and their OS has no killer features. They have also mastered the art of alienating potential enthusiasts. All of which put them on shaky ground. It is a shame, because platform diversity is one of the best defences of the internet as a whole.
(If you have access to dtrace, learn it. Understanding what it does and being able to use it will make you a better and probably richer developer).
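For anyone who hasn't played with it, a couple of the stock one-liners give a feel for what it does (a minimal sketch, nothing specific to the systems discussed above; the pid is made up):

    # count system calls by process name across the whole box
    dtrace -n 'syscall:::entry { @[execname] = count(); }'

    # watch which files a particular process opens (attach to a pid)
    dtrace -n 'syscall::open*:entry /pid == $target/ { trace(copyinstr(arg0)); }' -p 1234

Being able to answer "what is this box actually doing right now?" in one line is most of the appeal.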
As I said, I have no clue on how Redhat's paid support compares to Sun's, although I know they hire lots of developers working on things throughout the entire stack, so I'm assuming that if you paid enough they'd set one of their developers on your problem. I'm not qualified to judge how helpful they are though.
Interestingly, I seem to have the entire opposite experience with unpaid support. If I asked nicely, I quickly got help about where to dig into the software and what to look for when fixing stuff. The programmers on the projects were quick to respond with requests for more information and suggested fixes, and often the problem was solved within a day or so. All I did was poke them on IRC.
A better question is - is SPARC on its deathbed? If yes, then Solaris (as we know it) probably is too.
Answer is probably only "maybe" at this stage. The SPARC/Solaris combination (usually w/ Oracle) is a mainstay in the financial industry, so the current climate might be a significant hit for Sun.
SPARC deployments are a fraction of Solaris deployments, which are in turn a small fraction of server deployments. That's for an entire computer architecture. SPARC has also been displaced by MIPS on the high-end and PPC on the low end of "pizza-box" and network infrastructure, and the embedded market seems to belong to ARM.
Financials and WebSphere farms definitely do still run Solaris/SPARC, but plenty of huge financials have also transitioned to RHEL.
Yeah I agree - I know they are not totally encumbered. Think it's more of a question of buoyancy. If SPARC deploys were to drop off it might be enough to drop below the waterline.
I guess I should have said that Solaris is actually being eroded on two fronts. On the x86 architecture they are under growing threat from Linux (and Windows). On the SPARC front Solaris is still a mainstay, but the SPARC market is being eroded... That is to say, if SPARC goes into serious decline, they are in trouble.
SPARC has a good multicore strategy, but it remains to be seen how this progresses. Again, a key market for this is financials (e.g. hedge funds like these architectures).
Solaris/SPARC is still deployed in a lot of scenarios where (1) midrange is required, (2) it's a vendor requirement and (3) there is security paranoia (e.g. Internet Banking, particularly in the DMZ). I'm not saying these can't be achieved with other architectures; these are just the correlated factors I've witnessed.
Unfortunately, most of these are holding positions rather than growth areas. You're right about RHEL in the financial industry - particularly as Oracle is now more at home with Linux. It used to be Oracle and Solaris going hand in hand (anyone remember Oracle's "Raw Iron" strategy?).
I have run both side by side, and still do run Linux for some customers and Solaris for others. Linux wins on ease of installation and apps available, Solaris wins on just about everything else (better threading, VM pager, etc).
If you have the source code to your application and it runs on both platforms, Solaris will run it better.
I don't doubt that for totally standard three-tier apps with no OS-specific customization, on a J2EE stack, Solaris will run your app "better". I strongly doubt it runs it better enough to cost-justify. Many of the largest web apps today scale horizontally, instead of trying to squeeze blood from stones in the VM system and thread scheduler.
I know that the plural of anecdote isn't data, but our experiences with Solaris at work seem to not be unusual.
Over the past ten years Sun has NOT had their story straight on the availability of Solaris x86 ISOs and support patches (formerly packaged as X_Recommended bundles). While I could download the x86 version and install it when I was considering trying it out for an important but non-funded project (replacing authoritative DNS servers for our domain), AFAIK the only way to install updates was to download individual patches from an FTP server and apply them one by one -- and there was no way to tell which applied or why.
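For reference, the one-at-a-time routine looked roughly like this (the patch IDs are invented here for illustration; the real ones came off the FTP server, with nothing to tell you which ones you still needed):

    # fetch a patch, unpack it, apply it with patchadd, repeat...
    unzip 118855-36.zip
    patchadd 118855-36

    unzip 119254-45.zip
    patchadd 119254-45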
I think CentOS is saving RHEL's market penetration... if CentOS (or equivalent) wasn't available to cover all of those little cases that nobody would ever fund in a business, all of the Linux expertise would move on to other distros. I think Sun might have figured that out now (OpenSolaris), but if not they have an even harder road ahead as they continue to lose more market share.
RE: x86 support for Solaris: How recent was your experience?
Re: CentOS: I agree, I am considering "upgrading" from CentOS 5.2 to RHEL 5.2 to get support for some servers. If CentOS didn't exist, I probably would have gone with Debian or Ubuntu on those servers when I set them up, and then I'd be looking to get support from Canonical.
The specific moment I'm referring to was in 2007. I believe it's still the case that you can not download patch clusters without a support account. I'd love to be proven wrong about this, though.
There was a previous time (circa 2002) when I was considering it for another company but they pulled the freely available x86 ISOs altogether. This lack of commitment on Sun's part means I can't trust the free stuff for test / low end, even if they started offering everything again today, because I no longer trust that they won't do the same thing again next week.
The Linux Foundation is funded by IBM, HP, and Intel. So I guess it's just an anti-Sun attack. It's just negative PR.
>>["The future is Linux and Microsoft Windows," says foundation Executive Director Jim Zemlin]
Stop predicting...Start working!!
Michael Dell once predicted a similar thing about Apple. But look at where Apple is today!! And look at Dell!!
What are the advantages of Solaris and RHEL in these settings over FreeBSD? Is it all about commercial support?
I've found FreeBSD to be a nicer environment from a few aspects - better man pages than the GNU world and less SysV-vs-BSD cruft than Solaris. But my perspective is based on my experience as an operator rather than any big-picture considerations.
The version of FreeBSD I installed to try ZFS loses all data on the volume after every reboot. It's like having a 146g RAID 1 SCSI RAM disk. :/
Granted, I know I picked a random version (7.0 RC1), but the difference in maturity between the platforms is astonishing. This isn't a big deal when it's something that might at worst take rebooting to recover from (say, an SMP-related crash), but with filesystems it's a little more crucial that they come through.
Zones are inferior to almost every other OS's virtualization/isolation strategy, a fact that Sun seems to be recognizing now. If virtualization is a key part of your IT strategy --- and it is for most large enterprises --- Solaris isn't your OS.
I've done more than "use" zones, but I'm not going to go into details; you can infer what you'd like from my background.
On the other hand, you didn't actually make any arguments here. All you did was assert that I'd never used the zones feature, make a point about something unrelated to zones, and then say that you made money with zones. Nobody is disputing that there is money to be made selling people Solaris instead of Linux.
I would at this point be more comfortable running applications under FreeBSD jails than zones, but, for obvious reasons, I would be much more comfortable running those same applications under virtualized Linux.
OK, I will expand on my original comment to give you a better idea of my perspective.
Zones are a useful tool because they provide the needed amount of separation (for me anyways) without a lot of overhead. They are portable to whatever the Solaris kernel is ported to (x86, x64, SPARC, and there is a PPC port being worked on).
A zone with /usr, /opt, etc. mounted read-only in the zone is more secure (assuming no security holes to bypass the read-only property) than a non-zone Solaris system, yet it works exactly the same way. I can compile something in the global (root) zone and when installed under /usr it is available in every zone, and if there is a security hole that involves writing to e.g. /usr/bin/ping, it will fail.
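To make that concrete, here is roughly what setting one up looks like on Solaris 10 (a sketch; the zone name, paths and NIC are made up). The default "sparse root" template loopback-mounts /lib, /platform, /sbin and /usr read-only from the global zone, and inherit-pkg-dir adds further read-only shares such as /opt:

    # zone configuration as a zonecfg command file
    cat > /tmp/web01.cfg <<'EOF'
    create
    set zonepath=/zones/web01
    add inherit-pkg-dir
    set dir=/opt
    end
    add net
    set address=192.168.1.50
    set physical=e1000g0
    end
    verify
    commit
    EOF

    zonecfg -z web01 -f /tmp/web01.cfg
    zoneadm -z web01 install
    zoneadm -z web01 boot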
Note that the kernel only loads one copy of each library, no matter how many programs reference or use it; this saves RAM compared to e.g. VMWare, and may reduce disk accesses if you have short lived processes as the library may already be loaded and resolved by the link editor.
You could duplicate this, of course, under any OS with a combination of NFS read-only mounts (loopback or over ethernet) and jails, although the administration overhead would be higher.
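On FreeBSD, for instance, that might look something like this (a sketch; the paths, hostname and address are invented, and nullfs stands in for the loopback NFS mount mentioned above - either gets you a read-only /usr inside the jail):

    # share the host's /usr into the jail tree, read-only
    mount -t nullfs -o ro /usr /jails/www/usr

    # start the jail (old-style invocation: path, hostname, IP address, command)
    jail /jails/www www.example.net 192.0.2.10 /bin/sh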
My reference to XVM (Sun's customized Xen) was to point out that if you don't like zones, you can still use "full" virtualization from Sun; it is not an either/or choice.
In my experience, Solaris zones and VMWare's virtualization offerings are the most solid and reliable virtualization solutions available (VMWare is quite pricey though). Zones are the best game in town when you need OS level virtualization.
There are plenty of things that suck about Solaris -- zones aren't one of them.
You're almost making an apples-to-oranges comparison here, albeit a comparison I begged you to make.
Solaris Zones aren't virtualization. They're an isolation feature that tries to find all the shared kernel namespaces between applications to present the illusion of multiple machines. "Zoned" applications share a running kernel instance, and share a number of kernel namespaces that are not carefully isolated.
VMWare images do not share kernels. Their entire running state can be frozen and shipped across a network (or marshalled out to an iSCSI SAN) on demand.
I think Solaris Zones are a pretty crappy answer to "virtualization". It's basically just a stronger version of chroot. It's inferior to VMWare-style virtualization on security (all zones on a single Solaris instance are vulnerable to the same kernel flaws, and kernel flaws have been the majority of Solaris security issues over the past several years), and they're inferior on management and logistics.
As others have said, VMWare is virtualization and Zones is not. Solaris Zones provides a high degree of isolation that is sufficient for the vast majority of cases that Xen is being used for, with virtually ZERO runtime overhead, simple and fast configuration, and streamlined maintenance. If you need more isolation than Zones offers then you probably have to skip Xen and go with a fully virtualized solution like VMware or similar. The cost of that extra isolation is a notable increase in runtime overhead, setup effort, and maintenance cost.
Things an enterprise gets with Xen/VMWare that they don't get with Zones:
* A security model that extends through the kernel
* A performance and resource sharing model that extends through the kernel
* Push-button migration
* Support for anything other than Solaris
* "Hardware"-level suspend/resume
* Centralized management
I can go on and on about the security implications of Zones (and Jails) --- I don't think this model is well thought-through. But on the feature-list alone, Zones (and Jails) are a pale shadow of what the "mainstream" OS's offer today.
What do you mean by "security model that extends through the kernel" and "A performance and resource sharing model that extends through the kernel"?
I don't believe that most people need the suspend/resume/migration feature. If you have a cluster that can handle system failure then you can easily migrate a zone the same way you would deal with a failed system.
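For what it's worth, that kind of cold migration is just Solaris 10's detach/attach (a sketch; the zone name and path are made up, and the zonepath has to be copied over or live on shared storage):

    # on the old host: stop the zone and mark its zonepath as detached
    zoneadm -z web01 halt
    zoneadm -z web01 detach

    # copy /zones/web01 to the new host (or re-import it from the SAN), then:
    zonecfg -z web01 'create -a /zones/web01'
    zoneadm -z web01 attach
    zoneadm -z web01 boot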
Anyway, I agree that VMWare/Xen offers important features for pausing and moving running applications. I use those features of VMWare every day. But, most people will do very well with Zones because they don't need and won't use and didn't learn and don't want to pay for the extra features that VMWare offers.
Again: any Solaris kernel vulnerability likely allows a non-root zone to compromise the root zone. There are other real and potential problems with pretending that kernel security is just about the filesystem namespace and some additional access control on the process table, but "one kernel memory corruption bug costs you the whole server" is a simple enough security problem to get your head around.
VMWare does not have this problem --- you need both a kernel fault (not rare) and a hypervisor fault (quite rare) to take over a whole VMWare server.
You can say "most people don't need" the features Zones don't offer, but I see my clients using them, and expect they'd mention them immediately if asked why they use VMWare.
Very few people will do well with Zones, because very few people still deploy Solaris. The choice between shelling out for Sun gear and shelling out for ESX is a no-brainer.
Around '95, I was a college student with a brand new Pentium-90. I had read about this Solaris thingy that could run Windows apps and thought, "Hey, I'd love to try that out..I have 64 megs of RAM!" So, I called Sun to ask about getting a student copy. $600! I was a sophomore in college. When I asked the Sun salesperson why I couldn't have a copy for $50, she explained that they'd be losing money that way since they had to recoup their R&D costs! My argument that I wouldn't be purchasing a copy for $600 anyway fell on deaf (dumb?) ears. Right after that, a friend helped me install Linux -- no small feat, as a kernel hack was required for network connectivity in the 0.9 kernel!
To this day, I have never owned a computer that ran Solaris. However, I work on a project with over 500 Linux production servers. Serves Sun right for not being nice to me when I was nineteen.
Interesting read. Especially considering the school of engineering at my university just replaced its last Solaris station in favor of some flavor of Linux.
I manage a number of Solaris servers for a major university, and we face the decision of switching to RHEL every budget cycle. We are already moving some over piece by piece.
I will note that OpenSolaris is promising, and tools like dtrace and zfs, if positioned correctly, could give Solaris better leverage than they currently do.
RHEL is shockingly expensive for what you get. Either you do what they approve (in which case you won't really need their support, so you might as well be using CentOS) or you don't (in which case you won't get any support, so you might as well be using CentOS). But RH has the mindshare among developers; if you need to run something like Oracle on x86 then your options are RHEL or Windows.
Solx86 is free and of very high quality (far better than RHEL in package management alone!). But application vendors (e.g. again, Oracle) treat it as a tier-2 platform. So in many cases, it's just not an option.
I'd be more inclined to use Solaris if there were a virtual machine image I could try out from my present platform of choice. Why doesn't Sun make those available? And without annoying promotional crapplications and trialware asking me to buy when I'm just checking stuff out.