I think Shipley has missed the point of OS X sandboxing.
He's right that sandboxes won't directly prevent malware. But that's not why users want sandboxes. Sandboxes mitigate accidental vulnerabilities. Virtually every security vulnerability you've ever heard about has been an accident.
Sandboxes aren't a new idea (the sandbox code didn't even originate at Apple). The idea goes back to the '70s; an obvious and more recent example is the Java applet sandbox, and a somewhat more successful instance is Daniel Bernstein's qmail security architecture (qmail has one of the best security track records of any major software package).
It's best to think of sandboxes not as preventers of malice, but as one incremental form of forced developer competence: all developers are being asked to at least consider what privileges their applications need (most developers do not do this today), so that they can voluntarily renounce capabilities that would otherwise be turned against them when someone finds a heap overflow in their code later on. The technology and policy Apple is working with here will do a fine job of that.
Also, no matter what Bertrand Serlet said, sandboxes don't need to cover "as many instructions as are in the human brain". Because of OS X's Mach/Unix underpinnings, a sandbox has a relatively simple user/kernel interface it actually needs to deal with: there are just a few hundred system calls, many of them already privileged.
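To make that concrete, here's a minimal sketch of a process voluntarily confining itself, using the old C-level seatbelt API (sandbox_init() from sandbox.h, which predates entitlements and has since been deprecated). The profile name is one of the handful of predefined ones Apple shipped:

    /* Sketch: a process voluntarily dropping into a predefined seatbelt
     * profile via the C API in sandbox.h (deprecated, but illustrative).
     * Once sandbox_init() succeeds, the restricted policy is enforced on
     * every syscall this process makes from here on. */
    #include <sandbox.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        char *err = NULL;

        /* Confine ourselves to "pure computation": no filesystem
         * writes, no network, etc. */
        if (sandbox_init(kSBXProfilePureComputation, SANDBOX_NAMED, &err) != 0) {
            fprintf(stderr, "sandbox_init failed: %s\n", err);
            sandbox_free_error(err);
            return EXIT_FAILURE;
        }

        /* Attempts to open files for writing or create sockets now
         * fail with "operation not permitted". */
        puts("running sandboxed");
        return EXIT_SUCCESS;
    }

Roughly speaking, the entitlements mechanism sits a level above this: the entitlements an app declares get turned into a profile applied for it at launch, instead of the developer calling into the kernel directly.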
The application sandbox should be thought of like not storing credit card info on your system... you could do it, but you open yourself up to security problems. As a developer, you should opt into entitlements because it means that your application will be less likely to have a problem in the future.
This is also similar to dropping privileges in Unix: if you need root to start up, you drop it as soon as possible so that a compromised binary doesn't lead to root access for the malware.
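A minimal sketch of that pattern in C (the "nobody" account is just an illustrative choice):

    /* Classic Unix privilege drop: do the one thing that needs root,
     * then permanently switch to an unprivileged user so a later
     * compromise of this process doesn't yield root. */
    #include <pwd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void drop_privileges(const char *user) {
        struct passwd *pw = getpwnam(user);
        if (pw == NULL) {
            fprintf(stderr, "no such user: %s\n", user);
            exit(EXIT_FAILURE);
        }
        /* Order matters: give up the group first, then the user id;
         * after setuid() we would no longer be allowed to setgid(). */
        if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
            perror("dropping privileges");
            exit(EXIT_FAILURE);
        }
    }

    int main(void) {
        /* ... bind the privileged port, open the log file, etc. ... */
        drop_privileges("nobody");
        /* From here on, an attacker who takes over this process only
         * gets the "nobody" account, not root. */
        return 0;
    }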
They are like fire doors. They don't prevent the fire, they just limit the damage. Anything that encourages developers to adopt the sandbox model is good; however, I would say that requiring it everywhere is probably biting off more than they can chew. Furthermore - as Wil explained - being fast with the fire extinguisher is probably a better tactic than trying to monitor and disallow all things that may lead to fire.
>Furthermore - as Wil explained - being fast with the fire extinguisher is probably a better tactic than trying to monitor and disallow all things that may lead to fire.
I disagree here. In the case of humans, it doesn't cost much to put fire extinguishers at regular intervals and have people just use them in the case of a fire. If a fire starts, we'll notice it. A computer won't "just notice". You would have to have some virus scanner scanning every action all the time. This is what makes Windows so much slower, and I would really hate to see it come to the Mac.
I wish the SELinux approach would become more popular.
Being fast with a fire extinguisher is no substitute for fire prevention :-)
The point about all security models is that you need multiple redundant systems, not one silver bullet. My house is built to fire safety codes, the materials are tested and approved individually, we have fire alarms, fire extinguishers, a fire hydrant nearby and a fire department to use it, and we have fire insurance. Most houses are like this and houses still burn down.
Shipley's argument is that code signing is all you need. It's not quite the same as advocating fire extinguishers as the only line of defense against a house fire; it's more like simply relying on your builder's credentials.
A bit of confusion: when I said 'fast with the fire extinguisher', I wasn't talking about users but about Apple. Malware is only practical if you can infect hundreds if not thousands of machines. If Apple can effectively kill malware that is discovered, then it ruins the economics behind it. So to apply my somewhat lacking analogy: even though the systems are horribly vulnerable and there is nothing a user can or should do if a fire starts, nobody is out there going around starting fires, because Apple puts them out quickly, at some cost to the arsonists and before the fires do them any good.
I have the greatest respect (nay, love) for Mr. Shipley (who can forget his live Twitter coverage of The Bachelor), but hasn't web certificate revocation proven to be pretty much unworkable in practice? How would this be different? Entitlements sound similar to the code access security which Microsoft baked into the CLR, but which remains largely unused.
The problem with revoking certificates for websites is: how do you know when to check whether a certificate is blacklisted? You can't download a list of blacklisted certificates, because who would keep track of them all, and how huge would the database be? Some certificates include the URL of a 'revocation list' which allows the browser to check if that certificate has been revoked, but if the browser checks each certificate that it sees, then suddenly Verisign's revocation-list server becomes a single point of failure for half the Internet, and that's no good either.
Signing certs for Mac software would be different because there is exactly one central signing authority (Apple), so batch-downloading a list of revoked certs is feasible - bundle them with your monthly security patch updates, for example.
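Just to illustrate why a single authority changes the economics - this is only a sketch, not Apple's actual mechanism, and the serial numbers are made up - a locally bundled blacklist makes the revocation check a cheap offline lookup:

    /* Sketch: check a certificate's serial number against a sorted
     * blacklist that ships with periodic security updates. No online
     * query is needed at launch time. Not Apple's real mechanism;
     * the serials are placeholders. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Sorted list of revoked certificate serial numbers, refreshed
     * with each security update. */
    static const uint64_t revoked[] = { 0x1a2b, 0x77f0, 0xbeef01 };

    static int cmp_u64(const void *a, const void *b) {
        uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
        return (x > y) - (x < y);
    }

    static bool is_revoked(uint64_t serial) {
        return bsearch(&serial, revoked,
                       sizeof revoked / sizeof revoked[0],
                       sizeof revoked[0], cmp_u64) != NULL;
    }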
What? Your first paragraph concludes that if everyone is checking Verisign's CRL they become the single point of failure for half the Internet, yet you conclude that a better solution is to only check Apple's certs, making them the ONLY point of failure? I don't think you really understand how digital certificates work.
Downloading blacklists of certs is exactly what happens. They are called OCSPs and CRLs. The operating system is responsible for tracking these through a combination of caching and downloading new OCSPs/CRLs. The database does not grow too large, because CAs use certificate chains. The cert that is vouching for some website is not the cert in your trusted root store. Root store certs are kept under lock and key - these very rarely get compromised (this is what happened to DigiNotar, and they are no longer in business - their certs were invalidated by everyone in the industry and they were filing for bankruptcy less than a month later).
What does happen are leaf certs being compromised. When this occurs only that leaf and everything below it in the chain is revoked, so if you have enough certs in the chain you usually don't revoke many certs at a time. Generally speaking, the lower the cert in the chain the lower the trust level, which includes stuff like the cert not remaining valid for as long a period.
Just a nitpick, but OCSPs is not the plural of OCSP, and you don't 'download' it the way you do a CRL. OCSP stands for 'Online' Certificate Status Protocol, and is a query/response protocol that functions much like the DNS - you don't download the whole deal, you ask about the validity of a specific, single record, and get a 'yes'/'no' response.
OCSP is a privacy problem, as well as a bottleneck/performance problem for any large, non-organizational CA, such as Verisign and their compatriots. Every single SSL certificate must be checked at each handshake in order to make sure it hasn't been revoked in the past 'n' hours (there is some caching, which kind of defeats the purpose). This means that in practice, the CAs know who is visiting what sites, and that they must be online and active and responding in a timely fashion (like the DNS) before a browser will allow an HTTPS connection.
The real problem with revocation lists for websites is that there is no secure channel to transfer them over. You can't transfer them over SSL because you don't know which certs you shouldn't trust until you get the list.
If someone is already man-in-the-middling you with a cert you trust, they can easily filter the revocation lists that you download.
But you can know whether a CRL is valid, provided Apple's own CA hasn't been compromised. If Apple were to issue a complete CRL every 6 hours or so, then man-in-the-middle filtering won't work since clients can simply not trust any new signatures until they have seen the current CRL.
So the man-in-the-middle attack can keep the client ignorant of the contents of the CRL, but can't trick the client into believing a revoked certificate has not been revoked.
SSL is a red herring here, since we care about authenticity, not privacy.
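To spell out the freshness rule - this is just a sketch of the reasoning above, and the 6-hour window is the hypothetical figure from this thread, not anything Apple actually does - the client treats a stale CRL as a reason to distrust, not a reason to assume the best:

    /* If the newest CRL we've seen is older than the issue interval,
     * refuse to trust signatures we haven't validated before. A
     * man-in-the-middle can starve us of the CRL, but that only makes
     * us more conservative, never less. (6 hours is the hypothetical
     * interval from the discussion above.) */
    #include <stdbool.h>
    #include <time.h>

    #define CRL_MAX_AGE (6 * 60 * 60) /* seconds */

    /* last_crl_issued: "thisUpdate" timestamp of the newest CRL we hold. */
    static bool may_trust_new_signature(time_t last_crl_issued) {
        return difftime(time(NULL), last_crl_issued) <= CRL_MAX_AGE;
    }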
I would argue that real security requires a deeper approach like the one QubesOS takes - http://wiki.qubes-os.org/trac/wiki/SecurityGoals . Constraining and inconveniencing users for the sake of a false sense of security doesn't feel like something that needs to be attempted in 2012.
But to be fair - I think the hardware is only now starting to get fast and capable enough for doing security via virtualization. So up until now maybe OS vendors did not really have the luxury of thinking along those lines, but it doesn't hurt to start thinking now.
Apple could really leapfrog if they worked with the hardware vendors to make Mac OS X something like QubesOS but much more usable. On top of that, if they had a saner programming language that made it darn near impossible to make security goof-ups, real security would still have a chance!
It's probably not reasonable to compare a hugely popular general-purpose operating system to a research prototype designed specifically to demonstrate an extreme form of application segregation.
Similar logic would allow the "we-implemented-the-kernel-in-a-type-safe-runtime-so-we-don't-even-need-to-switch-out-of-ring-zero" crowd to say Qubes was inadequate.
QubesOS is Linux+Xen+added trickery on top, so calling that a research prototype may not be entirely correct. I have used QubesOS - besides hardware support and usability, there are no real inadequacies as far as I could tell from a general-purpose desktop OS standpoint - and both of those could be fixed by Apple, maybe along with some more hardware integration.
Sandboxing in the App Store is probably not intended to directly protect against malware; it is there to minimize the risk of programmer error. The application developer decides the restrictions the application imposes on itself. And while the current entitlements system seems quite weak compared to the normal interface to the sandbox, I think that the number of entitlements you can request will grow with time, and with user awareness. (The Scheme derivative that the configuration is written in right now is probably not fit for user consumption.)
The OS X sandbox has been around for a long while (and TrustedBSD has been around for even longer). It is of course not bug-free, but it is in use in a few exposed applications (like Safari), and deploying it App Store-wide will only make it even better.
I, as a programmer, sleep better at night knowing that my (and others') applications have safety nets that protect them from doing stupid stuff.
"Apple then has the power, if any app is found to be malware, to shut it down remotely, immediately"
Actually I don't think this will work. They can "shut it down" in the sense that they can keep new installs from happening. But if the malware is already installed it controls the machine, so it can simply ignore the "kill switch" for itself.
I don't know how Mac "apps" work, but if we assume the malware can gain full control over the OS, there's really only one thing they can do: "anchor" the checks in hardware.
The way it (supposedly) works is that you have a hardware chip that checks the BIOS, which checks the boot loader, which checks the OS, which checks the apps, etc. The goal is that you'd have to modify the actual hardware chip in order to "crack" the OS. Sounds great, but has a lot of challenges in practice.
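For what it's worth, the structure of that chain is simple to sketch. verify_signature() below is a hypothetical helper standing in for whatever signature scheme the platform actually uses; the point is only that each link is checked with a key vouched for by the previous link, starting from one burned into hardware:

    #include <stdbool.h>
    #include <stddef.h>

    struct stage {
        const unsigned char *image;       /* code of this boot stage         */
        size_t image_len;
        const unsigned char *signature;   /* signature over the image        */
        size_t sig_len;
        const unsigned char *next_pubkey; /* key this stage vouches for next */
    };

    /* Hypothetical helper: true iff `sig` is a valid signature of `data`
     * under `pubkey`. */
    bool verify_signature(const unsigned char *pubkey,
                          const unsigned char *data, size_t data_len,
                          const unsigned char *sig, size_t sig_len);

    /* Walk the chain: ROM key -> boot loader -> OS -> apps. Refuse to
     * continue the moment any link fails to verify. */
    static bool verify_chain(const unsigned char *rom_pubkey,
                             const struct stage *stages, size_t n) {
        const unsigned char *trusted_key = rom_pubkey; /* anchored in hardware */
        for (size_t i = 0; i < n; i++) {
            if (!verify_signature(trusted_key,
                                  stages[i].image, stages[i].image_len,
                                  stages[i].signature, stages[i].sig_len))
                return false;                    /* tampered stage: stop */
            trusted_key = stages[i].next_pubkey; /* extend trust to the next link */
        }
        return true;
    }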
I don't know what a Bertrand Serlet is, but the statement reads like bunk. The CommonSense-Ometer in my own brain says there's no comparison between the evolved complexity of the human brain and an operating system for modern digital computer hardware.
I think the claim is bogus not because the brain has a much more complex instruction set, but because the phrase 'instruction set of the brain' does not make sense.
If one had to express the complexity of a neuron in the size of its 'instruction set', the best guess may be 'one': "act like a neuron until you break down".
The complexity of the brain most likely is not caused by the complexity of its processing units, but by its parallelism.