24 Solid State Drives Open All of Microsoft Office In .5 Seconds (gizmodo.com)
46 points by darragjm on March 9, 2009 | 44 comments



That gives some interesting possibilities for computers in a few years -- one relatively expensive drive for code, another cheap drive for data, for example.

Probably 80% of your hard drive is stuff where access speed is irrelevant -- movies, for example, since the disk's read speed far outstrips your eyes' ability to watch the movie. Photos, Office docs, email, ditto ditto ditto.

Then there are programs -- or even a subset of programs, really -- that actually have appreciable startup times. Office, Eclipse, etc, I'm looking at you.

You could put those programs on the "fast" disk (along with most of the OS, presumably), then extend capacity as far as you pleased with cheap spinning platters. With a little bit of software trickery, you could present the two disks as one logical drive to the operating system (or to the end user) and shift data between them using some sort of caching policy (LRU, whatever).
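
A minimal sketch of what such an LRU placement policy might look like (capacities, paths, and sizes below are invented for illustration, nothing from the article):

    from collections import OrderedDict

    # Toy sketch of an LRU placement policy for a two-tier (SSD + HDD) volume.
    # Capacities, paths, and sizes are made up for illustration.
    class TieredCache:
        def __init__(self, fast_capacity_bytes):
            self.fast_capacity = fast_capacity_bytes
            self.used = 0
            self.on_fast = OrderedDict()  # path -> size, ordered by recency

        def access(self, path, size):
            """Record an access; keep the file on the fast tier, evicting LRU files."""
            if path in self.on_fast:
                self.on_fast.move_to_end(path)  # mark as most recently used
                return
            # Evict least-recently-used files until the new file fits.
            while self.used + size > self.fast_capacity and self.on_fast:
                _, evicted_size = self.on_fast.popitem(last=False)
                self.used -= evicted_size  # demote back to the spinning disk
            if self.used + size <= self.fast_capacity:
                self.on_fast[path] = size
                self.used += size

    cache = TieredCache(fast_capacity_bytes=64 * 2**30)  # pretend 64GB SSD tier
    cache.access("/opt/eclipse/eclipse", 300 * 2**20)    # hot: program binaries
    cache.access("/home/me/movies/film.mkv", 8 * 2**30)  # cold: big media file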

It sounds like a sexy idea for servers too -- can't afford to keep the entire working set in RAM? No problem -- back up the RAM with solid state and only write to spinning magnetic media when you need long-term non-volatile storage.


That's the way to do it today too. In my machines, I use a smaller 10K RPM drive for OS/programs and a large drive or two for data. It does make a significant difference in performance.


Have you seen the startup Woz has started working with?

http://www.fusionio.com/Products.aspx


It would be neat to automate this by using the SSD as a file level cache.


Wow. Simply -- wow.

I was amazed to actually see the HD bottleneck minimized this way. I always knew about it, but -- wow.


Would be interesting to see a baseline against a RAIDx24 with conventional HDDs.


Agreed. I wouldn't go so far as to call it meaningless without something to compare it to, but I'd imagine conventional disks would still be very fast in that setup.


Yeah - and perhaps for certain scenarios it might be faster? (Pure conjecture on my part).

So - if seek time weren't a big factor, a fast conventional drive could be faster? And with a very high level of striping (i.e. 24 drives), seek time might be significantly mitigated.


Given the tested max throughput of the Hitachi 1TB SATAs (http://www.storagereview.com/Testbed4Compare.sr), you should be able to get ~2GB/s with 2/3 the drives (16 instead of 24) and nearly triple the storage space (16TB instead of 6TB). They're cheaper, too, of course. That's just theory, though. It would probably be difficult to get all drives "bursting" at the same time, but anyway, this kind of looks like a "jumping into jeans" viral published by Samsung.
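
As a rough sanity check of that claim (the ~125 MB/s per-drive figure is just what the comment's numbers imply, not a measured value):

    # Back-of-the-envelope check of the "~2GB/s from 16 drives" claim.
    # The per-drive figure is an assumption implied by the comment, not a measurement.
    per_drive_mb_s = 125   # assumed burst throughput of one Hitachi 1TB SATA
    drives = 16
    print(f"Aggregate: {per_drive_mb_s * drives / 1000:.1f} GB/s")  # ~2.0 GB/s
    print(f"Capacity:  {drives} x 1TB vs. 24 x 256GB SSDs (~6 TB)")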


I think this bottleneck is easily (and still amazingly) noticed when storing things in RAM rather than on disk. For instance, immediately after a restart of our production database it can take up to a minute to run certain queries. Give it a few seconds to load everything into RAM, though, and it's nearly instant. I've noticed the same thing when switching from file-based to RAM-based queuing.

Everybody thinks, "duh, RAM (or SSD or w/e) is faster than disk" but sometimes these sorts of real-world tests still amaze.


Now, as a software guy, can I stop hearing about how software "bloat" keeps absorbing all the hardware advances? If hard drives had been advancing, errr, at all, this would be considered normal performance.

And I daresay that in three or four years, this will indeed be considered "normal performance", if SSDs continue to develop.

All hail the SSDs! Maybe in a couple of years we actually will be able to feed our octo-cores some data to work on!


> Now, as a software guy, can I stop hearing about how software "bloat" keeps absorbing all the hardware advances?

Why should you stop hearing it? Hardware is getting faster and SW is using the capacity.

> If hard drives had been advancing, errr, at all,

They have been, just not uniformly. There have been huge increases in capacity. Less so in transfer rate, but still significant. Seek time hasn't been improving much and rotational latency is stuck.

In the same time, arithmetic and logical operations have essentially become free. Branch mispredictions are a big deal. Cache misses are a killer but memory bandwidth is about to become the biggest problem. (It already is for some people.)


When someone wants to talk about bloat, the odds of Microsoft Office being trotted out are around 80%.

Yet here we see that on a modern system that is special mostly in the I/O department, Microsoft Office as a whole opens in .5 seconds, which, believe me, is radically faster than Office 95 opened back when it was still cutting edge.

Yes, it was a powerful system, but the only thing that was radically more powerful than what anybody can buy for cheap was the I/O system.

Maybe it's "bloat" in the abstract if programs are simply "larger" than they used to be, but really, who cares about bloat if hardware is actually outpacing software development... in everything but the I/O hardware.

I'm just a bit tired of people babbling on about software bloat when this pretty conclusively proves it's the I/O subsystem that is the problem.


> I'm just a bit tired of people babbling on about software bloat when this pretty conclusively proves it's the I/O subsystem that is the problem.

This proves nothing of the sort. (Office 95 was bloated too....)

The time to boot is large because software is huge, scattered all over the disk, and has unnecessary dependencies.

It may be convenient to believe that everything fits in L1 cache, but doing so often results in slow programs. The same principle applies here. Some operations cost more than others. Software which uses slow operations will run slower than software that doesn't.

To put it another way, big O is the start, not the end.
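
For rough context on "some operations cost more than others", here are order-of-magnitude access latencies commonly cited for hardware of that era (approximate figures, not measurements from the video):

    # Approximate access latencies, in nanoseconds (rough orders of magnitude).
    latency_ns = {
        "L1 cache hit":      1,
        "main memory":       100,
        "SSD random read":   100_000,      # ~0.1 ms
        "HDD seek + rotate": 10_000_000,   # ~10 ms
    }
    for op, ns in latency_ns.items():
        print(f"{op:18s} ~{ns:>12,} ns")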


I wonder if there is a market for open desktop software that is designed to be as fast as possible and still get the job done.

Imagine a WYSIWYG editor that is actually 2^10 times faster than those made 15 years ago.

I think uTorrent has proven small can be loved.


It's actually quite clever marketing on behalf of Samsung, and impressive to see them using 'geek marketing' in this way.

However, given that a 64GB Samsung SSD goes for about $500, 24 of them (plus a RAID card for that many) is still looking quite steep!


Solid state is pretty new - I'd expect the price to come way down. I know I'm constantly looking at hardware from a few years ago, thinking, "We were paying hundreds/thousands for that back then? Damn, we've come a long way."


Reddit thread: http://www.reddit.com/r/reddit.com/comments/836i6/hey_reddit...

The IT guy from the video is a poster there, so there is some useful information if anyone is interested.


And then software will expand to make everything slow again.


Not if you consistently have $25,000 to spend on a computer.


It's funny how they never say how much RAM this computer has.

Vista has a feature called 'superfetch' which pre-caches applications in memory so they're never actually loaded from disk. Also, it's hard to tell, but if this computer has sufficient RAM, you could conceivably load everything directly from memory.

Also, the fragmentation test was a little dubious. A computer that new would have very little file fragmentation, so of course defragging would be fast.

That said, it was an entertaining video!


If superfetch explains that, then surely anyone with 4GB can replicate the performance within a factor of two or three. Can anybody with Vista replicate that? (I don't have Vista and have never used it, so I have no idea, though honestly, if Vista is superfetching 53 programs I'd be a little annoyed.)

Fragmentation test... well, it is a marketing video, after all. The real question is "why defragment an SSD at all?"


It did say - 4GB.

And yes, I totally agree (re Superfetch)... even with XP, after loading an app once, Windows was able to fetch the data again much faster the second time. They should have shown the tests run from power-on, through boot, straight into the tests.


Although I understand the awesomeness of this video, I don't get why a marketing company came up with it. The nerd community is already familiar with the potential of SSDs and their speed increase. I think the consumer market should be primed for the 'next-gen' in hard disks via a video that's not focused on RAID configs, 6 TB, a 2GB/s benchmark and so on. Focus on things like home video editing, or running office, photo apps, and a media player all at once.

More exposure, more familiarity with the hardware and advantages, more potential buyers.


Well... the video made me feel the urge to replicate the findings, at least at some scale.

So they may have sold an SSD or two.

But then again, I'd probably go Intel and not Samsung.

Still. They got people talking, and the video probably wasn't all that expensive to produce, so they got at least SOME value for their money.


Desktop performance with solid-state drives is less interesting to me than server performance -- specifically databases and other services where seek time is more important than throughput.

Has anyone done performance tests with MySQL on low-latency storage devices? As another commenter suggested for the desktop, strategies of using solid-state for latency bound activities and traditional disks for throughput intensive stuff might be an interesting hybrid to explore.


Currently, SSDs' limited number of rewrite cycles makes them unsuitable for that type of server application.


That's a myth. Enterprise flash drives are rated for years of constant writes.


The problem with those ratings is that you're only exposed to one side of the potential deviation from the mean. An MTBF without a standard deviation is meaningless.

To put those ratings in context, consider that a Western Digital VelociRaptor has a 160 year MTBF.
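
To put a 160-year MTBF in per-drive terms, assuming an exponential failure model (a simplification that ignores wear-out):

    import math

    # Probability that a single drive fails within t years, given the quoted MTBF
    # and assuming exponentially distributed failures (ignores mechanical wear-out).
    mtbf_years = 160
    for t in (1, 3, 5):
        p_fail = 1 - math.exp(-t / mtbf_years)
        print(f"P(failure within {t} years) ~= {p_fail:.1%}")
    # ~0.6% at 1 year, ~1.9% at 3 years, ~3.1% at 5 years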


The good news is that the wearout of an SSD is a gradual, measurable, monotonic process (because at every point you know how many spare blocks are left), while the failure of a hard disk is a random process.


I had another hard drive fail on me last week. At this point I would give part of my soul for SSD-style failure properties with sizes and prices comparable to conventional hard drives. (I mean, assuming partial souls regenerate, Xanth-style.)

I'm sure it will happen in a few years. I wonder if there's some startup opportunity here: are there any ways to take advantage of fast, reliable, cheap, flash-like storage becoming ubiquitous?

Probably you could add more fine-grained extension hooks to programs without them becoming bloated, by having plugins loaded lazily at run-time from a solid state drive. I'm convinced that the next big thing will be software that quietly integrates with other software, so you don't even have to click on an icon to use it.


True, that's a very good point.


Depends on the SSD; SLC-based Fusion-io cards are rated for 24-48 years at 5TB write-erase per day (source: http://www.fusionio.com/PDFs/Fusion%20Specsheet.pdf)
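
Multiplying out the spec-sheet numbers gives a feel for the total write volume that rating implies:

    # Total write-erase volume implied by the quoted endurance rating.
    tb_per_day = 5
    for years in (24, 48):
        total_pb = tb_per_day * 365 * years / 1000
        print(f"{years} years at {tb_per_day} TB/day ~= {total_pb:.0f} PB written")
    # roughly 44 PB to 88 PB over the rated lifetime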


So, given that demo, does a single SSD perform substantially better enough on a normal laptop to warrant the expenditure yet? (Last I heard, standard advice was to wait until the price drops.)


The 128gb Samsung SSD in my Dell M1330 is amazing. I don't regret a penny of the upgrade cost. When I got it, I ran benchmarks against my desktop's 10k RPM HDD:

HDD: http://encosia.com/photos/things/drive-benchmark-10k.png

SSD: http://encosia.com/photos/things/drive-benchmark-SSD.png


Only if the drive is based on Intel's MLC or, even better, Samsung's SLC flash.

Regular (cheap) MLC-based SSDs are good only at reading big sequential files and lose out to 7200rpm drives when it comes to many small writes.

I can't find a link to a guy who blogged about his disappointing experience compiling C/C++ sources on a regular MLC drive, but I googled Linus Torvalds' experience with Intel:

http://torvalds-family.blogspot.com/2008/10/so-i-got-one-of-...

I personally find that I just have all my software open all the time, most of what I need gets pre-cached in the 4GB of RAM I have, and I never reboot, i.e. no need to restart anything [unless, of course, freaking Adobe Flash freezes my Mac].


This anandtech review was very helpful to me:

http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=34...

Gives a lot of detail about the various types of SSDs and why the Intel is currently the best bet. Warning: reading it might cause you to go into a "gear lust trance" and spend a lot of money.


Yes. It's definitely changed my computing life. Everything is instant and I can do things I can't do on my slightly more powerful but SSD-less work laptop.

http://flickr.com/photos/ttrueman/3051697418/


I have a 64GB SSD in my laptop and I love it. Upon a clean Ubuntu install my system booted in 28 seconds. After tweaking my system some I got my boot time down to 12 seconds.

I tried opening OpenOffice after watching this video and found that OpenOffice Writer opens in less than 1 second, and opening all OpenOffice applications except the database and Impress takes less than three seconds.

The difference between SSDs and HDs is definitely noticeable.


It depends on your impatience. The X25-M has already dropped about 2X in price to $400.


This should work with USB thumbdrives too.

The ones I've tested are faster than an HDD (and faster than the SSD in an Eee PC). They're also cheap.

The bottleneck would shift to USB bandwidth (60 MByte/s) and the PC itself.


The complete system defrags in about 3 seconds.

This part impressed me most.


Although I do wonder what the point of defragging an SSD is...


There is none. Even if there were some advantage to sequential blocks, SSDs should employ wear leveling, thereby spreading your bits across random blocks anyway.
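
A toy illustration of why defragging buys nothing on an SSD: the flash translation layer already remaps logical blocks to arbitrary physical blocks (the mapping below is invented):

    import random

    # Toy flash-translation-layer mapping: logically contiguous blocks of a
    # "defragmented" file still land on scattered physical flash blocks.
    physical_blocks = list(range(1000))
    random.shuffle(physical_blocks)               # stand-in for wear leveling

    logical_to_physical = {lba: physical_blocks[lba] for lba in range(8)}
    print(logical_to_physical)
    # Contiguous LBAs 0..7 map to arbitrary physical blocks, and random reads
    # cost about the same as sequential ones, so defragging changes nothing.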



