Raspberry Pi Zero vs. Elliott 405 (spinellis.gr)
204 points by sebkomianos on Dec 7, 2015 | 98 comments



One thing that's missing from the comparison is that the Elliott 405 likely came with a full set of documentation, down to the circuit level, whereas Broadcom has kept much of the Pi's SoC details proprietary.

In 60 years, archives and museums will still have great primary-source information on the Elliott 405 and its contemporaries. Will the same be true of the Pi? It's easy to focus on the smaller, cheaper, faster, but it must be realised that a significant amount of openness has been lost; and not merely because computers have become more complex, but also because there are commercial interests strongly discouraging us from understanding the details of how these machines work.


You can find detailed block diagrams of Broadcom SoCs, as well as fully detailed peripheral documentation. Beyond that, it won't really do you any good: a gate-level diagram would give you no additional information that you could possibly use, unless you are trying to reverse-engineer the SoC to make a 1:1 copy.

http://www.farnell.com/datasheets/1521578.pdf https://www.anaren.com/airforwiced/sites/default/files/temp/...


Another thing people always miss is that - unlike the Elliott 405 - the Pi doesn't come with a keyboard.

That's where they get ya.


Nah, it's the storage:

"8 GB (typical micro SD flash card - not included)"

Which one's the better value now, huh? (I mean, given it's no PS Vita memory card, but still ...)


The Elliott 405 also didn't come with a keyboard. There was a panel of switches available for the original model...

I'm sure you could get some sort of keyboard as an accessory for later models, but it certainly didn't come included.


I hope governments will intervene and force companies to publish all technical documentation, eventually, at least of older devices.


Unfortunately these companies will have mostly destroyed ("securely disposed of" is the usual phrasing) such documentation, because it is of little use to them to keep it around for something they neither make nor support anymore, and they don't want to leak any secrets.


The government has proven solutions to that. Just force them to provide the documents by fining them until they've delivered.


Documentation for the Pi's SoC on paper would probably be the size of the Elliott.


Just for fun, if you extrapolate to 2065 you get something so small it's invisible and practically free, with a cycle time in the hard x-ray to gamma range, more than 150TB of main memory, and an output bandwidth comfortably fast enough for at least ten life-size holographic displays.
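
(For the curious, the arithmetic behind such an extrapolation is just compound growth; here's a rough C sketch. The 1957 figures below are placeholders rather than the article's exact Elliott numbers, so the outputs are illustrative only.)

    /* Back-of-the-envelope extrapolation: take the growth implied by a
       1957 -> 2015 spec and compound it for another 50 years.
       The 1957 figures are rough placeholders, not exact Elliott 405 specs. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double years_past  = 2015 - 1957;   /* 58 years of progress   */
        double years_ahead = 2065 - 2015;   /* extrapolate 50 more    */

        double clock_1957 = 1.0e3;          /* ~kHz-class (assumed)   */
        double clock_2015 = 1.0e9;          /* Pi Zero: 1 GHz         */
        double ram_1957   = 16.0e3;         /* ~16 KB (assumed)       */
        double ram_2015   = 512.0e6;        /* Pi Zero: 512 MB        */

        /* per-year growth factors, assuming steady exponential progress */
        double g_clock = pow(clock_2015 / clock_1957, 1.0 / years_past);
        double g_ram   = pow(ram_2015 / ram_1957, 1.0 / years_past);

        printf("2065 clock: %.3g Hz\n",    clock_2015 * pow(g_clock, years_ahead));
        printf("2065 RAM:   %.3g bytes\n", ram_2015   * pow(g_ram,   years_ahead));
        return 0;
    }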


> for at least ten life-size holographic displays.

I remember reading somewhere that the bandwidth of the optic nerve is about 10Mbit/s. Given that we've had 10Mbit/s Ethernet for years, couldn't it be said that we already have the hardware to build a display that is indistinguishable from reality, and it is now "just" a coding problem, in that if a 10Mbit/s stream contained exactly the right information the brain would interpret it as reality?


The combined sensory input of a human was estimated at 3Gbit/s back in 1959.

http://www.idi.ntnu.no/~gunnart/MTP/volum2_session4A_ocr.pdf

The first paper in the document.


Considering what 10Mbps compressed video can do, I think we're doing pretty well. Getting really close to the limit would require eye-tracking to only encode and transmit the details that the viewer is actually in a position to notice, and that's obviously only possible for a very limited slice of applications.


Wouldn't you need much more than 10Mbit/s before the lens, if it's getting compressed down to 10Mbit/s after the lens?

That is, unless you're talking about injecting that data directly into the optic nerve.


The only way it's below 10Mbit/s is if you have a cataract and are legally blind. Or it's dark.

The eye has significantly more bandwidth than 10Mbit/s - it's "reality" in front of the retina (say, 20-30K), a bit above 8K at the retina (rods/cones), and almost the same in the optic nerve.

We can see/study data directly on the optic nerves now, and understand pretty well how common aberrations or deformities affect the retina.


Isn't the thing about holographic displays that you have to output the data for more than one angle? Meaning, multiple different angles for different eye angles to create stereoscopy. So, 10Mbit/s * n_vertical_angles * n_horizontal_angles * k_compression_coefficient.
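
(That formula is easy to plug numbers into; here's a tiny C sketch. The angle counts and compression coefficient are made-up illustrative values, not measurements.)

    /* Scaling sketch for the multi-angle argument above; all inputs assumed. */
    #include <stdio.h>

    int main(void) {
        double per_view_bps  = 10e6;  /* the 10 Mbit/s per-view figure above  */
        int    n_vertical    = 10;    /* assumed number of vertical angles    */
        int    n_horizontal  = 100;   /* assumed number of horizontal angles  */
        double k_compression = 0.2;   /* assumed savings from view redundancy */

        double total_bps = per_view_bps * n_vertical * n_horizontal * k_compression;
        printf("Holographic display bandwidth: %.3g bit/s\n", total_bps); /* ~2 Gbit/s here */
        return 0;
    }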


This is currently under development ..

https://en.wikipedia.org/wiki/Magic_Leap


I hope a future HNer links to this ancient post then.


Hello future readers!


A funny exercise, though sadly we will not see the growth rates of these past decades over the next 50 years.


While "raw" progress is always fun, there's so much untapped potential in software that I'm not sure how much it matters. You stick one, two, three or a hundred ARM chips in a computer and, currently, it wouldn't really matter. The limits today are things like ecosystem, development process, robustness etc. at least for personal computer applications.


Yes, today it would not really matter. But in the early 90s we knew what to do with such a massively parallel architecture - see https://en.wikipedia.org/wiki/Transputer and https://en.wikipedia.org/wiki/Occam_(programming_language)


In terms of the hardware you hold in your hand, you're right. Progress is slowing down as the challenges to moving forwards become harder to solve, and as the reasons to move forwards change (e.g. power consumption over raw speed).

However, in terms of the accessible hardware I think we already have that practically limitless supercomputing power in our hands. The paradigm shift is in the realisation that "your computer" is a device that computes things, regardless of where it computes things, rather than being the beige box in the corner. With a little code and enough cash I can access millions of CPUs and terabytes of RAM from my phone. The fact that computing power isn't actually in my phone doesn't matter in the least.

The computing revolution is the network, not the CPU.


Reminds me of Sun over 20 years ago: "The network is the computer".


I agree. I remember seeing that slogan and thinking it was backwards or mixed up, but in truth it makes sense.

It's a pity that the networks we currently use cannot be guaranteed to be secure and that there is strong emphasis on undoing decades of good work in microcomputers by shoving data to the other side of the world instead of using it locally!


Consider who owns the networks which form the Internet and their motives, as well as who end-users are paying for Internet access.


Dare to say the same thing from the Tube (or simply from a tunnel)?


Sure. Availability is a different problem to computing power, but they'll both be solved by network improvements. Underground metro systems will need cell towers (or just repeaters) in order to solve the lack of signal problem, but that's relatively simple. I imagine it's just a matter of cost and politics that stops them now. Alternatively, tube trains could have their own wifi that connects users to the internet.

Further into the future it's possible (although unlikely in my opinion) that we could use a mesh network to relay signals along series of other people's devices until it gets to a place where there's a connection to a cell tower available. The power requirements would be a problem, and there's the issue of data privacy, but it's theoretically possible.


> but they'll both be solved by network improvements

We won't ever have 100% coverage (with free roaming and all that). It's impossible.

> Underground metro systems will need cell towers

You'd have to wait a few more centuries for this to happen. They still can't get rid of the signalling system from the 1890s.

> Alternatively, tube trains could have their own wifi that connects users to the internet.

They can't even consistently communicate with the train drivers at the moment.

> a mesh network to relay signals along series of other people's devices

In the middle of a desert, in the open ocean, in the taiga, on a spacecraft light-seconds away from the Earth?

No way. We still need to cram as much compute power into as little space as possible. All the cloud fancy stuff is nowhere near a substitution for it. And no network improvements would ever overcome the speed of light limitations on latency.


In the middle of a desert, in the open ocean, in the taiga, on a spacecraft light-seconds away from the Earth?

I'm sure there'll be edge cases where it won't work and we'll need other options. That's fine. It doesn't detract from the point that having a connected device that can access a cloud computing platform is exceptionally useful in 99.99% of cases. I'm not suggesting that means we can stop all mobile device innovation.


> I'm sure there'll be edge cases where it won't work and we'll need other options.

This is exactly the unfortunate attitude of most mobile app developers, and exactly the reason why I can't use most apps, even high-profile ones like BBC News. They must understand that it's not anywhere close to 99.99% of cases; a good connection is available more like 10% of active use, and therefore a great deal of caching and prefetching is mandatory. If most of the functionality of my mobile device is unavailable most of the time, I'd rather stick to the tiny subset that is available consistently and ignore the rest.

And I suspect that most big cities have a similarly patchy coverage pattern to London's.


While I honestly agree, people have been saying that for decades...


A decade ago, you could make the Pi Zero at the same size for a higher price. 20 years ago it would have been hard.


Moore's law held faithfully for decades. Not any more.


>sadly we will not see the growth rates we saw

Maybe, although if you look at Kurzweil's graph (he may be cranky, but I think his data is fairly OK), things have been speeding up in terms of computing per dollar. https://www.youtube.com/watch?v=JIw8CQB8prg&feature=youtu.be...

I don't see much reason why that can't continue. Some things like clock frequency hit limits but a Raspberry Pi equivalent was roughly $25 a couple of years back, $5 now and could be $0.0005 in the future.


Since the likes of Bill Gates and Lord Kelvin couldn't predict the future of science and technology, I prefer never to indulge in speculation about what may not be possible, unless it's theoretically impossible or improbable (P ≠ NP).


In tech space, even 9 years can produce 1000x results. I love this microSD storage example, 2005 vs 2014: http://i.imgur.com/1jyVev4.jpg


I had a similar realisation a few days ago - I was putting a new microSD card in my phone, and realised that it represented 100 times the storage of my first PC (200GB vs 2GB). I still get a feeling of "this shouldn't be able to store this much" every time I use a microSD card.


>this shouldn't be able to store this much

Same here. I was thinking about how many CD-ROMs it would take to store a certain amount of information, as a measure of just how much data something amounted to. Then I thought that with microSD cards, the same amount of data would sound much less impressive.


I love looking at MicroSD cards and imagining how, in your example there are 1,600,000,000,000 individual storage units inside it.

That's 200 x a billion x eight.


Nine years to change one letter? Jeez, and we pretend that technology moves fast.


3 letters, the X makes it sound cool.


What's the average/expected lifespan of a microSD card vs. a flash drive vs. a CD-R/DVD-R? I've seen various graphs but does anyone have even anecdotal personal experience?


Well, anecdotally, I've never had a storage device die on me, with the only exceptions being two microSDs that got fried in my RPi. Apparently you shouldn't constantly read/write to them, and should instead just put the boot files on them and move everything else to USB.


You can read them as much as you like. It's the constant writing that can be an issue.


The cost efficiency of the Elliott, adjusted for inflation, was 1445 dollars per CPU cycle per second (when running at the listed peak speed).

For the Pi Zero, it is 0.000000005 dollars per CPU cycle per second.
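
(The arithmetic is simply price divided by peak speed. A quick C sketch, using the ~$2.85M inflation-adjusted price quoted elsewhere in the thread and an assumed ~2,000 instructions per second for the Elliott, lands close to the figures above.)

    /* Cost per cycle-per-second; the Elliott inputs are assumptions,
       so the result only roughly matches the ~$1445 figure above. */
    #include <stdio.h>

    int main(void) {
        double elliott_price = 2852680.0;  /* USD, inflation-adjusted (quoted below in the thread) */
        double elliott_speed = 2000.0;     /* instructions per second (assumed)                    */
        double pi_price      = 5.0;        /* USD                                                  */
        double pi_speed      = 1.0e9;      /* 1 GHz                                                */

        printf("Elliott: $%.0f per cycle/s\n", elliott_price / elliott_speed); /* ~1400    */
        printf("Pi Zero: $%.9f per cycle/s\n", pi_price / pi_speed);           /* 5e-9     */
        return 0;
    }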


That doesn't take into account instruction-level parallelism, and of course the Elliott didn't have a global clock, so the idea of CPU cycles doesn't really make sense.


The part that's missing in the comparison:

Elliott 405: Used to run business calculations

Pi Zero: Used to run a sprinkler system

What has been really interesting to me over the last several years that these stick computers have been around is how people are using gigahertz-range processors with a huge operating system and truckloads of memory for stuff that's easily done with a small 8-bit processor with KBs of RAM rather than GBs, and a simple time-sliced tasking mini-OS written in an afternoon.

Don't get me wrong, it's great that one can buy a computer like the Pi Zero for $5. I'm afraid something is being lost in the process. Maybe it doesn't matter. I mean, if one can get something the size of your pinky that costs less than a coffee at Starbucks to run Linux...


How many people do you really think can just casually write an OS? I can set up automation for almost anything with almost no lines of code. We've lost the specialization because so many people now have access to these incredible things.


People not being able to write an OS is part of "I'm afraid something is being lost in the process"

You don't need a memory allocator, file system or scheduler for an 'OS' that runs a single task.


Nothing is being lost in the process as long as we continue to churn out people who are still interested in the low-level bits of the process.

I used to point out to people that the control algorithm they were so laboriously writing on their PIC/AVR (now Arduino...) could be replaced with a 20 cent transistor and a handful of diodes and resistors. For exactly the same reason: the overkill bugged me and I was bothered that "kids today" weren't learning things "the right way." Then I began to shut up because I realized I was looking at progress. We don't need experts in analog electronics to build basic control systems: a web hacker who learns C can handle the simple stuff (emphasis on simple).

Likewise, I now find myself telling others to forget about building 555 timer circuits or elaborate digital contraptions to sequence their LEDs and relays and just buy a $5 Arduino Nano and get it done in a few minutes.

It's progress!


Except that the higher up you get in hardware the more everything turns into analog again. All of a sudden ones and zeros are not ones and zeros any more! :)


Not hard at all. Set up a 1 ms timer interrupt (the ONLY interrupt in the system) and fire off non-preemptive tasks in a loop. Bingo! You have an OS that is good enough to maintain a clock, attend to a serial port, even USB (via an FTDI chip or similar), blink some LEDs and, yes, control the sprinkler system.

If you have a little more time, set up a FIFO, a linked list, or some other structure and have the keypad scanning routine issue messages that are then picked up off the FIFO by a separate message processor routine that parses them and acts accordingly. In other words, now tasks can send each other messages through a very simple mechanism.

And now, with that in place, you can modify your serial or USB port handler to also allow you to place the same messages into the FIFO. What you've just done is enable full remote control of your sprinkler system with very little code.

Now sensors (rain?) can also insert messages into the message loop and make things happen.

Change the LED routine to pick up "LED" messages off the loop. Now you can control the LEDs remotely through your serial port.

Then you get a little more ambitious, buy one of those Digi modules that turns an Ethernet connection into a serial port and write simple code that allows you to have said module insert messages into the message loop. Now you can control your sprinkler system from any network location at home.

Not that hard at all. No need for Linux. Just start with a 1 ms timer interrupt and the rest is easy as pie.

By throwing a server-grade OS at controlling a sprinkler system people are not learning the most basic things about computing.
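
(A minimal C sketch of the scheme described above - one tick counter, a round-robin loop of non-preemptive tasks, and a tiny message FIFO. The hardware timer setup and the task bodies are left as platform-specific stubs, so treat it as an outline rather than finished firmware.)

    #include <stdint.h>

    #define FIFO_SIZE 16

    typedef struct { uint8_t type; uint8_t value; } msg_t;

    static volatile uint32_t ticks;           /* incremented once per millisecond */
    static msg_t fifo[FIFO_SIZE];
    static uint8_t fifo_head, fifo_tail;

    void timer_isr(void) { ticks++; }         /* the ONLY interrupt in the system */

    /* Any task or port handler can queue a message. */
    int fifo_put(msg_t m) {
        uint8_t next = (uint8_t)((fifo_head + 1) % FIFO_SIZE);
        if (next == fifo_tail) return 0;      /* full: drop it */
        fifo[fifo_head] = m;
        fifo_head = next;
        return 1;
    }

    static int fifo_get(msg_t *m) {
        if (fifo_head == fifo_tail) return 0; /* empty */
        *m = fifo[fifo_tail];
        fifo_tail = (uint8_t)((fifo_tail + 1) % FIFO_SIZE);
        return 1;
    }

    /* Non-preemptive tasks: each does a little work and returns quickly. */
    static void task_clock(void)    { /* keep wall-clock time from 'ticks'                  */ }
    static void task_keypad(void)   { /* scan keys; fifo_put() a command on a press         */ }
    static void task_serial(void)   { /* parse serial/USB input; fifo_put() the same kinds  */ }
    static void task_dispatch(void) {
        msg_t m;
        while (fifo_get(&m)) { (void)m; /* act on m: open a valve, toggle an LED, ... */ }
    }

    int main(void) {
        uint32_t last = 0;
        /* platform_setup_1ms_timer(timer_isr);  -- hardware-specific, omitted here */
        for (;;) {
            if (ticks != last) {              /* run the task loop once per 1 ms tick */
                last = ticks;
                task_clock();
                task_keypad();
                task_serial();
                task_dispatch();
            }
        }
    }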


You definitely don't have to write your own OS. You can use existing tools, for example FreeRTOS. It's surprisingly simple.

With a microcontroller, you can have a pretty much complete understanding of the whole machine.

All you need is a basic knowledge of C, and most of the people who would solder stuff to a Raspberry Pi already have that or are willing to learn.

Say you're making a thermostat. By convention, you leave address 0x0 empty, since dereferencing the NULL pointer is bad form :) At address 0x4 you have an int containing the target temperature. At address 0x8 you have an int containing the current measured temperature. Your whole app uses 8 bytes of memory, plus a few more for the stack.

You write a loop. It reads a few GPIOs, updates those two values, does a bit of logic to decide whether to turn on the heat, then writes one more GPIO to turn the heater on or off. GPIOs are "general purpose IO", metal pins that can be individually set to, say, +5V or 0V.
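
(A minimal C sketch of that loop, for a hypothetical part; the pin numbers and GPIO helpers are placeholders, not any real board's API.)

    /* Placeholder GPIO helpers: stand-ins for whatever the real part provides. */
    static int  gpio_read(int pin)          { (void)pin; return 0; }
    static void gpio_write(int pin, int on) { (void)pin; (void)on; }

    #define PIN_TEMP_UP    1   /* "raise target" button         */
    #define PIN_TEMP_DOWN  2   /* "lower target" button         */
    #define PIN_SENSOR     3   /* temperature input (degrees C) */
    #define PIN_HEATER     4   /* heater relay output           */

    int main(void) {
        int target  = 20;      /* the two ints of application state */
        int current = 0;

        for (;;) {
            /* read a few GPIOs and update the two values */
            if (gpio_read(PIN_TEMP_UP))   target++;
            if (gpio_read(PIN_TEMP_DOWN)) target--;
            current = gpio_read(PIN_SENSOR);

            /* a bit of logic, then write one more GPIO */
            gpio_write(PIN_HEATER, current < target);
        }
    }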

The beauty of this is that if you write it cleanly and debug it and test it once, it will work perfectly forever.

It will never throw up a popup that says "a new version of Ubuntu is available". It will never ask you to download 80MB of Java. It will never auto update. The server will never go down, because there is no server.

It feels totally different from typical software development. You're writing a bit of C, but you're not making an "app" with all the complexity and flakiness that entails. You're creating an embedded device.

And a wise man once said... "Hardware eventually fails. Software eventually works."

---

PS, here's a fire alarm that runs Linux. https://www.youtube.com/watch?v=BpsMkLaEiOY

Complexity is cheap now. You can get a full Linux PC with systemd, X11, Gnome, DBus, Java installed, the full works for $5 and in a tiny form factor. You could make a thermostat that way---but just because you can doesn't mean you should!


How many embedded applications really do need any OS at all?


Don't let anyone fool you, at six tons the 405 is almost 25 times more computer per dollar than the Raspberry Pi. Now that's value!


I have a boulder to sell you that you're gonna love!


The other day, I was looking at a smart lock on Amazon, and they listed the price per ounce. I think you may be on to something.


In Australian supermarkets the cost per unit of net weight (typically per 100 grams) has to be printed on the shelf under / next to the price.

I've proposed the idea that the price of everything in the supermarket be averaged per unit weight; checking out then becomes just a matter of weighing the trolley / basket.

I see complications / gaming the system, fun idea though.

Maybe just knowing how much you spend per average kg of food, or per unit of energy, could be a useful metric for optimizing food expenses.


I've shown my girlfriend how important and informative it is to look at $ per ounce when buying food, and the importance of sticking to ingredients that carry across multiple types of meals, to maximize the efficiency of buying larger packages of a given good.


They'd sell out of helium balloons instantaneously, for one thing.


You'd have to weigh the trolley in a vacuum, of course.


I think the helium balloon prices are adjusted for inflation though.....


I submitted the same post https://news.ycombinator.com/item?id=10689371 11 hours ago. But 4 hours ago, dang moved my comment to this post which was created 3 hours ago. It's very weird.


A kind of time travel, welcome!


BTW the Elliott had quite an interesting memory design: https://en.wikipedia.org/wiki/Delay_line_memory


Ferranti Pegasus used the same (and it's still up and running!): http://www.sciencemuseum.org.uk/images/I039/10307498.aspx


Also used on the Elliott 803, but the 4100 series used .050 core memory, hand-woven.


Technological progress is an excellent example of how deflation helps the average person. Your money buys more and more compute power each year.

Central banks are targeting inflation, which slowly robs the saver of purchasing power.

Areas with high government regulation and control suffer from price inflation, where you get less and less each year for the same money: healthcare, education, war.


An article on HN recently argued that we DO in fact have deflation in healthcare. Treatments from 1970 are probably a lot cheaper today, but in any case I would rather get an expensive MRI from a lab with a recent-model MRI scanner than a cheap scan from an old CT scanner.

If you could choose between 1970s treatments at 1970s-equivalent prices or 2015 treatments at 2015 prices, which would you choose? I would definitely choose modern treatment every time.

As for inflation, as long as demand outstrips supply (even just because of population growth), we should expect some inflation. The problem is that when deflation occurs it generally means underlying economic issues.


Deflation for particular products is different than deflation in general, which eg includes labour.


And though prices are continually falling for tech goods, people do not refrain from buying (as the myth says they will).

You know the iPhone you buy today will cost much less next year, but you still buy it because it serves a purpose for you now.

The only type of spending that perpetually lower prices would probably decrease is speculative buying. If you don't need a particular good now, you may wait knowing the price will fall. But this is also a good thing.


Disagree, I've refrained from buying several products in recent years because I know if I can keep the old one going for one or two more product generations then the new one will be much, much better. It also influences my choice of product, I'm more likely to get something high end and keep it for longer.


OK, but in a general sense, the number of computing devices being purchased continues to grow.

Apple keeps setting iPhone sales records. Your examples, I would argue, are a good thing. People buy only when they absolutely need these goods, not because they are afraid they need to buy in excess now because their savings are going to lose value relative to the cost of these goods. They get full use out of the goods they purchase. Less wasteful spending, and less wasteful use of the resources needed to produce them. Moore's law illustrates that the cost of computing power has continually fallen. The industry has certainly not suffered because of this, would you agree?

So without putting too fine a point on it: the tech industry is as healthy as any in the entire economy. And yet its prices fall faster than just about any other industry's. Consumers unambiguously benefit from these falling prices. Keep this in mind when economists or politicians sing the praises of "mild" inflation. This would certainly benefit governments who want to run a deficit and spend money they have to borrow. But I'd argue it does not benefit the consumer.

Thought experiment: will you plan on spending more on discretionary items (entertainment, vacation, home remodeling, investments, whatever...) if you anticipate / observe rising prices on staples such as food, healthcare, rent/mortgage, gas, utilities, etc?

Or, will you spend more on discretionary items if you anticipate / observe falling prices on those same items? Your available budget will be greater and of course your spending will increase in these areas if you have more money to spend. So the utility, or benefit you receive as a result is clearly higher.

Debtors benefit from rising prices because they can pay off debt with cheaper currency. So, as previously mentioned government, the biggest debtor, promotes inflationary policy.


That is one odd definition of deflation.

Anyways, the reason deflation is worrisome is that it is downright lethal to anyone with debt.

Just look at the Great Depression and its deflationary death spiral. Companies were dropping prices to try to sell anything at all, but the debt was ratcheting up faster, and so they keeled over in droves.

Compared to that, mild(!) inflation is preferable, at least as long as wages keep up. This is because debt is rarely inflation adjusted, and thus gets easier to manage over time.

Note though that the only examples of "hyperinflation" are more likely to have been foreign-exchange related. This is because in both cases you were dealing with nations with little to no exports, and massive foreign expenses.


Education has become more and more DEREGULATED over the last 15 years, so you're going to need to go back to the drawing board with that one.


If you're going to present numbers on the change in education's cost, you'll need to pair them with changes in its quality. No point using deregulation to lower prices if you needed higher prices to deliver it at a decent quality.


Awesome. The only thing that would make this chart better is seeing the factor of improvement on each stat. The differing units make it harder to eyeball than it could be. Also would be cool if the price was inflation-adjusted.


Interesting, but crazier is that the Zero kicks the snot out of the laptop I took to college in 1997.


Excuse me for the digression, but the best thing about this post was re-encountering Mr. Spinellis's blog. I love his books (there's a new one!), but since I ditched RSS readers I had never visited it again. I advise you to spend some time there.


At least the Elliott 405 shipped with IO devices.


and that case.


The Raspberry Pi Zero even compares favorably to the Cray 1 supercomputer from 1975.


Could someone have made a microchip in 1957 if they just "knew enough"? I mean, if you sent someone back in time to 1957 with today's knowledge, could they build a modern microchip back then?


The answer is no. Today's integrated circuits are all manufactured using extremely precise equipment not available back then. Furthermore, the designs are impossible to execute by hand and require complex software, not to mention extensive verification.

For a CPU like that of the Zero (ARM11), we're talking about upwards of 100 million transistors on a single chip. Only by simulating the design can you actually catch that single incorrectly biased MOSFET. Think software debugging, but on a whole different level. Then you have all the other components on the Zero responsible for everything from power management and networking to graphics processing, clocking, sound processing, and so on.


One thing missing from the top trump card is power consumption.


What difference does it make if all it does is blink an LED?


The comparison starts to look different if you include an "Availability Date" row...


Haha. Love this. I just set up my Pi Zero to run Blockstore this weekend. I'm absolutely amazed by this little machine. Was actually blown away by the speed even though it has only half a gig of RAM. Impressed.


Blown away by the speed even though it's 512MB of RAM... Are you, by chance, coming from a Windows OS? If so, welcome!


I'd like to see a more recent comparison, honestly. Where can I find that? (10 years, for instance)


Weight: 9g vs 3-6 tons

Price: $5 vs £85,000 (1957)

Amazing technology advance!


£85,000 (1957) ≈ £1,895,500 (2015) ≈ $2,852,680 (2015)


Also within that timespan, 1.2MB of magnetic film vs an 8GB microSD card (up to 128GB on a Pi Zero?). That's Moore's Law in action!


edit: Specifically to storage, the Moore's Law equivalent is Kryder's Law, named after Mark Kryder. https://en.wikipedia.org/wiki/Mark_Kryder


Interesting tidbit: Pi Zero is really $5, despite being British, too.


Something irks me about retrospectives, and I've seen it all. Retrospection steals from daring by making us feel good about progress already made. Fight the retrospective! Do outrageous stuff with this bit of tech today. I say this with some insight into how the uneasy bright kid does weird stuff with things people don't look at.

I think - leave history to historians.


I think I prefer the Elliott 405 to the Raspberry Pi Zero! (Joke)



