I'm amazed and still can't get my head around how today we can receive images and valuable data from a device that is currently light-days away from us, was built with '60s technology and know-how, and is operating well beyond its expected lifetime.
I raise my glass to the engineers who worked on this. Makes my daily programming tasks feel stupid in comparison.
I know a person who worked on Voyager. He is a very smart person who has contributed to a wide range of technologies over a long period of time. I have talked to him, and the reality is that by the time that project was being done, the people who ran it had a lot of experience, and they applied the hell out of it to the project.
I've spent a lot of time since then looking into how to build reliable systems that operate for decades and... there are no easy answers. You have to have an amazing amount of knowledge about the engineering context (what's it like to run a computer in space), the scientific mission (i.e., given this payload mass, what instruments can we fit?), judicious software engineering skills (just updating the firmware on a machine that's millions of miles away is a challenging problem, worse yet if updating the firmware bricks your control plane), project management skills (to ensure you make your launch date), and the dedication to keep things going long after most people have gotten bored of them.
After a long time playing with complicated systems I went back and played with 8-bit microcontrollers, and they were actually really fun, because they force you to build systems that are reliable without a terminal and under resource constraints (you'd have a hard time fitting a program as large as this comment into an Arduino...)
The biggest factor working to their advantage was that the tech back then was much simpler and more robust. To get today's tech to work for a decade without interruption would be a very tall order. Layer upon layer of abstraction has made it impossible to know for sure that there are no edge cases that will only trigger once every 3 years or so.
The physical hardware for the computers of '70s probes had more parts and complexity. Voyager used magnetic tape recorders, for example. Newer tech has allowed for simpler computer hardware, but does shift problems into the realm of software and file system management.
Both Spirit (Mars) and New Horizons (Pluto) had down days as issues with file system management puzzled the IT staff.
The New Horizons case was a crazy mad scramble, as the probe was scheduled to pass by Pluto in a few days whether the probe was working or not. There was no re-do. Dozens of choice careers hung in the balance.
Probe chips are still not very powerful by today's standards because they are designed to work in the harsh conditions of space. Thus, they are more comparable to a 1980s PC, and may mostly stay that way, since smaller components don't handle radiation well.
I read somewhere it's estimated that even surface-reaching cosmic radiation fouls up the typical desktop PC roughly once a year. Most just grumble at Microsoft and reboot.
I sort-of agree, although actually there is a lot of older stuff still made on fabs with huge (transistor) feature sizes and limited (software) feature sets that could be used to build extremely reliable long-term systems.
Voyager 1 is 0.00234 light years from Earth, Voyager 2 is 0.00194 ly. Or in more useful units: 20.5 and 17 light hours.
Also we do not receive images any more (the cameras were made for planetary encounters and have been turned off long ago).
But yes, getting data (at 160 bits per second) from man made objects that are the better part of a light day away is impressive. Even more so because they have been working for 42 years now, 30 years past the initial planetary mission.
Both Voyager 1 and 2 use RTGs (Radioisotope Thermoelectric Generators)[1] for power. Every year the RTG puts out less and less power, so instruments are gradually powered down to preserve the functionality of the critical ones for as long as possible.
According to JPL[2] at least one scientific instrument will continue functioning until about 2025, and the crafts will remain in range of the Deep Space Network until 2036.
Let's just hope the Deep Space Network will continue being funded. Further gutting its budget and shutting it down would make it impossible to stay in contact with one of the greatest achievements of mankind.
If you ever have the chance to check out JPL during one of their open houses, I highly recommend it. They have an exhibit with replica models of various satellites throughout history, and in the entry way there are screens showing data about what is currently being pulled in from the DSN. When I was there, it was pulling down data from New Horizons (at some absurdly low bit rate)!
Consider that light emitted from the Sun falls off with the square of the distance, as it is radiated in all directions. To Voyager, the Sun looks like an ordinary star, although one much brighter than the others. Not much energy can be harvested there.
Until recently, photovoltaics were useless as far away as Jupiter. Given the tech advancements, we can now use them there (and have). Farther than that and you are bumping into practical limits.
>(...) Intensity is inversely proportional to the square of the distance from the source of that physical quantity.
Shit gets dimmer at the square of the distance. So if at a distance of 1 a thing has a brightness of one, at a distance of 2 it has a brightness of 1/(2^2), or 1/4. At a distance of 8, you are looking at a brightness of 1/(8^2) or 1/64th.
Voyager 2 is ~122 AU distant. So the Sun's apparent brightness would be 1/(122^2), or 1/14884, or 0.0067 % as bright as the Sun as perceived at the Earth-Sun distance (ignoring the atmosphere, of course).
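A quick sketch of that arithmetic (nothing Voyager-specific, just the inverse-square law from the quote above):

```python
def relative_brightness(distance_au: float) -> float:
    """Apparent brightness relative to 1 AU (the Earth-Sun distance),
    per the inverse-square law."""
    return 1.0 / distance_au ** 2

print(relative_brightness(2))    # 0.25, i.e. 1/4 as bright
print(relative_brightness(122))  # ~6.7e-05, about 0.0067 % of Earth's sunlight
```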
Voyager snapped a picture of our Earth (sorry, I thought it was the Sun) in 1990. It's the famous Pale Blue Dot picture. It's the small blue-white speck (or almost pixel) halfway down the brown band on the right.
https://en.es-static.us/upl/2012/07/Pale_Blue_Dot.png
You should look up or play Elite Dangerous, it's (I believe) a fairly accurate / to scale simulation of solar system navigation. You can go at several hundred times the speed of light and you're still waiting for ten minutes to reach your destination.
I used to have it in my head that the Voyager probes were about as powerful as an Apple II computer. But it's hard to make a direct comparison as the probes have three computers made of two processors each, running at 1/20th or 1/6th the speed of an Apple. So it's not a one-on-one comparison in any way, but for discussion purposes:
An original Apple II used a MOS 6502 8-bit word CPU clocked at 1.023 MHz and typically had 4 KB to 48 KB of RAM. It ran about 500,000 instructions per second [1].
Computer Command System (CCS) - two 18-bit word interrupt-type processors with 4096 words each of plated wire memory. It ran about 25,000 instructions per second. [2]
Flight Data System (FDS) - two 16-bit word machines with modular memories and 8198 words each of memory. It ran about 25,000 instructions per second. [2]
Attitude and Articulation Control System (AACS) - two 18-bit word machines with 4096 words each of memory. The AACS ran about 80,000 instructions per second. [2]
So maybe the total amount of computing power is equal? The total memory on a Voyager was about 32K [3], and since the word sizes were bigger, theoretically they could access larger chunks of memory in a single instruction? What's for sure is that a single CCS is at least an order of magnitude less powerful than an Apple II.
>So maybe the total amount of computing power is equal?
Probably not. Those 18-bit words likely contain 2 parity bits to prevent space radiation from making things go bad. Plus, it's effectively only one machine, not two: the two always run in parallel (for the CCS; the FDS only has one running at a time, and the AACS is turned off if not needed) and redundantly in case one fails, so in total you're running about 180,000 instructions per second, not 360,000. The memory totals 34 kilobytes (18-bit words in two of the memories, 16-bit in the third), with the same simple redundancy as the computers themselves (i.e., each computer has its own memory).
The CCS is also responsible for managing the memory of the other two systems (MEMLOAD), so their realistic instruction speed may be limited by slower memory access in memory heavy applications.
Even with all systems combined, I don't think the Voyager is faster than an Apple II, though it's within an order of magnitude.
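Taking the parent's ~180,000 ips total and the Apple II's ~500,000 ips at face value, the gap is indeed small (a back-of-the-envelope check, not a rigorous benchmark):

```python
apple_ii_ips = 500_000  # ~0.5 MIPS for the 1 MHz 6502, per [1] above
voyager_ips = 180_000   # combined estimate from the parent comment

print(apple_ii_ips / voyager_ips)  # ~2.8x, i.e. well within an order of magnitude
```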
Voyager 1, which is currently the farthest man-made object from Earth, is approx 147 AU away, whereas 1 light year is equivalent to 63241.1 AU... so it has a little ways to go still :)
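Those unit conversions are easy to sanity-check (using the 63241.1 AU-per-light-year figure above and a Julian year):

```python
AU_PER_LIGHT_YEAR = 63241.1
HOURS_PER_YEAR = 365.25 * 24  # Julian year

def au_to_light_years(au: float) -> float:
    return au / AU_PER_LIGHT_YEAR

def au_to_light_hours(au: float) -> float:
    return au_to_light_years(au) * HOURS_PER_YEAR

print(au_to_light_years(147))  # ~0.0023 ly for Voyager 1
print(au_to_light_hours(147))  # ~20.4 light hours
```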
> EDIT: I don't understand the downvotes. Isn't this just fact?
Nobody's questioning that fact (barring well-deserved pedantry about radio signals), but GP's claim was light days, so the downvotes are probably about the non sequitur.
In the 1970s, they were dealing with primitive electronic systems, often without a screen and most definitely without a full OS.
Goosebumps. To think that we regard the tech phenomenon as recent (2000 onwards), yet there were people as far back as 30 years before that working on things that still have a lasting impact on us.
I recently read bwk's Unix history and memoir; it's a great read. Seeing that many of these pioneers are now old or have passed away is a great sadness.
You're making it sound like the Voyager software and hardware were developed by poking soft clay tablets with a pointy reed. The systems had OSes, screens were surely available, and ICs and high-level languages were in use. The '[computer-ish] tech phenomenon' itself is a fair bit older - 'Silicon Valley' is named after literal, not figurative, silicon-based electronic components.
Look again at that dot. That's here. That's home. That's us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every "superstar," every "supreme leader," every saint and sinner in the history of our species lived there--on a mote of dust suspended in a sunbeam.
The Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner, how frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds.
Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves.
The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment the Earth is where we make our stand.
It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we've ever known.
Yeah but did they use Agile methods and have good team velocity as they burned down the backlog? If not, how can we really be sure the project was a success?
I am always baffled by the conundrum that launching a spacecraft on a long journey can be made futile by our own ability to invent faster engines.
Say it takes 300 years to get to Alpha Centauri. If you launch it, you have 150 years to invent a spacecraft that goes twice the speed of the first one. Very likely. If you launch that one, it will overtake the other one. Now you have 75 years to find an even faster engine. Launch it and you've got 37 years, etc.
Quite likely you can keep doing this until you can reach it in, e.g., a year. So why not hold all launches until then?
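A toy simulation of that halving argument (assumed numbers: a 300-year trip, and engine speed doubling by the time half the current trip would have elapsed):

```python
def launch_schedule(first_trip_years: float = 300.0, stop_at: float = 1.0):
    """Each round we wait half the current trip time, by which point we
    assume engines have doubled in speed, halving the next trip."""
    trip, waited = first_trip_years, 0.0
    schedule = []
    while trip > stop_at:
        schedule.append((waited, trip))
        waited += trip / 2
        trip /= 2
    return schedule

for launch_year, trip_years in launch_schedule():
    print(f"launch at year {launch_year:7.2f}: {trip_years:7.2f}-year trip")
# With these exact assumptions every probe arrives at year 300 (e.g. a
# launch at year 150 with a 150-year trip ties the original probe); any
# earlier or bigger improvement means the later probe overtakes.
```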
Does this have a name? Or original source? Did I get this from Futurama?
The irony is that without the practical experience of those intermediate steps, you might never reach the point where the diminishing returns become negligible.
You're basically describing the reverse of Zeno's paradox - the "why bother" paradox.
This is a fairly common trope in SciFi settings which have Generation Ships.
Some time after the ship gets launched (centuries, even), FTL gets developed. Then they either arrive and find a civilization at their destination, or they are considered to be off-limits and not to be messed with, depending on the setting.
That assumes such improvements are even possible.
At least for non-magical propulsion, we don't really need "faster" engines. Take whatever engine you want; let's say it can accelerate at a constant 1g. In less than 4 years (from an observer's standpoint), you are at 97% of light speed. From your point of view, that would take less than two years.
If you can have an engine that can accelerate at a constant 1g for years (fueling it is left as an exercise for the reader), then you are all set. You can go anywhere in the galaxy inside a human lifespan – of course, assuming you don't care for the people left behind.
If you want to accelerate at more than 1g, there may be physiological issues.
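The standard relativistic-rocket formulas back those figures up; a sketch (constant 1 g proper acceleration, with fueling hand-waved away exactly as above):

```python
import math

G = 9.80665           # m/s^2, standard gravity
C = 299_792_458.0     # m/s, speed of light
SEC_PER_YEAR = 365.25 * 86400

def relativistic_1g(proper_time_years: float):
    """Speed (as a fraction of c) and elapsed observer time after
    accelerating at a constant 1 g for the given shipboard time,
    using the standard relativistic rocket formulas."""
    x = G * proper_time_years * SEC_PER_YEAR / C  # rapidity
    beta = math.tanh(x)                            # v/c
    observer_years = math.sinh(x) * C / (G * SEC_PER_YEAR)
    return beta, observer_years

beta, t = relativistic_1g(2.0)
# ~0.97 c after 2 shipboard years, just under 4 years for an observer
```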
As in most cases, problems left for the reader are usually the hard ones!
The rocket equation tells us that if one somehow managed to build a spacecraft containing an infinite power source that weighed a single kilogram, and incorporated the most efficient engine designs known to us (electrostatic ion thrusters), it would still need to be fueled with a reaction mass of xenon equivalent to approximately two billion times the estimated mass of the observable universe to achieve even 10% of the speed of light.
The crucial point is that to get to (low) relativistic speeds, you also need a relativistic exhaust velocity or the reaction mass just becomes untenable. Current ion thrusters are not up to the job.
For example, if you had a vehicle weighing 10t empty (but including energy), and a thruster capable of accelerating ions to 1% the speed of light, you'll only need 17t of reaction mass to achieve a speed delta of 1% the speed of light.
This ratio holds up as you scale exhaust velocity and delta V, so the exact same reaction mass would be needed for a 10% light speed delta V using a thruster shooting ions at 10% c. As a rule of thumb, you always need a reaction mass about 170% of the vehicle's dry mass when your thruster exactly matches your target velocity.
Now, to illustrate how critical exhaust velocity is, imagine you make thrusters with exhaust velocities 10x that of the target delta V. In those cases, you'll only need a reaction mass of roughly 10%. However, if your exhaust velocity is 0.1x the target delta V, you'll need a 2200000% reaction mass!
The rocket equation is dv = isp * g0 * ln(m0/mf) [1]. For dv we assume 10% the speed of light, the isp of an ion thruster is say 19300 s [2], for g0 I will use 9.80665 m/s^2, and mf is 1 kg. For m0 this gives us m0 = mf * exp(0.1 c / (isp*g0)) = 6.2e68 kg. The (baryonic) mass of the observable universe is something like 1.5e53 kg [3]. Or in other words, about a factor of 4e15 (much more than the 2e9 the grandparent poster claimed) less than the necessary fuel.
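Both this calculation and the ~170 % rule of thumb a few comments up fall straight out of Tsiolkovsky's equation; a sketch with the same numbers:

```python
import math

C = 299_792_458.0  # speed of light, m/s
G0 = 9.80665       # standard gravity, m/s^2

def mass_ratio(delta_v: float, exhaust_velocity: float) -> float:
    """Tsiolkovsky: m0/mf = exp(dv / ve)."""
    return math.exp(delta_v / exhaust_velocity)

# Rule of thumb: exhaust velocity equal to the target delta-v
print(mass_ratio(1.0, 1.0) - 1)     # e - 1, i.e. ~172 % reaction mass

# The 1 kg craft with an Isp = 19300 s ion thruster, dv = 0.1 c
ve = 19300 * G0                      # ~189 km/s exhaust velocity
m0 = 1.0 * mass_ratio(0.1 * C, ve)   # ~6e68 kg of propellant needed
print(m0)
```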
One fairly decent argument against waiting until we have the technology is that it cuts against how we develop the necessary technology in the first place. We can't improve our propulsion/spacefaring tech without designing, building and launching spacecraft. There's absolutely no reason to assume that we will have invented a faster engine if we've just been sitting around for 100 years not building anything. Hell, we're having trouble rebuilding a moon program fifty years after Apollo 11, in spite of all the advances that have happened in the meantime.
"It has been argued that an interstellar mission that cannot be completed within 50 years should not be started at all. Instead, assuming that a civilization is still on an increasing curve of propulsion system velocity and not yet having reached the limit, the resources should be invested in designing a better propulsion system. This is because a slow spacecraft would probably be passed by another mission sent later with more advanced propulsion (the incessant obsolescence postulate)"
You can't get a faster engine without having a slower one first.
How would they even know what to improve upon without building it and experimenting with it?
Also simulation is not a replacement for real world testing. They should launch stuff into space to figure out if it works and their simulation is making the correct assumptions.
Also I'd imagine these launches are sending back valuable data that they can use to make adjustment and improvement for future engines.
Based on what we seem to be learning about how the body responds to the hostile environment of space, it seems reasonable to think we should wait for the faster engines. Send drones if you must.
Say you built a computer in late 2009 with AMD Radeon HD 5870, Intel Core i7 975, Intel's X25-M SSD and 16 GB RAM.
That computer would still feel pretty snappy today for desktop tasks, running dual 2560x1440 screens. Although new AAA games would probably have issues, either low frame rates or not running at all. Most indie games would work just fine.
I think the change will be even less in next 10 years. 2019 hardware will perform just fine in 2029.
Unless something dramatically different pops up, we're in the era of diminishing returns.
That's more or less the result of having already waited. There was a time, not very long ago in the grand scheme of things, when the computer you wanted would cost $5000 and be essentially obsoleted by the $5000 computer coming in 60-90 days. (Perspective: PC review magazines of the day would usually include the time for a Gaussian blur on a 1MP image as a meaningful test.) When you were talking about cutting minutes off of a large spreadsheet recalc, the immediate gain from upgrading today had to be measured carefully against the even greater gains possible next quarter.
If you need to run a computation that is going to take 100 years with today’s computers, you probably should wait a bit to start it with faster computers.
But then you've spent more money. You could have just bought more hardware in the future.
For a fixed budget, assuming that Moore's Law still holds (dubious, these days), you should wait until the computation will take 26 months on hardware purchased within that budget [0].
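That 26-month figure can be recovered numerically (assuming compute doubles every 18 months; the optimum turns out to be where the remaining runtime equals 18/ln 2, about 26 months):

```python
DOUBLING_MONTHS = 18.0  # assumed Moore's-law doubling period

def total_time(runtime_months: float, wait_months: float) -> float:
    """Wait, then run on hardware that has sped up in the meantime."""
    return wait_months + runtime_months / 2 ** (wait_months / DOUBLING_MONTHS)

def best_wait(runtime_months: float) -> float:
    # brute-force search over waits of 0 to 1200 months in 0.1-month steps
    candidates = [w / 10 for w in range(12001)]
    return min(candidates, key=lambda w: total_time(runtime_months, w))

wait = best_wait(1200)  # a 100-year computation on today's hardware
remaining = 1200 / 2 ** (wait / DOUBLING_MONTHS)
print(remaining)        # ~26 months left to run once you finally start
```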
More like "why start walking now when I can wait for a bus." If it's a trip to the corner store (like Saturn), get walking. If you're crossing the city (like Alpha Centauri), wait for the faster transport.
In fact, is it possible that most of the energy it got came not from humans but from planetary gravity assists? Hence we need someone to keep calculating when the best time is to send another one. Not that we have sent any...
It has been 47 years since a person walked on the moon, so it might just be a good idea to do something when you can and not wait for it to be possibly easier to do in the future.
This is why I haven't invested in solar yet. Even with subsidies, it looked to me like the panels would take far more than twice as long to pay for themselves as it would take for the price to halve, based on current trends. Might as well wait a few years.
I think it might be time to look again? Cost of panels is extraordinarily low and cost of installation (e.g humans) is not on a similar curve. I looked at price/watt and went ahead and installed. The payoff period is ~11 years, 1 1/2 years in. Warranted to 20. You could take panel price to zero and it wouldn't have changed the math much.
Not a comment of the article itself, but for anyone interested in both Voyagers, PBS recently did a great documentary called "The Farthest". I highly recommend it.
BBC made a series called The Planets (2019) that talks about almost all space probes from the last 30 years and the data retrieved so far, you might have access to that but not sure of your country.
Do you mean that the website is not available outside of the U.S.? "The Farthest" is also on Netflix, but I don't know about non-US availability. It's also available for rent/purchase on Youtube. https://www.youtube.com/watch?v=Cu3kuUB1sOQ. If all else fails, it's available on DVD -- I bought two copies myself so I can loan it to friends without worrying about it being lost.
I'm disappointed that even as it became very obvious we were going to continue gathering useful information from Voyager 1/2, no follow-on missions have been launched.
I suppose the idea is that there is more to be gained via specialized missions inside the solar system, compared with new long-running extreme-range probes, but you would think there might be some value in starting to send a chain of probes out, so that communication can be maintained even further.
Voyagers 1 and 2 took advantage of a fleeting phenomenon - an alignment of all the outer planets that allowed successive gravity assists, and so let us send probes out to those planets and then on to escape velocity with a very small launcher/budget. These come around every 175 years, so the next point at which solar system escape will be that easy is around 2150.
All the ideas thrown around for follow-on missions have relied either on very powerful boosters (on the scale of SLS), smaller probes (New Horizons, a bit over half the size of the Voyagers and on a much lower-energy trajectory, despite a more powerful launcher), or an Oberth maneuver close to the sun in a very challenging thermal and radiation environment.
The problem is we can't launch another Voyager mission. The Voyagers took advantage of the planets being aligned in just the right way to allow them to slingshot through the outer solar system. This won't happen again for another 100 years or so.
Now we could design probes that are just for going into interstellar space but it would be a hard sell when you are competing against missions like sending a drone to Titan. Even the New Horizons mission to Pluto barely made it through NASA's project selection process and IMO expanding our knowledge of Pluto is more valuable (and sellable to the public) than a mission to literally nothing.
"Future New Horizons extended missions, if funded by NASA, could explore even farther out. The spacecraft is on an escape trajectory from the Sun, traveling about three astronomical units per year. Moreover, New Horizons and its payload sensors are healthy and operating perfectly. The spacecraft has enough power and fuel to operate into the mid-2030s or longer, perhaps enough to reach the boundary of interstellar space."
Yeah but what do you even do out there with the sensor payload it has?
Its imaging devices really aren't structured for deep space remote sensing. They were built to image planets and planet-sized objects.
The cynic in me thinks that there is little marketing value in throwing probes into the void, whereas landing and taking pictures of “the new frontier” can be sold to the (taxpaying) public more easily.
I've always fantasized about drifting out there in person.
Wouldn't a human operator in such spacecraft gather more data by being able to reorient instruments in realtime etc. and spotting minute phenomena that would otherwise be missed?
If NASA or Elon Musk or anyone has considered such a mission but can't imagine asking for volunteers for it, let me know. I don't need a ride back.