Uhhhh this statement in the article about halfway down is incredible!
“You need to be able to tell the difference between one wave front and the next, and if the next wave front is 1.3 mm behind and traveling at the speed of light, then you need to reliably distinguish between events 4 picoseconds apart (4 trillionths of a second). So, every telescope needs a shiny new atomic clock and a really fast camera. You begin to get a sense of why the data consolidation looks more like a cargo shipment than an email attachment; trillions of snapshots every second of not just the waves you’re looking for, but also the waves that will cancel out once all the data is processed. Add to that the challenge of figuring out (generally after the fact) where every detector is to within a fraction of a millimeter even as their orientation changes and the Earth rotates, and you’ve got a problem.”
Calling it a "really fast camera" elides much of the actual difficulty. We're trying to tag individual wavefronts of light at different telescopes, record them, and then play them back at a central "correlator" with the appropriate delays so that the waves come to a focus.
For a wavelength of 1.3 mm, we'd want the time tagging to be better than a quarter of the wavelength at least - say 0.3 mm. The speed of light is 300 mm/ns (a foot per nanosecond is the shorthand beloved of circuit and chip designers). So, for 0.3 mm, we're going to have to get down to a wavefront tagging accuracy of 0.001 ns.
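A quick back-of-envelope check of those numbers (just restating the arithmetic above in code):

    # Wavefront timing at 1.3 mm, using c ~ 300 mm/ns as above.
    wavelength_mm = 1.3
    c_mm_per_ns = 299.8

    period_ns = wavelength_mm / c_mm_per_ns          # one full wave: ~0.0043 ns (~4 ps)
    quarter_ns = (wavelength_mm / 4) / c_mm_per_ns   # quarter wave: ~0.0011 ns (~1 ps)
    print(period_ns, quarter_ns)

The article's "4 picoseconds" is the full wave period; the quarter wave is the ~0.001 ns tagging budget above.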
No clock is going to get there, but if we can get ~close enough, we can use a procedure called fringe fitting to determine the clock corrections by looking at the wavefronts. (Does it line up this way? How about this way? How about now? Yes, it's as laborious as it sounds, but computers, eh.)
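A toy version of that lag search, with a made-up stream and a made-up clock error (nothing here is real instrument code):

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal(4096)                # stream from telescope A
    true_lag = 37
    y = np.roll(x, true_lag) + 0.5 * rng.standard_normal(4096)  # telescope B: delayed, noisy

    # "Does it line up this way? How about this way?" -- try every lag,
    # keep the one with the strongest correlation.
    lags = np.arange(-100, 101)
    scores = [abs(np.dot(np.roll(x, k), y)) for k in lags]
    print(lags[int(np.argmax(scores))])          # recovers 37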
This is all in the calibration of data, before we do the Fourier inversion to create images - the magic of radio interferometry is that we can record the signal to disk while preserving phase. Optical photons cannot be recorded and played back with phase preserved - optical interferometry has to split up the photon streams and send different parts to be correlated against streams from other telescopes, so you run out of signal quickly. Meanwhile, we can record radio waves at the 27 VLA dishes, say, and play them back for correlation on all 27*26/2 = 351 baselines, no problem. That's why radio VLBI is a thing, but not optical VLBI.
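The baseline count is just pair-counting, for what it's worth:

    from itertools import combinations

    dishes = range(27)                       # the VLA example above
    # one recording per dish, replayed against every other dish's recording
    baselines = list(combinations(dishes, 2))
    print(len(baselines))                    # 27*26/2 = 351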
Even as a professional radio astronomer, I find the underlying physics deep and almost magical.
> That's why radio VLBI is a thing, but not optical VLBI.
Hi, long baseline optical interferometrist here who specializes in modeling and image reconstruction.
To set the record straight, long baseline optical interferometry really is a thing. At present there are two optical interferometers operating in the USA and one under construction: Georgia State University's Center for High Angular Resolution Astronomy (CHARA), the Navy Precision Optical Interferometer (NPOI), and New Mexico Tech's Magdalena Ridge Optical Interferometer (MROI, the one under construction). Europe operates the Very Large Telescope Interferometer (VLTI) in Chile. Australia has the Sydney University Stellar Interferometer (SUSI). Optical interferometers have been around for a really long time: Michelson famously measured the diameter of Betelgeuse in December 1920, and the first image from an optical interferometer, of Capella, was produced by the University of Cambridge's COAST telescope in September 1995.
The key difference between VLBI and optical interferometry is that we must combine the light from each telescope in real time, rather than recording the RF data to disk and forming the interference patterns later using correlation. Our interference patterns are recorded on high speed cameras, extracted, calibrated, and then stored as OIFITS files. Images are later reconstructed from these files using a variety of methods, including Markov chain Monte Carlo and regularized maximum entropy.
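For a flavor of that reconstruction step (this is not any facility's actual pipeline, just a minimal regularized maximum-entropy sketch on synthetic Fourier samples, with every value made up):

    import numpy as np

    n = 16                                    # tiny n x n image
    rng = np.random.default_rng(0)
    uv = rng.uniform(-0.4, 0.4, (60, 2))      # sparse (u, v) sampling points
    yy, xx = np.mgrid[0:n, 0:n]

    # Forward model: direct Fourier transform at the sampled (u, v) points.
    A = np.exp(-2j * np.pi * (np.outer(uv[:, 0], xx.ravel())
                              + np.outer(uv[:, 1], yy.ravel())))
    truth = np.exp(-((xx - 8) ** 2 + (yy - 8) ** 2) / 6.0).ravel()
    vis = A @ truth                           # synthetic "observed" visibilities

    img = np.full(n * n, truth.sum() / (n * n))   # start from a flat image
    lam, step = 1e-2, 1e-5
    for _ in range(5000):
        grad_chi2 = 2 * np.real(A.conj().T @ (A @ img - vis))
        grad_ent = -(np.log(img / img.mean()) + 1)  # gradient of entropy -sum x*log(x/m)
        img = np.clip(img - step * (grad_chi2 - lam * grad_ent), 1e-9, None)

    print(np.abs(A @ img - vis).max())        # residuals shrink as the image fits the data

Real packages add closure quantities, careful priors, and much better optimizers, but the "fit the data, regularize with entropy" skeleton is the same.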
Except for the CLEAN deconvolution process, the methods used to reconstruct images from the EHT data are identical to what optical interferometry has been doing for decades (see https://iopscience.iop.org/article/10.3847/2041-8213/ab0e85, Section 2.2.2 for references to literature). The maximum entropy process used for optical interferometric image reconstruction was, in turn, developed for MRI image reconstruction.
Don't get me wrong, I am not attempting to trivialize the result of the EHT team. The effort involved is monumental and the result is astonishing. In fact, I suspect my facial expression was very similar to Katie Bouman's now famous photo when I first saw the image. Then my jaw hit the floor when I found that some of my work (Baron, Monnier, Kloppenborg 2010) was cited in their imaging paper! However, my first inspection of the "eht-imaging" and "SMILI" repositories has yet to reveal anything new or novel that is not regularly employed by optical interferometrists.
Because you'd also like to know the phase of the waves. If you get unlucky and sample only the DC offset of the wave (think of it as the zero crossing of a sine wave), then you have no idea what the phase is: the peak could be before the zero crossing and the minimum after it, or the opposite (a pi offset in phase). Granted, this is unlikely to occur. Ideally, you want four pieces of data per wave - the zero crossings and the max/min - and from those you can recover both the amplitude and the phase (with a bit of luck in where the samples land).
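A quick check of that four-samples-per-wave claim (toy values, not instrument code):

    import numpy as np

    A_true, phi_true = 1.7, 0.6
    # Sample A*sin(theta + phi) at quarter-wave spacing: 0, 90, 180, 270 degrees.
    s = A_true * np.sin(np.pi / 2 * np.arange(4) + phi_true)

    i_part = s[0] - s[2]            # 2*A*sin(phi)
    q_part = s[1] - s[3]            # 2*A*cos(phi)
    print(np.hypot(i_part, q_part) / 2, np.arctan2(i_part, q_part))  # ~1.7, ~0.6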
Can you not make a coherent quadrature detector? I was under the impression that those were pretty common in optical communication systems. Edit: never mind - 230 GHz, so millimeter wave. Mixers and oscillators exist at that frequency; are the noise figures just too high to use?
Luck isn't involved as long as you can ensure your sampling noise is uncorrelated with the incoming wave, either by direct insertion of dither or characterization of environmental noise. Regardless, your point still stands, just a fun tidbit.
> the magic of radio interferometry is that we can record the signal to disk while preserving phase. Optical photons cannot be recorded and played back with phase preserved
Why is that the case for optical photons, if the reason is “deeper” than just a higher frequency, as you answered elsewhere?
I can’t speak to any physical limitations that this poster seems to be speaking to (I only studied physics theory so I’m not too sharp on the details of lab / experimental devices / apparatuses) but I would guess intuitively that a fundamental limitation is that radio tends to be more coherent, so you aren’t relying on individual photons but rather aggregating a bunch of photons to measure a wave. In contrast, optical light tends to be incoherent (unless from a laser), so you have to measure individual photons and they aren’t really correlated with each other (so interferometry doesn’t work).
Yes, but I did not use that one, as I don't like that visualization. My reptilian brain made it sound way bigger than it actually is. As the horizon is only 5-10 km away, I cannot really estimate a distance of 500 km. I can see the problem of measuring 5 atoms' height at arm's length, though, even when I cannot imagine the size of an atom.
Black holes are small things. They're just very heavy.
Messier 87* has a Schwarzschild radius of ~17.784 light hours. The scale of the universe is enough that the mother of my ex had a panic attack when watching a YouTube video about things much smaller than that — hypergiant stars.
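That ~17.8 light-hour figure checks out if you assume M87* is about 6.5 billion solar masses:

    G, c = 6.674e-11, 2.998e8            # SI units
    M = 6.5e9 * 1.989e30                 # assumed mass of M87*, in kg
    r_s = 2 * G * M / c**2               # Schwarzschild radius in meters
    print(r_s / (c * 3600))              # ~17.8 light hours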
From what I understood, it was the realisation that the Earth, that everything and everyone she’d ever known, was an invisible speck next to an invisible speck compared to the largest star.
That's a good point. Perhaps we need to just use things that are well understood.
The moon is about 350,000 to 400,000 km away. By my estimate (of other people's estimate), we're looking at a spot in the sky roughly the size of a dime (US 10-cent coin) on the moon.
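Roughly checking that, with assumed dime and Moon numbers:

    import math

    dime_m = 17.9e-3                     # US dime diameter, meters
    moon_m = 384_400e3                   # average Earth-Moon distance, meters
    uas = math.degrees(dime_m / moon_m) * 3600 * 1e6
    print(uas)                           # ~10 micro-arcseconds

That comes out to ~10 microarcseconds, within a factor of a few of the ~40 microarcsecond shadow, so "roughly the size of a dime" is the right order of magnitude.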
I think my reptilian brain understands the distance of the moon and the size of a dime.
Hmm. My inner eye can’t really get a handle on the distance to the moon. I know it’s about 30 Earth-diameters away, and 400 times further than my longest cycle ride, but my mind somehow keeps shortening every distance longer than I can travel in a day under my own power to similar levels of “quite far”.
You can get a sense of scale from mountain landscapes or city skylines. As you get closer to a city, you get a better sense of scale and size based on how much the city has grown or shrunk in your view.
The Moon is very far away, but it DOES grow and shrink as it comes closer to and further from us. Using that size differential, you can get an innate feel for the size of the Moon. Technically speaking, just driving towards (or away from) the Moon has you traveling closer to or further from it, and gives you a sense of scale.
The next time you drive to a major city or a large, recognizable landscape, keep an eye on how big or small the mountains (or buildings) look and how quickly they move in parallax against the foreground. It really does give an instinctive sense of scale. Train this instinct well enough, and you can use it on the Moon.
The trouble with driving (or trains) is that my unthinking processing treats the speed as constant and shrinks the distances. Even cycling 1080 km along the Rhine, from the North Sea to the Swiss not-quite-Alps, had that foreshortening, though to a lesser degree.
The specific mention of the box of hard drives in the press release was because there are no flights in or out of the South Pole (where the 10 m South Pole Telescope is) for ~9 months in winter, and the Internet access there is very expensive and not very high bandwidth.
This is why you aren't likely to be doing VLBI in the visible light spectrum any time soon. Your wavelength is three orders of magnitude smaller, so the equipment you use to capture phase information has to be three orders of magnitude faster, and the correlation is likely to require at least three orders of magnitude more capacity.
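The ratio, assuming ~500 nm visible light:

    visible_m = 500e-9            # assumed green-ish visible wavelength
    eht_m = 1.3e-3                # the EHT observing wavelength
    print(eht_m / visible_m)      # ~2600x, i.e. about three orders of magnitude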
Everybody can understand at least the introduction. The author treats two related problems. The first is the well-known (today) integration of data from several separate antennae. The second is the problem of recovering an image of an object from the light reflected off a white wall. They are formally quite close, and the underlying math is, at many points, the same.
You can't just elide the verb there... The comment says the author "treats two related problems" in the paper, one of them being the integration of data. Is that not true? The paper certainly covers that topic.
> Once we set up an array of space telescopes throughout cislunar space (the volume inside the Moon’s orbit) we’ll get pictures of the SMBs in the cores of every nearby galaxy and that’s when the science really gets started.
This gave me goosebumps. That's the kind of stuff that justifies a permanently inhabited moon-base.
It justifies telescopes in really high orbits, but not on the Moon: unless the telescope is manufactured on the Moon, the logistics of flying a large telescope there and safely landing it are so complex that it's way easier to leave it in Earth's orbit and get the same results from there.
Stationkeeping within a fraction of a millimeter is much easier when your instrument is on solid ground. It might simplify the platform to the point where descent into the lunar gravity well is worthwhile.
Building on the surface destroys the entire benefit of the project. The Moon isn't large enough to give you the effective aperture size you need - if it were, you'd be fine on Earth.
If you took the straw from a Big Gulp (1/4 inch in diameter) and made the straw 20,000 miles long, then looked at the sky through that straw, the patch of sky you'd see would be the size of the shadow of the M87 black hole.
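The numbers hold up, with the usual unit conversions:

    import math

    d = 0.25 * 25.4e-3          # 1/4 inch straw opening, in meters
    L = 20_000 * 1609.34        # 20,000 miles, in meters
    uas = math.degrees(d / L) * 3600 * 1e6
    print(uas)                  # ~41 micro-arcseconds, about the shadow's apparent size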
The real story is that my childhood was filled with lies, and every "photo" of a black hole I saw in grade-school science books was really an artist's rendering.
What you can do is use methods where you do not need any calibration whatsoever and you can still get pretty good results.
So here, at the top, is the truth image, and this is simulated data: as we increase the amount of amplitude error you can see ... it's hard to see ... but it breaks down once you add too much gain. But if we use just closure quantities, we are invariant to that.
So that has actually been a really huge step for the project, because we had such bad gains.
~~~~~~~
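(Aside, on what "closure quantities" buys you: each station's unknown gain phase enters the two baselines it touches in a triangle with opposite signs, so the summed phase around the triangle is immune to it. A sketch with made-up numbers:)

    import numpy as np

    rng = np.random.default_rng(2)
    true_phi = rng.uniform(-np.pi, np.pi, 3)       # phases on baselines 12, 23, 31
    g = rng.uniform(-np.pi, np.pi, 3)              # unknown per-station phase errors

    obs = np.array([true_phi[0] + g[0] - g[1],     # baseline 1-2
                    true_phi[1] + g[1] - g[2],     # baseline 2-3
                    true_phi[2] + g[2] - g[0]])    # baseline 3-1

    wrap = lambda p: np.angle(np.exp(1j * p))
    print(wrap(obs.sum()), wrap(true_phi.sum()))   # identical: the gains cancel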
They also deleted multiple critical comments from that video presentation.
E.g. "Pratik Maitra" posted multiple comments that later disappeared.
Do you think the fact that the CT scanner at your local hospital needs to be calibrated, and its output computationally reconstructed from X-ray intensities, means it does not produce an "image"?
When we use side-scan sonar to create representations of the ocean floor (e.g. https://commons.wikimedia.org/wiki/File:Laevavrakk_"Aid".png), they are computationally reconstructed from the raw data which are not intrinsically recognized as pixels without reconstruction. Are these not "images"?
What is your actual contention here? Is it that any representation which is not the result of a traditional visible-light camera doesn't count as an "image"?
If so it's an irrelevant distinction to make. If not, you need to articulate in a specific and informed way why the way they reconstructed the image was wrong or could be improved.
It seems from your blog that you don't really understand what a "prior" is and why it might be useful for this kind of signal processing.
> CT scanner at your local hospital needs to be calibrated
Of course the scanner (and any other measurement tool) needs to be calibrated. Specifically, the scanner (and telescope) needs to be pre-calibrated based on already known samples.
In the case of a telescope, it needs to be precalibrated based on known images of remote stars.
Katie Bouman (the face of the EHT imaging team), however, claims: "you do not need any calibration whatsoever and you can still get pretty good results"
The EHT team tested for some biases, but did not test for the most significant one.
Because they are trying to make an image of a black hole, their strongest bias is to see a black hole in anything.
So they should have tested that their final implementation of the "imaging method" does NOT see a black hole when the incoming sparse data does not contain one.
Unfortunately, there is no such test in the presentation.
The EHT team tested that an "imaging method" trained to recognize a disk (without a hole) is still able to recognize a black hole. See it at [31:55].
But they did not test the reverse: train an imaging method to recognize a black hole, then feed sparse disk data to it. Would it see the disk, or would it still see a black hole?
How about feeding it sparse data from 2 bright stars? Would an imaging method trained to recognize black holes still be able to see those 2 stars?
Unfortunately, there was no testing like that ... or worse -- they did such testing, but then discarded the results because they would not impress the public and financial sponsors.
Write a paper or blog post that convincingly makes your case and shows that you deeply understand their approach, so that you are qualified to criticize its flaws.
If you actually believe their result is fake, then it's not like the people you need to convince are hacker news readers; you need to convince other physicists who are in a position to agree with you and do something about it.
Anyway if you go around pointing out things like "comments were deleted! they must be covering something up" you are just (rightly) written off as a conspiracy theorist.
Writing a blog post that you do not succeed in publicizing is the same as not writing a blog post.
Convincing people is not a side effect, it is the goal of a post, or of your strong stance.
The people you need to convince are people who know this subject well. You are in the wrong place. Physicists or data analysis people, whatever, people here are not deeply informed on this, and their opinions, whichever way they go on this, would be pretty irrelevant to the truth of the matter.
Comments were almost certainly deleted for entirely different reasons than suppression of the truth. Scientists, in reality, welcome well-reasoned criticism. Bizarre and ill-argued salvos, however, may very well be ignored or deleted.
2nd stupid question:
I thought gravity acting on light was a 'light as particle' thing, rather than a 'light as wave' thing? If that were the case, gravity acting on light-as-particle but manifesting in light-as-wave doesn't seem consistent.
I think most of the time it's best to think in terms of waves, but every now and then they exhibit particle-like behaviour. Any such attempt at simplification hits problems though.
I think the best approach is to be pragmatic and simply follow the observations and what the experiments tell us, rather than force it into one paradigm or the other and complain when it doesn't make sense. This is sometimes referred to as the "shut up and do the math" approach.
A photon does not switch from particle behavior to wave behavior. Physicists do. These are two different models for the same thing.
In some ballistics problems you'll consider a flat earth (baseball problems); in some others, a spherical one (spaceflight). It is about which model makes the math simpler.
The equivalent of 'slowing down' for light is stretching out (increasing) the wavelength. It reduces its energy in the same way that gravitational deceleration reduces an object's kinetic energy, but without affecting its 'speed'.
The model of gravity as distorting 4D spacetime helps this make sense; just as atomic clocks run slower in Earth's gravity well, an atomic clock placed near a black hole would also run much slower. Similarly light leaving the black hole has to move through curved spacetime to get out of the gravity well.
This also circumvents all the wave/particle questions; it doesn't matter which conceptualisation you choose for the light, it's the space the light is moving through that is distorted.
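For the static Schwarzschild case, the stretch factor seen by a distant observer is 1/sqrt(1 - r_s/r); a quick table with illustrative values:

    import math

    r_s = 1.0                                  # work in units of the Schwarzschild radius
    for r in (10.0, 3.0, 1.5, 1.1):
        print(r, 1 / math.sqrt(1 - r_s / r))   # wavelength stretch seen far away

The factor diverges as r approaches r_s, which is the "light can barely climb out" picture in wavelength terms.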
So if it's a matter of increasing the effective aperture, does that mean we can launch some spaceships and do the same trick with an arbitrarily large interferometer?
Yes, in theory. The challenges are likely to be dish size and getting accurate enough positioning. Not to mention the downlink capacity required (and also onboard storage).
Well, each antenna needs to be sensitive enough to pick up the signals at all.
You also need very high data rates for producing interference fringes with 1.3 mm waves, which is probably one of the reasons why a satellite like RadioAstron works at longer wavelengths.
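To see why, a back-of-envelope rate with assumed EHT-like parameters (not the actual recording spec):

    bandwidth_hz = 2e9                                   # assumed recorded bandwidth
    pols, bits_per_sample = 2, 2
    rate = pols * (2 * bandwidth_hz) * bits_per_sample   # Nyquist: 2 samples per Hz
    print(rate / 1e9, "Gbit/s per station")              # 16 Gbit/s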
I’ll admit: I wasn’t a person who was wowed by the picture of a black hole. For all intents and purposes, it isn’t a very good picture.
Reading this write-up gave me a much better appreciation for the difficulty of actually capturing that image, which, I'm sure, is what people wanted me to focus on when seeing the photo. But that sort of context requires a write-up like this; it can't be relayed through a small, blurry picture.
Does anyone have a link to the original, full-size image of the black hole? I can only find 800x600 versions lying around. I want to know how large the original is. =)
Honestly curious: why did Bouman get the credit she did? According to GitHub, she didn't commit nearly as much code: 2.5k lines vs 850k. I understand she might have had a greater role in management etc.; my point is more that it was a team effort (which she reiterated). Is this a case of the media trying to "celebrate diversity" by unfairly cutting out the whole team which accomplished something incredible? Or is she just the public face and responsible for their media presence?
> Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents.
Because that's how the world works: project leaders get fame and line workers don't.
Somehow people are OK when Elon Musk gets credit for Tesla and SpaceX, or Steve Jobs for the iPhone, but suddenly if it's a woman, people will dig through the GitHub accounts.
I don't think the Elon Musk or Steve Jobs analogy is accurate. Katie Bouman was co-leader of one of four imaging teams on the project. It seems to me the other person who co-led that team deserves equal praise and fame, and perhaps also the leaders of the other 3 teams.
Having said that, the look of excitement and joy on Dr Bouman's face in that photo is so lovely and relatable that the pic was always destined to go viral. So in that sense you could say she was bound to become the human face of the project.
My guess is because the people who found the Higgs boson, landed the lander on the comet, etc. are for all intents and purposes not well known: there wasn't a lot of fanfare for them so much as the discovery. As such, people are oddly upset about her getting praise for this.
I look at it two ways. One, what she did was not insignificant; whether she received more praise than someone who happened to be a guy discovering something else is not a concern that is going to cause me to lose sleep at night, nor is it something she should be punished for. Two, if her story bubbling up inspires girls (or anyone, I suppose) to get involved in, or at least more interested in, the sciences, hell, I'd say we're all better for it.
I mean, even if you believe that all gender (and minority in general) issues have been solved in the sciences, as of [pick arbitrary date between 1980 and 2019], and there is no longer any dismissal, harassment, or discouragement of women pursuing a career in the sciences, I think we as a society can handle another forty, fifty years of rubbing every "see? Women can too do science as well as the menfolk" achievement in the metaphorical face of the centuries of dismissing women as by-and-large sub-intelligent creatures not temperamentally suited to any serious intellectual work.
Even if it is completely unnecessary nowadays, because women are no longer dismissed, individually or collectively, as not contributing to science, because women are no longer harassed, sexually or misogynistically, when they try to pursue a career in science, I still think it's fair to let people glory for awhile longer in achieving what people once thought they wouldn't, even if most of the disbelievers are centuries dead and hardly any of them are still alive and posting on the internet.
I recently had a percussionist in a band I was in assert that "women don't have the brains to be mechanical engineers", and he honestly seemed to be serious. These people still exist.
edit: I realise the post I'm responding to is sarcastic, but not everyone who thinks women are intellectually inferior to men is a troll
Well, the thing about a black hole - its main distinguishing feature - is it's black. And the thing about space, the colour of space, your basic space colour, is black. So how are you supposed to see them?
It seems like the main issue here was that it was very far away and comparatively small next to other features like, say, galaxies themselves.
And you can notice at the bottom we get a really terrible reconstruction, even though it fits the data very well, because it maybe wants to smooth out the flux as much as possible, and we don't see things like that in the true data.
==============
They simply delete the image interpretation because it does not fit the theoretical image that they want to see. How convenient. They call it "Calibration Free Imaging".
Just stupefyingly complex and amazing!