Supersharp Images from New VLT Adaptive Optics (eso.org)
222 points by sohkamyung on July 18, 2018 | 56 comments



This is very exciting. I'd love to see some of this make its way into amateur equipment.

Technology has helped us go past what would have been thought of as possible with similar optics equipment 50 years ago. For the most part, optical mirrors and lenses are the same, but what we can now do with them has changed quite a bit.

For example, here is a video of Mars through a small telescope: http://i.imgur.com/8juHPdn.gifv

If we take the best parts of each frame in that video and combine them in a smart way, a process called lucky imaging, we can reduce the impact of the atmosphere: http://i.imgur.com/CzLZTlv.png
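
For anyone curious, here is a minimal sketch of the whole-frame version of that idea (real tools like RegiStax or AutoStakkert also align the frames and select per region rather than per frame; the placeholder `frames` array is my assumption):

    import numpy as np

    def sharpness(frame):
        # Variance of a simple Laplacian response: sharper frames score higher.
        lap = (-4.0 * frame
               + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
               + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
        return lap.var()

    def lucky_stack(frames, keep_fraction=0.1):
        # Keep only the sharpest fraction of frames and average them.
        scores = np.array([sharpness(f) for f in frames])
        n_keep = max(1, int(len(frames) * keep_fraction))
        best = np.argsort(scores)[-n_keep:]
        return frames[best].mean(axis=0)

    # frames: (n_frames, height, width) array of already registered video frames
    frames = np.random.rand(200, 128, 128)   # placeholder data
    result = lucky_stack(frames, keep_fraction=0.1)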


Uh, there is much more than that. If you feel fancy, you can reverse the distortion in each frame and "fix" the atmospheric issues, given sufficient SNR. Unfortunately this is not enough for faint stars with reasonably sized optics...

See the work here and the comparison to lucky imaging using (iirc) AviStack: https://publikationen.uni-tuebingen.de/xmlui/handle/10900/49...


Thank you for the link, that is one hell of a reference. Please share more if you have any others handy.

I wonder how much of that work would be generalizable through specialized neural nets: https://arxiv.org/abs/1702.00403


Err, this is far from neural networks. I once tried to combine it with something that can do the optimization faster [0], so that I could arrange it as a tree structure (to prevent feedback from amplifying iteratively, kind of like how you need to be careful that your GAN doesn't start producing dog pictures instead of cat pictures): run the images that comprise a subtree against the best guess of its sibling subtree, and vice versa, then use these new estimates of the distortion to get a new best guess from all images contained by the parent of those siblings. Repeat until you reach the top. You can use more than one sibling to handle non-power-of-two frame counts. There is partial software on my GitHub; in case someone is interested I can be reached, and while working for free isn't really in my interest (other things are more fun/seem more promising), I'd be happy to start working on it again if there was a reason.
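
For what it's worth, here is a purely structural sketch of that tree reduction; the two helper functions are placeholders I made up, since the actual frequency-domain distortion estimation is the hard part and not shown:

    import numpy as np

    def estimate_distortion(frame, reference):
        # Placeholder: a real implementation would estimate the per-frame
        # distortion against the reference (e.g. in the frequency domain).
        return None

    def apply_correction(frame, distortion):
        # Placeholder: a real implementation would invert the estimated distortion.
        return frame

    def combine(group_a, group_b):
        # Correct each sibling group against the other's best guess, then merge.
        ref_a = np.mean(group_a, axis=0)
        ref_b = np.mean(group_b, axis=0)
        new_a = [apply_correction(f, estimate_distortion(f, ref_b)) for f in group_a]
        new_b = [apply_correction(f, estimate_distortion(f, ref_a)) for f in group_b]
        return new_a + new_b

    def tree_reduce(frames):
        # Pair up sibling groups and work upward until a single group remains.
        groups = [[f] for f in frames]
        while len(groups) > 1:
            merged = [combine(groups[i], groups[i + 1])
                      for i in range(0, len(groups) - 1, 2)]
            if len(groups) % 2:        # odd count: carry the last group upward
                merged.append(groups[-1])
            groups = merged
        return np.mean(groups[0], axis=0)

    frames = [np.random.rand(64, 64) for _ in range(8)]   # placeholder data
    best_guess = tree_reduce(frames)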

Regarding your paper, I have to remind you that Michael got nice results from upsampling the images before running his software. I actually planned on using the texture units for this, to save on bandwidth/address-calculation overhead in the pending partial rewrite of my software. The GAN there also uses just a single frame, whereas this uses the properties of the distribution of the distortions, seen in the frequency domain, to figure out what the distortions most likely are, and then combines the SNR from the many frames into a single image. There is research using a method very similar to Michael's on a GPU (a GTX 580 or so, iirc) which does >15 fps @ 720p in real time, with less than 2 frames of latency and no more than 1 frame of necessary latency if you run the GPU work queues rather empty (risking underutilisation if you don't get CPU time fast enough again). Combine that with e.g. a nice Volta DGX, and something like a 400 mm Schmidt camera including a field-flattening lens and a CMOSIS CMV12000 (like, take the sensor out of an AXIOM Beta camera, shrink the board around it to the smallest you can get, and stick it with a lens on top facing a 20 cm spherical mirror, with a corrective plate ~80 cm from the mirror). This is about ~$1000 of optics, ~$2500 of imaging hardware (including what's necessary to get the full stream at >100 fps into the DGX), and whatever rent you pay for the DGX. Distortion-free 10x slow motion with a pixel size of 14 mm at 1 km distance.

If you'd want to sell such a thing to non-military...

[0]: Kim, Dongmin; Sra, Suvrit; Dhillon, Inderjit S. "A non-monotonic method for large-scale non-negative least squares." Optimization Methods and Software, vol. 28, no. 5, 2013, pp. 1012-1039. https://pdfs.semanticscholar.org/622c/84cfba9781ad846105f28d...


I'm a PhD student who has been working with globular cluster data from this instrument for quite some time now. I'll be happy to answer your questions!


How does it compare to my amateur attempts with a 12" Newton https://www.flickr.com/photos/joelkuiper/42502865635/ :p ?

All kidding aside, do you think there is some scientific value in the efforts of hobby astronomers and astrophotographers around the world?


I gotta say I'm quite jealous of your dark skies and beautiful photography.

Also, how are you overcoming flexure and mirror flop with your setup!? I have trouble keeping a 6" stable for a minute with a reasonable mount. Do you have more info on your setup anywhere?


Are the images natural colour, or have they been 'enhanced' in any way? i.e. is Neptune really that blue?


I see this question a lot. I used to have an obsession with 'true color'; images felt fake otherwise. Artificial.

I'm a working scientist now, and my view has changed. I realize how limited our senses are. How much of the world--of the universe--I'd miss by restricting it to just what my eyes can see natively. Even among colors that I can see, but perhaps the signal is too faint ... I'm a lot more tolerant of color-mapped images now. I don't see them as artificial anymore, but as beautiful and transcendental. A window into a hyper-spectral world normally invisible to me. It's really something special. I wish I could share this perspective with more people.


Alex Grey studied cadavers at Harvard for years. His art tries to show the true medium, not one limited by visible light. Sort of like what Superman might see. Our bodies are emanating light in a spectrum of frequencies (Planck's law). All this light is leaving our bodies at c, while all the light from the universe is coming at us, our "light cone." We see the surface of bodies... but the actual substance of reality has interfering rippling waves emanating and being absorbed... not unlike a pool. So the next time someone tells you someone is ugly, remember that the visible-light surface is just the beginning...

https://m.alexgrey.com/art/paintings/soul/alex_grey_humming_...


The artist who did drawings for Scientific American for many years made his drawings super-real by emphasizing components of interest. And those were black-and-white.


I think the beauty is lost on me when I don't know what the color means. I either want the real deal or to know what the mapping is so I can appreciate that. Otherwise it's just a pretty picture.


> or to know what the mapping is

Agreed! This is important. Scale bars would be nice too, as well as info on other pre-/post-processing. Usually all this is in an associated publication (which is hopefully freely available), since it usually takes a surprising amount of information to fully understand an image like this.

> beauty is lost on me

> pretty picture

Pick one ;) Sometimes we can find things beautiful without fully understanding them (arguably this is always the case). For me, knowing whether it’s derived from real measurements is what matters. But everyone’s threshold is different. I’ve seen beautiful simulated data too, but that’s something different again — more like the beauty of an equation to me.


I think it boils down to two things (at least it does for me):

- If the picture is shown as if it was a photo, how similar is it to what I'd see if I were magically transported in a spacesuit into the object's vicinity?

- If the picture is an obvious false-color render, does it have a reasonable color map, or is it some "artist's impression"?


It's a real eye-opener when you realize that our eyes are no more "true color" than a CCD... I didn't really get that until I took a graduate optical observing class.


The advantage of MUSE is that you get all the color information, i.e. the flux at every wavelength from blue to red. In principle, one can use this together with the sensitivity curve of our eyes to construct a natural image. In this case, I think, they tried to imitate the color scheme of the Hubble image, which is more limited.

In short: Not sure how realistic this is, but one could make a realistic image from the new data.
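
A rough sketch of how one could do that in principle (the Gaussian response curves and the placeholder cube are my assumptions; real work would use measured CIE colour-matching functions and the actual MUSE cube):

    import numpy as np

    # Placeholder stand-in for a MUSE-like datacube: (n_wavelengths, ny, nx)
    wavelengths = np.arange(465.0, 931.0)            # nm
    cube = np.random.rand(wavelengths.size, 64, 64)  # placeholder data

    def gaussian(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    # Crude stand-ins for the eye's colour response curves.
    responses = {
        "R": gaussian(wavelengths, 600.0, 40.0),
        "G": gaussian(wavelengths, 550.0, 40.0),
        "B": gaussian(wavelengths, 465.0, 25.0),
    }

    # Weight every wavelength slice by each response curve and sum along wavelength.
    rgb = np.stack([np.tensordot(responses[c], cube, axes=(0, 0))
                    for c in ("R", "G", "B")], axis=-1)
    rgb /= rgb.max()   # normalise for display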


So MUSE is hyperspectral? there's a full spectrograph at each pixel?


Exactly! One datacube that comes out from the instrument contains 300 x 300 spectra. This is actually the main capability of the instrument which has 24 individual spectrographs. Here's a nice animation of the path the light takes inside MUSE: https://www.youtube.com/watch?v=-fh2Y6Zyhwc&feature=youtu.be...


Awe-inspiring. You can see the individual spectrographs at https://youtu.be/-fh2Y6Zyhwc?t=1089


Could one use that information the other way around to make estimates for expected "missing data" in Hubble images taken in areas where VLT has not looked yet, for example to decide where to look next?

<something something throw machine learning at it cliché>


Not sure if you meant it like this, but redshift estimation comes to my mind. The farther away a galaxy is, the redder it becomes. You can measure the distance (redshift) from galaxy spectra (with MUSE for example) but not directly from HST images. This mapping color -> redshift is called photo-z and was tested with MUSE data in a very famous area observed with HST, the Hubble Ultra Deep Field. https://arxiv.org/abs/1710.05062
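
As a toy illustration of the photo-z idea only (the placeholder data and the choice of regressor are mine; the actual analysis in the paper is far more careful), one could treat it as a regression from broadband colours to the spectroscopic redshifts measured with MUSE:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    colours = rng.normal(size=(500, 4))       # placeholder HST broadband colours
    z_spec = rng.uniform(0.0, 3.0, size=500)  # placeholder MUSE spectroscopic redshifts

    # Learn the colour -> redshift mapping on objects with secure spectroscopic z ...
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(colours, z_spec)

    # ... then predict photo-z for objects that only have HST photometry.
    new_colours = rng.normal(size=(10, 4))    # placeholder
    z_phot = model.predict(new_colours)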



> With this new capability, the 8-metre UT4 reaches the theoretical limit of image sharpness and is no longer limited by atmospheric blur.

Theoretical limit as in diffraction limited? How will this technology "scale" to other frequencies and resolutions? Related to this diffraction limit: is there any overlap in the advances in microscopy and astronomy? For example, do advances in super-resolution microscopy[0] affect advances in optics in astronomy? Could advances in adaptive optics in astronomy somehow translate to microscopy?

(I'm also curious if this technology will make putting telescopes in satellites not worth the cost, but that question was already asked and answered here: https://news.ycombinator.com/item?id=17557482)

[0] https://en.wikipedia.org/wiki/Super-resolution_microscopy


Yes, the diffraction limit is meant here. The VLT has four 8 m mirrors; for each of them the angular resolution limit is wavelength/diameter ≈ 8 * 10^(-8) rad. The practical resolution of the new narrow-field mode is about 4 * 10^(-7) rad, and it was one order of magnitude larger before.
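
A quick sketch of that estimate (the 650 nm wavelength is my assumption; the 1.22 factor is the usual Rayleigh criterion for a circular aperture):

    import math

    wavelength = 650e-9   # m, red end of the optical range (assumed)
    diameter = 8.2        # m, one VLT unit telescope

    theta = wavelength / diameter        # ~8e-8 rad, the quoted limit
    theta_rayleigh = 1.22 * theta        # Rayleigh criterion

    arcsec = math.degrees(theta) * 3600
    print(f"{theta:.1e} rad ~= {arcsec:.3f} arcsec")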

Adaptive optics is the key invention here. As far as I know, it works better in the near-infrared than in the red part of the optical range, and it gets worse toward the blue part. Due to this, our resolution changes as a function of the wavelength, since MUSE captures the flux from all wavelengths at the same time.

ESO wants to achieve an even higher resolution at the 40m Extremely Large Telescope (another order of magnitude better): https://www.eso.org/public/teles-instr/elt/

It's funny that you mention super-resolution microscopy because Stefan Hell, one of the Nobel Prize winners for advances in that field, works in the same city as we do. So far, I don't think we have any overlap with what he does.


> The practical resolution of the new narrow-field mode is about 4 * 10^(-7) rad, and it was one order of magnitude larger before.

Amazing! :)

> As far as I know, it works better in the near-infrared than in the red part of the optical range, and it gets worse toward the blue part.

Do you know what is the reason for this? Noise from Rayleigh scattering? EDIT: Already answered here: https://news.ycombinator.com/item?id=17559121

> It's funny that you mention super-resolution microscopy because Stefan Hell, one of the Nobel Prize winners for advances in that field, works in the same city as we do. So far, I don't think we have any overlap with what he does.

Why not arrange a kind of meet-up? :) Surely exchanging ideas would lead to something interesting, and in the worst case you can at least be inspired by geeking out over mega- and micro-optics together.


In my first year of grad school (1995) our microscopy professor showed us an astronomy adaptive optics paper and said "we're going to do that". Years later, they did that.


The article makes it sound like the approach is so effective that it makes space-based telescopes unnecessary.

Is that true? Does this mean that we should simply use ground-based 'scopes with adaptive optics?


We can achieve a very high resolution from the ground but only in a very small field of view. To cover one typical HST image with MUSE at the VLT, we would need a mosaic of hundreds of exposures. The reason for this is the four artificial guide stars from the lasers. The closer they are together on the sky, the more atmospheric distortion you can correct.

Here is an image of them: https://www.eso.org/public/unitedkingdom/images/vlt-laser-cc...

Some parts of the electromagnetic spectrum are also not possible to observe from the ground. That's mainly UV and shorter wavelengths (X-ray, gamma-rays). We will always need space telescopes if we want to have these photons.


The Hubble field of view is pretty small too, something like 25 arcseconds the internet tells me.


I was comparing it to HST WFC3 with a field of view of 160 x 160 arcsec^2 (https://www.spacetelescope.org/about/general/instruments/wfc...). That's about 450 times larger than the MUSE narrow-field mode FOV.

I think you mean the high-resolution mode of the ACS instrument (https://www.spacetelescope.org/about/general/instruments/acs...) but that is broken and it was not repaired during the last HST service mission.


Actually, what I was looking at was the field of view of an individual MAMA detector in the Space Telescope Imaging Spectrograph; that instrument has about 100 x 100 arcsec^2 of total field of view apparently.


Do the lasers create light pollution problems for the telescopes?


Absolutely! When the lasers are used, MUSE uses a filter at the corresponding wavelength (the sodium D lines) to get rid of the laser photons.

Only one telescope is currently equipped with lasers. The other ones can't observe the same region of the sky when the lasers are activated.


Does adding more lasers fix the field of view problem?


Yes. You need one dot for each patch over which the distortion is approximately constant at a single instant. Now, handling these multiple dots in a good way, that's another story. Compare the patch size/discussion in https://publikationen.uni-tuebingen.de/xmlui/handle/10900/49...


This is very much unexplored territory, but ESO thinks so. The ELT (https://www.eso.org/public/teles-instr/elt/) will use more lasers but the exact configuration is still work in progress, as far as I know.


Not if we want to look in the infrared.


Adaptive optics is really only effective in the infrared. And really only in the near-infrared, as past 5 microns, we can't really see through the atmosphere. In the visible, ground-based telescopes can't match space observatories (atmospheric turbulence is way harder to correct for in the visible).


> In the visible, ground-based telescopes can't match space observatories (atmospheric turbulence is way harder to correct for in the visible).

The image this article is about is mostly in the optical (MUSE only goes from 465 nm to 930 nm, and the synthetic filters used in the MUSE image [4] seem to be quite close to the HST filters used).

> And really only in the near-infrared, as past 5 microns, we can't really see through the atmosphere.

Not quite true [1] (at least if only considering absorption); it's just that the background becomes more and more of a problem (both continuum and narrow emission lines), and one has fewer nicely defined windows of transmission and lots of strongly variable absorption lines (picking dry sites for the telescopes and selecting nights with low water vapour column densities helps). At the VLT, for example, there is VISIR [2], which does mid-IR imaging and spectroscopy.

Of course the sensitivity from the ground is much lower than from space or somewhere in between (for example SOFIA [3], a 2.5 m telescope on an airplane), and some bands of interest are indeed absorbed. But there are projects involving mid-IR observations that can be done from the ground.

[1] https://www.gemini.edu/sciops/telescopes-and-sites/observing...

[2] http://www.eso.org/sci/facilities/paranal/instruments/visir/...

[3] https://en.wikipedia.org/wiki/Stratospheric_Observatory_for_...

[4] https://www.eso.org/public/unitedkingdom/images/eso1824c/


Ah, my bad. I meant that in the visible you can't reach the diffraction limit with AO like you can in the near-infrared. Certainly impressive to match HST from the ground.

I don't think there's been a lot of science done from the ground past 5 microns (not counting SOFIA). Practically, I think everyone is waiting for JWST. A lot of the interesting molecular lines also get absorbed by the Earth's atmosphere.


I'll bite.

In those pictures of Neptune, what is the km-per-pixel we're looking at?

Is there a minimum focal length on this? Purely hypothetical: could we basically see astronauts' footprints on the Moon with this? What about looking into the window of the ISS?


In this narrow-field mode of MUSE, the CCD detector can resolve 0.025 arcseconds per pixel (the arcsecond, 1/3600 of a degree, is a unit of angle used in astronomy). At the current distance to Neptune (according to WolframAlpha: about 30 au = 4.5 billion km), this corresponds to about 500 km/px. Due to observing conditions, I think the real resolution was more like 0.07-0.08 arcseconds, so maybe it was 1000 to 2000 km/px.

I'm not sure if the focal length plays any role here. The resolution is usually limited by the telescope size (true for all telescopes, it scales with 1/diameter) and atmospheric conditions (only relevant for ground-based ones). At the distance of the Moon (about 380,000 km), the physical resolution is roughly 46 m/px, and for the ISS (400 km) it is about 5 cm/px.

If you want to play around with it, here's the formula: length_still_resolved = angular_resolution * distance

The angular resolution is 1.2 * 10^-7 rad (= 0.025 arcseconds converted to radians); distance and length_still_resolved have the same units.
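
A tiny sketch of that formula, with the rough distances I'm assuming:

    theta = 1.2e-7   # rad, ~0.025 arcsec per pixel

    distances_m = {
        "Neptune": 4.5e12,   # ~30 au
        "Moon":    3.8e8,
        "ISS":     4.0e5,
    }

    for name, distance in distances_m.items():
        print(f"{name}: about {theta * distance:.2g} m per pixel")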


From the FAQ: http://www.eso.org/public/about-eso/faq/faq-vlt-paranal/#18

> Q: Could the VLT take a picture of the Moon-landing sites?

> A: Yes, but the images would not be detailed enough to show the equipment left behind by the astronauts. Using its adaptive optics system, the VLT has already taken one of the sharpest ever images of the lunar surface as seen from Earth: http://www.eso.org/public/news/eso0222/. However, the smallest details visible in this image are still about one hundred metres on the surface of the Moon, while the parts of the lunar modules which are left on the Moon are less than 10 metres in size. A telescope 200 metres in diameter would be needed to show them. [continued]
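
A back-of-the-envelope check of that 200-metre figure (the ~1.5 m of detail needed to recognise the hardware is my assumption):

    wavelength = 550e-9      # m, visible light
    moon_distance = 3.84e8   # m
    detail = 1.5             # m of detail needed on the lunar modules (assumed)

    # Rayleigh criterion: aperture required to resolve `detail` at that distance.
    aperture = 1.22 * wavelength * moon_distance / detail
    print(f"required aperture: {aperture:.0f} m")   # ~170 m, roughly ESO's figure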


Can you tell us about your favourite globular clusters? I know some of them have very interesting properties, like having similar stellar ages, but are there any really peculiar ones you can tell us about? Also, I'd love to see some of the images you're referring to.

Thanks!


I like NGC 3201 because we found a stellar mass black hole in it (https://www.eso.org/public/news/eso1802/). There should be many more of them in all clusters, but they are hard to find. Theorists can use this to check their N-body simulations of globular clusters.

Some clusters (omega Cen, 47 Tuc) are really weird and different from all others. We think that they might be the remnant cores of dwarf galaxies.

What images do you mean?



I think this one is even more impressive https://www.eso.org/public/images/eso1824d/

If I understand correctly, the middle image and the one on the right are the same thing, both taken by the VLT. But the right one is using the new MUSE technique.


Kinda hard to find the pics, so here's the top 100: https://www.eso.org/public/images/archive/top100/


Great images. However, the text below the first image says:

> The Very Large Telescope snaps a stellar nursery and celebrates fifteen years of operations

... so presumably most of those were not taken with this new technology.

Thanks for linking anyway, though!


Really cool profile page of the telescope http://www.eso.org/public/teles-instr/paranal-observatory/vl...


Jesus, the description of the technology sounds amazing. I am blown away at how much ingenuity was required to make this.


The Neptune image is breathtaking. It instantly summons a yearning to explore this unknown icy neighbour.


So the light we see from the lasers is emitted by atoms in space? I thought space was a vacuum? Or is it that the emitted light only comes from atoms in the atmosphere? So the further up we go, the dimmer the emitted light becomes.

Also, is there a way to determine how much blurring is happening from gravitational waves? In other words, if a ripple in spacetime washes across the space between us and a far away star, will the star become fuzzy like the schlieren distortion on a hot day here on earth? (faraway objects become ghostlike as heat from the hot ground alters the air density between observer and target)


> The correction algorithm is then optimized ... to reach an image quality almost as good as with a natural guide star.

Then why not use natural guide stars?


Because they have to be very bright and very close to the target. This limits the observable targets quite a lot (for most projects we don't want to observe just anything, but specific targets).


Lots of bla bla bla and only one image! Now I feel cheated, that was clickbait!



