Metamaterial based flat lens promises possible revolution in optics (bbc.com)
148 points by mdf on June 3, 2016 | 43 comments



My understanding of these flat metamaterial lenses is that they're fine-tuned for a single wavelength of light; you can't put one in a camera and expect it to work like a glass lens does. I'm sure there are applications where control of monochromatic light is important, but I couldn't tell you what they are.

It's an awfully important detail to completely omit from an article.


According to the abstract (http://science.sciencemag.org/content/352/6290/1190),

>Diffraction-limited focusing is demonstrated at wavelengths of 405, 532, and 660 nm with corresponding efficiencies of 86, 73, and 66%.

...but I'm not clear on whether that is from a single lens or if they constructed different lenses for each color. (Those wavelengths correspond to violet, green, and red light fwiw) EDIT: the full paper makes it clear this is three separate lenses, so yeah, you're right.


Screens separate different colors into different areas at a small scale; wouldn't that work here as well?


For smartphones, where thinness is very important but area is less so, it would not be unprecedented to make an array of 4 monochromatic cameras and rectify and combine the images computationally.

  R G
  G B
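
For illustration, here's a minimal sketch of the combining step, assuming the four monochrome frames have already been rectified onto a common pixel grid (the function and variable names are made up):

  import numpy as np

  def combine_quad_camera(r, g1, g2, b):
      """Merge four aligned monochrome frames (R, G, G, B) into one RGB image."""
      rgb = np.stack([
          r,                   # red channel
          0.5 * (g1 + g2),     # average the two green frames, as a Bayer
                               # pattern effectively does (green carries
                               # most of the luminance information)
          b,                   # blue channel
      ], axis=-1)
      return np.clip(rgb, 0.0, 1.0)

  # Usage with synthetic 100x100 frames with values in [0, 1]:
  frames = [np.random.rand(100, 100) for _ in range(4)]
  image = combine_quad_camera(*frames)   # shape (100, 100, 3)

In a real phone you would also need parallax compensation between the four cameras, which is the hard part.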


Unfortunately, that is not how light and perception works.

There are thousands/millions¹ of separate visible light frequencies. Our eyes and brains take that all in and do an enormously lossy mapping of it to 3 perceived colors.

If you only record 3 of those thousands/millions of frequencies, you will lose 99.9% of the light, and mostly make black photos.

¹ depending on how wide the frequency interval considered monochromatic is.


This is incorrect.

Cone cells have fairly peaked frequency responses, and thanks to the random projections introduced by the cascade, that is sufficient to fully reconstruct the signal. In other words, the converse of the statement is true: you can retain 99.9% of the perceptual information using 3 sensors; all you need is to perceive over time and to have some randomness in the sensor placement.


They're not really that peaked; the M and L cone responses largely overlap and are each spread over at least a third of the visible spectrum. Pictures at: https://en.wikipedia.org/wiki/Cone_cell


You can maybe retain 99.9% of the color information humans can perceive, but that is a very small part of what's in the visible spectrum signal.

Maybe this example will clarify: our eyes cannot tell the difference between monochromatic yellow light (580nm) and a suitable mix of red (650nm) and green (540nm) light.


That only says something about the display-to-eye end of things. If you record only a subset of the visible spectrum, the image can end up wrong; in the worst case you might not get any color at all.


I didn't quite understand. Are you refuting the eye-argument or the camera-argument?


The original poster was much more apt than your explanation. CMOS sensors in modern cameras actually have arrangements exactly like that on a pixel level. They use filters for each pixel, doubling up on green. With something like this you might be able to combine the filter and the lens.

Your explanation of 'thousands of frequencies' is somewhere between misleading and incorrect. Electromagnetic radiation spans a continuous range of frequencies. Our eyes' 'sensors' are of course not sensitive to exact wavelengths; they respond to a range: some more sensitive to red, falling off through orange, yellow, and green (as the frequency of the light goes up). We are most sensitive to colors in the green part of the spectrum and least sensitive to blue.

Do you know why the sky is blue? It is because air molecules scatter shorter wavelengths much more strongly (Rayleigh scattering); red and green light still reach us largely unscattered, as does some UV.


I think the difference is that a modern camera filters in wide ranges.

So blue is maybe 400-500nm, green 500-600nm and red 550-650nm.

But these new lenses only give sharp images in much narrower bands. If it has blue at 448-449nm, green at 540-541nm and red at 640-641nm, most of the spectrum will be lost.
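
A rough back-of-the-envelope illustration of the difference, assuming an idealised flat spectrum across 400-700nm and perfect top-hat filters (the band edges are just the example numbers above):

  import numpy as np

  wavelengths = np.arange(400, 701)                   # 1 nm steps
  spectrum = np.ones_like(wavelengths, dtype=float)   # idealised flat spectrum

  def captured_fraction(bands):
      """Fraction of total power falling inside the given (lo, hi) passbands."""
      mask = np.zeros_like(spectrum, dtype=bool)
      for lo, hi in bands:
          mask |= (wavelengths >= lo) & (wavelengths <= hi)
      return spectrum[mask].sum() / spectrum.sum()

  broad = [(400, 500), (500, 600), (550, 650)]    # Bayer-like, overlapping bands
  narrow = [(448, 449), (540, 541), (640, 641)]   # ~1 nm design bands

  print(captured_fraction(broad))    # ~0.83: most of the spectrum is used
  print(captured_fraction(narrow))   # ~0.02: nearly all the light is thrown away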


It _is_ possible to do trichromatic color photography with sets of single-color filters. In fact, the earliest color photography was done this way, and Technicolor films were shot in a camera that recorded three monochrome strips through color filters simultaneously. (It is true that the filters generally passed not a single frequency but a narrow band.)


The difference is that they were using film which was sensitive over the entire visible spectrum with color filters that allowed a broad range of wavelengths that are all perceived as the same color. If you had a smartphone camera filtered with a 1 nm bandpass filter for a particular wavelength, you're going to get so little light that it will be like taking pictures in the dark.

The filter approach is still used in most cameras (Bayer filters), but a very narrow wavelength bandpass would not be appropriate for a consumer camera.


Amount of light aside, the color rendition would be horrible too. Akin to the light from an RGB LED panel.


What's wrong with the color rendition of RGB panels? The newest ones are some of the most color accurate displays available.


I don't mean the images displayed on a panel; I mean the light from the panel if it were used to illuminate something else. Because you have only three narrow wavelengths of light, you lose 99% of the color information contained in the illuminated object's absorption spectrum.

Actual LED lamps work around this issue by using phosphors to broaden the spectrum.

This quality of light is measured by the Color Rendering Index; the phenomenon of color differences due to narrowband light sources is known as metamerism.

The images displayed on a panel do not suffer this issue, because our eyes can detect only 3 "dimensions" of color; hence 3 narrowband illuminants suffice. (And in fact narrowband is necessary for a broad color gamut.) It is only when reflection/absorption comes into play that metamerism matters.
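
A toy numerical illustration of that reflection effect, using Gaussian curves as crude stand-ins for the eye's three sensitivities and for an RGB-LED-like source (none of this is real colorimetric data):

  import numpy as np

  wl = np.arange(400, 701, 1.0)    # wavelengths in nm

  def gaussian(center, width):
      return np.exp(-0.5 * ((wl - center) / width) ** 2)

  # Crude stand-ins for the three cone sensitivities.
  sens = np.stack([gaussian(570, 50), gaussian(545, 45), gaussian(445, 30)])

  # An object whose reflectance is a broad bump around 600 nm (orange-ish).
  reflectance = 0.1 + 0.9 * gaussian(600, 40)

  broadband = np.ones_like(wl)                                          # flat source
  narrowband = gaussian(450, 1) + gaussian(530, 1) + gaussian(620, 1)   # three narrow lines

  def perceived(illuminant):
      """Integrate illuminant * reflectance against each sensitivity curve,
      normalised against the illuminant itself (a crude white balance)."""
      response = sens @ (illuminant * reflectance)
      white = sens @ illuminant
      return response / white

  print(perceived(broadband))
  print(perceived(narrowband))
  # The two triples differ noticeably: the narrowband source samples the
  # reflectance at only three points, so the rendered colour shifts.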


The spectral transmission of a typical Bayer filter is fairly wide, with the R, G, and B subpixels each capturing a large range of wavelengths. See http://blog.teledynedalsa.com/2012/11/ir-cut-filter-for-baye... for an example spectrum.

You could get some image with single-wavelength red, green, and blue sensors, and it might be pretty interesting looking, but it wouldn't look much like what our eyes see.


That would treat monochromatic yellow light differently from yellow light made up of a combination of red and green, instead of treating them the same like the human eye does. The yellow of an actual banana would distort differently from the yellow coming from a picture of a banana displayed on an LCD.


Eureka! We can make high precision, flat optics for Low Pressure Sodium lights!

http://www.atmo.arizona.edu/students/courselinks/spring08/at...


>thinness is very important

For manufacturers but not for consumers, to an extent.


The most obvious application of single-frequency lenses is in fiber optic communication transceivers (a single-frequency laser, or a series of such lasers in Wavelength Division Multiplexing). From my understanding these are a big source of component cost and a common point of failure. See e.g. http://www.panduit.com/heiler/InstallInstructions/PN541.pdf


Yep. Still useful for things like fluorescence microscopy or astrophotography, though. There you're usually interested in just one specific wavelength, like the (Green) Fluorescent Protein emission for microscopes, or the Hydrogen Alpha emission in astrophotography. Rather than working with filter sliders, you'd essentially need to use a different lens for each wavelength though …


How about lasers? How about flat lasers, on chips, using laser diodes and flat lenses? Or camera pixels on the nano scale?


Why couldn't multiple patterns sensitive to different frequencies be printed on the same surface?


Unless you can couple them with very tiny prisms, space + getting enough light is the problem.


> I'm sure there are applications where control of monochromatic light is important, but I couldn't tell you what they are.

What about VR headsets? Flat lenses are good in that case, and you'd just have to manipulate 3 wavelengths.


Still more than 3 wavelengths, even with OLED screens. Check out the gray line here: http://www.displaymate.com/Spectra_4.html

Maybe close enough that it would work, but I'm also not sure whether you can do these metamaterial lenses for three wavelengths at once. If it's even possible, AFAIK it's not a solved problem.


Chromatic aberration is already a thing. Are you saying that these lenses only work for a certain wavelength and their performance drops off significantly when you go even slightly off-wavelength?


I can imagine a monochromatic microscope, where you can see the shape of tiny things, and you do not care about colors.


Full article: http://science.sciencemag.org.sci-hub.cc/content/352/6290/11...

These lenses are designed for a specific wavelength and, if I am reading the paper properly, only work with circularly polarised light. Essentially, for a given design wavelength and focal length there is a desired phase shift at each point on the lens. That phase shift is supplied by the titanium dioxide "nanofins" acting on the circularly polarised light, and the phase each fin imparts is determined by the angle at which it is rotated. This produces a pattern of fins rotated relative to one another, which can be seen in the images in the BBC article.
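
For concreteness, here is roughly what that looks like as a sketch: the phase profile a flat lens needs, and the fin rotation that would supply it via the geometric (Pancharatnam-Berry) phase. This is my own simplified illustration of the description above, not code or numbers from the paper:

  import numpy as np

  def nanofin_rotation_map(diameter_um, focal_um, wavelength_um, pitch_um):
      """Rotation angle (radians) for each nanofin on a square grid.

      phi(x, y) = (2*pi/lambda) * (f - sqrt(x^2 + y^2 + f^2)) is the phase
      profile needed to focus a normally incident plane wave at focal
      length f; a rotated half-wave-plate-like fin imparts a geometric
      phase of 2*theta on circularly polarised light, so theta = phi / 2.
      """
      r = diameter_um / 2
      coords = np.arange(-r, r + pitch_um, pitch_um)
      x, y = np.meshgrid(coords, coords)
      phi = (2 * np.pi / wavelength_um) * (focal_um - np.sqrt(x**2 + y**2 + focal_um**2))
      return (phi / 2) % np.pi    # fin orientation, modulo pi

  # Illustrative numbers only: a 240 um lens with a 90 um focal length,
  # designed for 532 nm light, fins placed on a 320 nm pitch.
  angles = nanofin_rotation_map(240.0, 90.0, 0.532, 0.32)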

While the lenses are designed for a target wavelength, they're not entirely useless at other wavelengths; they just have terrible chromatic aberration. In all other respects they seem to be excellent (especially for their size), but this makes them useless for most commercial applications.

To manufacture the lenses, they start with a substrate of silicon dioxide; not actually glass as said in the article, but quartz, like sand. This is coated with a resist, which is patterned by electron-beam lithography. The resist is "positive", meaning that the exposed part is removed when developed. A thin layer of titanium dioxide is then deposited using atomic layer deposition, a thin-film technique that deposits a single atomic layer at a time by introducing two different precursors alternately in sequence; the number of cycles determines the number of layers. With this they can deposit just enough TiO2 to fill the holes left in the resist, though it is also deposited on top of the remaining resist.

The TiO2 remaining on top of the leftover resist is etched off and the resist is then removed, leaving just the nanofins. The nanofins have a high "aspect ratio" (height to width), which makes them challenging to produce with most semiconductor fabrication techniques. They are, however, quite large compared to modern semiconductor features, on the order of hundreds of nanometers, which makes most things easier. Semiconductor fabrication normally uses photolithography; this work used electron-beam lithography. While electron-beam lithography can in principle produce smaller feature sizes than photolithography (due to the smaller wavelength of electrons), that was not needed here; rather, electron-beam lithography does not require the creation of a photomask and is consequently much more useful for small-scale prototyping.

Commercially producing these lenses at scale could potentially be done with photolithography, though there would be a large upfront cost due to the need to fabricate photomasks. Monocrystalline silicon substrates are standard and silicon-dioxide-on-silicon is extremely common; I suspect the lenses could be fabricated on such a SiO2-Si substrate and the silicon on the back face removed, leaving optically transparent lenses.


"Shapes on the surface of this lens are smaller than the wavelength of light involved: a thousandth of a millimetre."

A micron is 1000 nm and visible light is about 700 nm and down. Close but no cigar.


You have to love pop sci headlines with phrases like "...promises possible..."

I promise you, BBC's Roland Pease, that it's possible the sun won't rise tomorrow and Linus Torvalds will announce that he will be Microsoft's next CEO.


How is this different from what can be achieved using holographic optical elements, which can routinely implement lenses and other optics using the principle of holography, can be diffraction limited, and can produce feature sizes that are 1/n the wavelength of light (depending on the mastering process)?


Looks exciting on paper, but I wonder how much work is needed to bring this to mass production.


It looks like they had that in mind while they were designing it:

"But our lenses, being planar, can be fabricated in the same foundries that make computer chips. So all of a sudden the factories that make integrated circuits can make our lenses."

If this is true, I imagine old foundries could produce these since they probably don't need anything near the precision or consistency that current-gen chips require.


I suspect that demand is more likely to drive production scale than technology, but I am drawing an analogy with the economics of microprocessors and their fabrication.


I'm not sure if this is where and when metamaterials break into the mass market, but it's bound to happen sometime in the next 5-10 years; why not now?


Can it see proteins, cell walls or viruses?


Is this much different from Diffractive Optical Elements in use today?


I wonder if it can be used to improve VR optics and make HMDs cheaper.


Better for transparent AR lenses than the current holographic and waveguide approaches.


Gah. Maybe we should add a "[flash]" warning for links that seem to want Adobe Flash installed for important parts of the content. :(



