My understanding of these flat metamaterial lenses is that they're fine-tuned for a single wavelength of light; you can't put one in a camera and expect it to work like a glass lens did. I'm sure there are applications where control of monochromatic light is important, but I couldn't tell you what they are.
It's an awfully important detail to completely omit from an article.
>Diffraction-limited focusing is demonstrated at wavelengths of 405, 532, and 660 nm with corresponding efficiencies of 86, 73, and 66%.
...but I'm not clear on whether that is from a single lens or if they constructed different lenses for each color. (Those wavelengths correspond to violet, green, and red light fwiw) EDIT: the full paper makes it clear this is three separate lenses, so yeah, you're right.
For smartphones, where thinness is very important but area is less so, it would not be unprecedented to make an array of 4 monochromatic cameras and rectify and combine the images computationally.
Unfortunately, that is not how light and perception works.
There are thousands/millions¹ of separate visible light frequencies. Our eyes and brains take all of that in and do an enormously lossy mapping of it to 3 perceived colors.
If you only record 3 of those thousands/millions of frequencies, you will lose 99.9% of the light, and mostly make black photos.
¹ depending on how wide the frequency interval considered monochromatic is.
Cone cells have fairly peaked frequency responses, and due to the random projections caused by the cascading, this is sufficient to fully reconstruct the signal. In other words, the converse of the statement is true: you can retain 99.9% of the perceptual information using 3 sensors; all you need is to perceive over time plus some randomness in the sensor placement.
They're not really that peaked; the M and L cones largely overlap, even, and are spread over at least a third of the entire visible spectrum. Pictures at: https://en.wikipedia.org/wiki/Cone_cell
You can maybe retain 99.9% of the color information humans can perceive, but that is a very small part of what's in the visible spectrum signal.
Maybe this example will clarify: Our eyes cannot tell the difference between monochromatic green light (540nm) and a mix of blue (470nm) and yellow (580nm) light.
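To make that concrete, here's a tiny numerical sketch in Python (with made-up Gaussian curves standing in for the real S/M/L cone sensitivities, so the numbers are purely illustrative): it builds two physically different spectra that produce the same three cone responses, which is exactly the sense in which the eye "can't tell the difference".

    import numpy as np

    wl = np.arange(400, 701)                        # wavelengths, nm

    def cone(peak, width):
        # toy Gaussian stand-in for a cone sensitivity curve
        return np.exp(-((wl - peak) / width) ** 2)

    cones = np.vstack([cone(445, 30), cone(540, 45), cone(565, 50)])  # 3 x N "S, M, L" matrix

    flat = np.ones(wl.shape)                        # spectrum #1: flat "white" light

    # Spectrum #2 adds a component the cones are blind to: a direction from the
    # null space of the 3 x N cone matrix (there are N - 3 such directions).
    _, _, vt = np.linalg.svd(cones)
    invisible = vt[3]
    bumpy = flat + 0.9 * invisible / np.abs(invisible).max()   # different shape, still non-negative

    print(cones @ flat)    # three cone responses...
    print(cones @ bumpy)   # ...the same to within floating-point noise, despite a different spectrum

Any spectral detail lying in that null space simply never makes it past the three cone responses.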
That only says something about the display to eyes end of things. If you record only a subset of the visible spectrum the image can end up wrong. You might actually not get any color at all in the worst case.
The original poster was much more apt than your explanation. CMOS sensors in modern cameras actually have arrangements exactly like that on a pixel level. They use filters for each pixel, doubling up on green. With something like this you might be able to combine the filter and the lens.
Your explanation of 'thousands of frequencies' is somewhere between misleading and incorrect. Electromagnetic radiation spans a continuous range of frequencies. Our eyes' 'sensors' are of course not sensitive to exact wavelengths; they are sensitive to a range. Some are more sensitive to red, falling off through orange, yellow, and green (as the frequency of light goes up). We are most sensitive to colors in the green part of the spectrum, and least sensitive to blue.
Do you know why the sky is blue? It is because of Rayleigh scattering by the air molecules themselves: shorter (bluer) wavelengths are scattered far more strongly than longer ones, while most of the red and green light passes straight through.
I think the difference is that a modern camera filters in wide ranges.
So blue is maybe 400-500nm, green 500-600nm and red 550-650nm.
But these new lenses only give sharp images in much narrower bands. If it has blue at 448-449nm, green 540-541nm and red 580-581nm, most of the spectrum will be lost.
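Back-of-the-envelope version of that, assuming a flat spectrum and using the illustrative band edges above: each photosite sits behind one filter, so compare how much of the visible band a single photosite actually sees.

    # Fraction of a flat 400-700 nm spectrum reaching a photosite behind each filter type.
    visible_span = 700 - 400        # 300 nm

    wide_band = 600 - 500           # a Bayer-style green filter, ~100 nm wide
    narrow_band = 541 - 540         # a 1 nm band around a metalens design wavelength

    print(wide_band / visible_span)     # ~0.33 -> roughly a third of the photons
    print(narrow_band / visible_span)   # ~0.003 -> 99.7% of the light never reaches the sensor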
It _is_ possible to do trichromatic color photography with sets of single-color filters. In fact, the earliest color photography was done this way. Technicolor films were shot in a camera that recorded three monochromatic strips simultaneously. (it is true that the filters were generally not strictly a single color frequency, but a narrow band)
The difference is that they were using film which was sensitive over the entire visible spectrum with color filters that allowed a broad range of wavelengths that are all perceived as the same color. If you had a smartphone camera filtered with a 1 nm bandpass filter for a particular wavelength, you're going to get so little light that it will be like taking pictures in the dark.
The filter approach is still used in most cameras (Bayer filters) but a very narrow wavelength bandpass would not be appropriate for a consumer camera.
I don't mean the images displayed on a panel; I mean the light from the panel if it were used to illuminate something else. Because you have only three narrow wavelengths of light, you lose 99% of the color information contained in the illuminated object's absorption spectrum.
Actual LED lamps work around this issue by using phosphors to broaden the spectrum.
This quality of light is measured by the Color Rendering Index; the color shifts you see under narrowband light sources are a consequence of metamerism.
The images displayed on a panel do not suffer this issue, because our eyes can detect only 3 "dimensions" of color; hence 3 narrowband illuminants suffice. (And in fact narrowband is necessary for a broad color gamut.) It is only when reflection/absorption comes into play that metamerism matters.
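A toy sketch of that reflection/absorption point, with completely made-up numbers: under a three-line "RGB laser" illuminant, the only thing that matters about a surface is its reflectance at those three wavelengths, so everything else in its absorption spectrum is lost.

    # Per-wavelength reflectance of some hypothetical surface (illustrative values only)
    surface_reflectance = {450: 0.20, 540: 0.80, 580: 0.95, 640: 0.10}

    # Narrowband illuminant: three laser lines and nothing in between
    laser_lines = {450: 1.0, 540: 1.0, 640: 1.0}

    reflected_power = {nm: laser_lines[nm] * surface_reflectance[nm] for nm in laser_lines}
    print(reflected_power)   # the strong 580 nm reflectance never shows up under this light

A broadband illuminant would have sampled the whole reflectance curve, which is what CRI tries to quantify.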
You could get some image with single-wavelength red, green, and blue sensors, and it might be pretty interesting looking, but it wouldn't see the world much the way our eyes do.
That would treat monochromatic yellow light differently from yellow light made up of a combination of red and green, instead of treating them the same like the human eye does. The yellow of an actual banana would distort differently from the yellow coming from a picture of banana displayed on an LCD.
The most obvious application of single-frequency lenses is in fiber optic communication transceivers (a single-frequency laser, or a series of such lasers in Wavelength Division Multiplexing). From my understanding these are a big source of component cost and failure rates. See e.g. http://www.panduit.com/heiler/InstallInstructions/PN541.pdf
Yep. Still useful for things like fluorescence microscopy or astrophotography though. There you're usually interested in just a specific wavelength, like the (Green) Fluorescent Protein emissions for microscopes, or the Hydrogen Alpha emissions in astrophotography. Rather than working with filter sliders, you'd essentially need to use a different lens for each wavelength though …
Maybe close enough that it would work, but I'm also not sure whether you can do these metamaterial lenses for three wavelengths at once. If it's even possible, AFAIK it's not a solved problem.
Chromatic aberration is already a thing. Are you saying that these lenses only work for a certain wavelength and their performance drops off significantly when you go even slightly off-wavelength?
These lenses are designed for a specific wavelength, and if I am reading the paper properly, only work with circularly polarised light. Essentially, for a given design wavelength and focal length there is a desired phase shift at each point on the lens. This phase shift is caused by the titanium dioxide "nanofins" which rotate the circularly polarised light to produce the desired phase shift. The phase shift is determined by the angle at which each fin is rotated. This produces a pattern of fins rotated relative to one another, which can be seen in the images of the BBC article.
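If it helps, here is a minimal sketch of how the fin pattern falls out of that description, assuming the standard hyperbolic lens phase profile and the geometric-phase rule that rotating a fin by theta imparts a phase shift of 2*theta on circularly polarised light. The wavelength, focal length, fin pitch, and lens size below are illustrative numbers, not the paper's.

    import numpy as np

    wavelength = 532e-9       # design wavelength, m (illustrative)
    focal_length = 200e-6     # focal length, m (illustrative)
    pitch = 400e-9            # centre-to-centre fin spacing, m (illustrative)
    n = 101                   # fins per side, i.e. a tiny demo lens

    coords = (np.arange(n) - n // 2) * pitch
    x, y = np.meshgrid(coords, coords)

    # Phase required at each fin so that light from every point arrives at the focus in phase
    phase = (2 * np.pi / wavelength) * (focal_length - np.sqrt(x**2 + y**2 + focal_length**2))

    # Geometric (Pancharatnam-Berry) phase: a fin rotated by theta shifts the phase by 2*theta,
    # so each fin's rotation is half the required phase, wrapped to [0, pi)
    fin_rotation = (phase / 2) % np.pi

    print(fin_rotation[n // 2, n // 2:n // 2 + 5])   # rotation angles (radians) along one radius

That rotation map is essentially the pattern of differently-angled fins visible in the article's images.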
While the lenses are designed for a target wavelength, they're not entirely useless at other wavelengths, they just have terrible chromatic aberration. In all other respects they seem to be excellent (especially for their size), but this makes them useless for most commercial applications.
To manufacture the lenses, they start with a substrate of silicon dioxide; not actually glass as said in the article, but quartz, like sand. This is coated with a resist, which is patterned by electron-beam lithography. The resist is "positive", meaning that the exposed part is removed when developed. A thin layer of titanium dioxide is then deposited using atomic layer deposition, a thin-film deposition technique that allows the deposition of a single atomic layer at a time. This is accomplished by introducing two different precursors alternately, one at a time, in sequence; the number of cycles determines the number of layers. With this they can deposit just enough TiO2 to fill the holes left in the resist, though it is also deposited on top of the unexposed resist.
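Rough arithmetic on the "one layer per cycle" point, assuming an illustrative ~0.06 nm of TiO2 growth per ALD cycle and ~100 nm wide resist openings (neither number is from the paper):

    # A conformal ALD film grows from both sidewalls of an opening, so a gap of
    # width w closes after roughly w/2 of deposited film.
    opening_width = 100e-9        # assumed width of a resist opening, m
    growth_per_cycle = 0.06e-9    # assumed TiO2 growth per ALD cycle, m

    cycles = (opening_width / 2) / growth_per_cycle
    print(round(cycles))          # ~833 cycles, each adding one thin TiO2 layer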
The TiO2 remaining on top of the unexposed resist is etched off and the resist itself is then removed, leaving just the nanofins. The nanofins have a high "aspect ratio", meaning height-to-width, which makes them challenging to produce using most semiconductor fabrication techniques. They are however quite large compared to modern semiconductors, on the order of hundreds of nanometers, which makes most things easier. Semiconductor fabrication normally uses photolithography; this used electron-beam lithography. While electron-beam lithography can in principle produce smaller feature sizes than photolithography (due to the smaller wavelength of electrons), that was not needed for this application; rather, electron-beam lithography does not require the creation of a photomask and is consequently much more useful for small-scale prototyping.
Commercially producing these lenses at scale could potentially be done with photolithography, though there would be a large upfront cost due to the need to fabricate photomasks. Monocrystalline silicon substrates are standard and silicon-dioxide-on-silicon is extremely common; I suspect the lenses could be fabricated on such a SiO2-Si substrate and the silicon on the back face removed, leaving optically transparent lenses.
You have to love pop sci headlines with phrases like "...promises possible..."
I promise you, BBC's Roland Pease, that it's possible the sun won't rise tomorrow and Linus Torvalds will announce that he will be Microsoft's next CEO.
How is this different from what can be achieved using holographic optical elements, which can routinely make optical lenses and materials using the principle of holography and can be diffraction-limited, producing feature sizes that are 1/n the wavelength of light (depending on the mastering process)?
It looks like they had that in mind while they were designing it:
"But our lenses, being planar, can be fabricated in the same foundries that make computer chips. So all of a sudden the factories that make integrated circuits can make our lenses."
If this is true, I imagine old foundries could produce these since they probably don't need anything near the precision or consistency that current-gen chips require.
I suspect that demand is more likely to drive production scale than technology, but I am drawing an analogy with the economics of microprocessors and their fabrication.
I'm not sure if this is where and when metamaterials break into the mass market, but it's bound to happen sometime in the next 5-10 years, why not now?