An ultracompact camera based on metasurface technology could prove invaluable in hard-to-reach places
The cameras embedded in your smartphone are small enough to fit in your pocket while still packing high megapixel counts.
Such high-resolution micro-sized cameras can capture highly detailed photos of nature and the inside of the human body, and can be used in robotic sensing applications, all made possible by the increasing miniaturisation of electronics.
But such cameras have limited fields of view; as a result, they tend to produce fuzzy, distorted images when even greater detail is required.
To solve this, researchers at Princeton University and the University of Washington have developed an ultracompact camera the size of a coarse grain of salt that can produce crisp, full-colour images.
Despite its extremely small size, the newly developed device is powerful enough to match a much larger camera lens in picture quality.
Applications range from minimally invasive endoscopy to improved imaging with the help of robots constrained by size and weight.
Traditional cameras use a series of curved glass or plastic lenses to bend incoming light rays into focus.
The new optical system instead relies on a metasurface, a type of artificial 2D metamaterial that presents an effective surface impedance to incoming light.
Just half a millimetre wide, the metasurface is studded with nearly 1.6 million cylindrical posts, each with a unique geometry and each functioning like an optical antenna.
Made of silicon nitride, the metasurfaces are compatible with the standard semiconductor manufacturing methods used for computer chips, enabling mass production at a lower cost than the lenses in conventional cameras.
Machine-learning algorithms then process the signals generated as light interacts with these posts, reconstructing high-quality, wide-field-of-view images under natural light conditions.
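The key idea is that the metasurface's raw measurement is deliberately aberrated, and software undoes the blur afterwards. As a minimal sketch of that reconstruction step (a classical Wiener deconvolution, not the authors' actual neural network, and with a hypothetical Gaussian point spread function standing in for the real optics):

```python
import numpy as np

def wiener_deconvolve(measurement, psf, noise_power=1e-3):
    """Recover a sharp image from a blurred measurement given the
    system's point spread function (PSF), via a frequency-domain
    Wiener filter. Both arrays are 2D and the same shape."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # optical transfer function
    Y = np.fft.fft2(measurement)             # spectrum of the blurred image
    # Wiener filter: conj(H) / (|H|^2 + noise-to-signal ratio)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(W * Y))

# Toy demo: blur a synthetic scene with a Gaussian PSF, then recover it.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

scene = np.zeros((n, n))
scene[20:40, 20:40] = 1.0                    # a bright square
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                               np.fft.fft2(np.fft.ifftshift(psf))))

restored = wiener_deconvolve(blurred, psf)
```

In the published system, this linear filter is replaced by a learned neural model trained end to end with the optic, which handles spatially varying blur far better than a single global PSF can.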
Integrating Better Processing
To confront the challenge of capturing large-field-of-view RGB images, the researchers developed a computational simulator that automates the testing of different nano-antenna configurations.
The model efficiently calculated the image production capabilities of the metasurface with high accuracy.
“Because of the number of antennas and the complexity of their interactions with light, this type of simulation can use massive amounts of memory and time,” said Shane Colburn, affiliate assistant professor at the Department of Electrical & Computer Engineering, University of Washington and Director of System Design at Tunoptix, a Seattle-based company that is commercialising metasurface imaging technologies.
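A simulator like this scores candidate optics by predicting how each one images a point of light. As a toy sketch of the principle, assuming a simple scalar-diffraction (Fraunhofer) model rather than the full antenna-level simulation described above:

```python
import numpy as np

def psf_from_phase(phase, aperture):
    """Far-field intensity point spread function of a flat optic,
    computed from its spatial phase profile under the Fraunhofer
    approximation: PSF = |FFT(pupil)|^2."""
    pupil = aperture * np.exp(1j * phase)    # complex pupil function
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()                   # normalise to unit energy

# Compare two candidate phase profiles over a circular aperture:
# an ideal flat phase vs. one with a quadratic (defocus) aberration.
n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = ((x ** 2 + y ** 2) < (n // 3) ** 2).astype(float)

flat_psf = psf_from_phase(np.zeros((n, n)), aperture)
defocus_psf = psf_from_phase(-0.02 * (x ** 2 + y ** 2), aperture)
```

The defocused profile spreads energy over more pixels, so its peak is dimmer; an automated search can rank many such configurations by metrics like PSF sharpness without fabricating anything.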
The researchers compared the images produced by the new camera system with those from earlier metasurface cameras and from conventional compound optics (a series of six refractive lenses). Aside from slight blurring at the frame edges, the nano-sized camera's images were free of major distortions, making them comparable to those taken by a traditional lens 500,000 times larger in volume.
“It’s been a challenge to design and configure these little nano-structures to do what you want,” said Ethan Tseng, a computer science PhD student at Princeton.
“Although the approach to optical design is not new, this is the first system that uses a surface optical technology in the front end and neural-based processing in the back,” said Joseph Mait, a consultant at Mait-Optik and a former senior researcher and chief scientist at the U.S. Army Research Laboratory.
Beyond optimising image quality, the researchers want the camera to perform object detection and other types of sensing for medicine and robotics.
There are also plans to use such ultracompact imagers to create ‘surfaces as sensors’.
“We could turn individual surfaces into cameras that have an ultra-high-resolution, so you wouldn’t need three cameras on the back of your phone anymore, but the whole back of your phone would become one giant camera. We can think of completely different ways to build devices in the future,” said Felix Heide, assistant professor of computer science at Princeton.