The new 3D camera will have 12,616 lenses

A normal camera has a single lens and produces flat, two-dimensional images, whether they are held in the hand or viewed on a computer screen. A camera with two lenses, or two cameras set a short distance apart, can produce interesting three-dimensional photos.

But what if a digital camera captured the world through thousands of tiny lenses, each one effectively a camera in its own right? We would still get a two-dimensional image, but we would also get something far more valuable: an electronic 'depth map' recording the distance from the camera to every object in the photo, a kind of super 3D.
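As a minimal sketch of what such a capture might look like in code, the snippet below pairs an ordinary color image with a per-pixel depth map; the shapes and values are illustrative assumptions, not the actual output format of the Stanford chip.

```python
import numpy as np

# Hypothetical capture from a depth-aware camera: an ordinary color
# image plus a per-pixel "depth map" giving distance to each point.
height, width = 480, 640
rgb = np.zeros((height, width, 3), dtype=np.uint8)        # the flat 2-D photo
depth_m = np.full((height, width), 3.5, dtype=np.float32) # distance in meters

# The depth map answers questions a flat photo cannot,
# e.g. how far away is the point at the center of the frame?
print(f"Distance at image center: {depth_m[height // 2, width // 2]:.2f} m")
```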

Electronics researchers at Stanford University, led by Professor Abbas El Gamal, are developing just such a camera, built around a 'multi-aperture image sensor'. They have shrunk the pixels on the sensor to 0.7 microns, several times smaller than the pixels in a standard camera, grouped the pixels into arrays of 256 each, and are preparing to place a tiny lens on top of each array.

'It's like having many cameras on a single chip,' said Keith Fife, a doctoral student working with El Gamal and electrical engineering professor H.-S. Philip Wong. In fact, if their first 3-megapixel chip were fitted with all of its micro-lenses, it would carry 12,616 of these tiny 'cameras'. Point it at a person and, besides taking an ordinary photo, it would record the distance from the camera to the subject's eyes, nose, ears, chin and so on. One of the most promising applications of this technology is face recognition for security purposes.
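The arithmetic behind that figure is straightforward; a quick check (using the 256-pixels-per-array figure the article gives) shows how 12,616 micro-lens arrays account for a roughly 3-megapixel chip.

```python
# Each micro-lens sits over an array of 256 pixels (e.g. a 16x16 block).
pixels_per_array = 256
num_arrays = 12_616

total_pixels = num_arrays * pixels_per_array
print(f"{total_pixels:,} pixels")  # 3,229,696 -> roughly a 3-megapixel chip
```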

There are many other applications for a depth-sensing camera, however: biological imaging, 3-D printing, building models of objects or people for virtual worlds, and 3-D models of buildings. The technology is expected to produce images in which nearly everything, near or far, is in focus. But it would also be possible to selectively defocus or remove parts of the image after shooting, using image-editing software.
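As a rough illustration of that kind of post-capture editing, the sketch below blurs only the pixels whose recorded depth exceeds a threshold, keeping the foreground sharp. It uses SciPy's Gaussian filter; the input image, depth map, and the 2-meter cutoff are assumptions for the example, not part of the published work.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def defocus_background(rgb, depth_m, cutoff_m=2.0, blur_sigma=5.0):
    """Blur every pixel farther than cutoff_m, leaving the foreground sharp."""
    blurred = np.stack(
        [gaussian_filter(rgb[..., c].astype(float), blur_sigma) for c in range(3)],
        axis=-1,
    )
    background = depth_m > cutoff_m        # True where the scene is "far"
    out = rgb.astype(float).copy()
    out[background] = blurred[background]  # swap in blurred background pixels
    return out.astype(np.uint8)
```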

Knowing the exact distance to an object could give robots better spatial vision than humans and allow them to perform complex tasks beyond their current abilities. 'People will come up with things to do with this technology,' Fife said. The three researchers published this work in the February issue of the Digest of Technical Papers. A multi-aperture camera could look and feel like an ordinary camera, or be even smaller than a mobile-phone camera. The mobile-phone angle matters because 'most of the cameras in the world are now in mobile phones.'

The main lens (also called the objective lens) of a conventional camera focuses its image directly onto the camera's image sensor, which records it. The objective lens of the multi-aperture camera instead focuses its image about 40 microns (a micron is one millionth of a meter) above the image sensor arrays. As a result, any point in the scene is captured by at least four of the chip's mini-cameras, producing overlapping views, each taken from a slightly different angle, just as the left and right human eyes see the world from slightly different positions.
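Those overlapping views are what make depth recovery possible: a point imaged by two neighboring mini-cameras shifts slightly between them, and that shift (the disparity) encodes distance through simple triangulation. Below is a minimal sketch of the standard pinhole-camera relation; the focal length and baseline numbers are made-up assumptions, not the chip's real parameters.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo triangulation: depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed example numbers: a 2-pixel shift between neighboring views,
# a 500-pixel focal length, and a 0.5 mm baseline between micro-lenses.
print(f"{depth_from_disparity(2.0, 500.0, 0.0005):.3f} m")  # 0.125 m
```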

The result is a detailed depth map, invisible in the image itself but stored electronically alongside it. It is a virtual model of the scene that can be manipulated on a computer. 'You can manipulate the photo in ways you can't with a normal two-dimensional photo,' Fife said. 'If you want to view the scene from a different distance, it will appear that way. You can also remove details.'
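A 'virtual model of the scene' can be as simple as back-projecting each pixel through the pinhole model into 3-D space. A hedged sketch follows; the camera intrinsics here are assumptions for illustration, not values from the researchers' paper.

```python
import numpy as np

def depth_to_point_cloud(depth_m, focal_px, cx, cy):
    """Back-project a depth map into an (N, 3) array of 3-D points (pinhole model)."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / focal_px   # horizontal offset scaled by depth
    y = (v - cy) * z / focal_px   # vertical offset scaled by depth
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Assumed intrinsics: 500 px focal length, principal point at image center.
cloud = depth_to_point_cloud(np.full((480, 640), 3.5), focal_px=500.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3): one 3-D point per pixel
```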

Alternatively, the sensor could be used with no objective lens at all. Placed very close to an object, each micro-lens would capture its own image without needing a main lens. One envisioned use is a tiny probe pressed against the brain of a laboratory rat to monitor the location of neural activity.

Picture 1: Test rig for the multi-aperture image-sensor chip. (Photo: Cicero)

Other scientists are pursuing similar depth maps through a variety of methods. Some use clever software that scans two-dimensional images for differences in edges, shadows or focus, from which object distances can be deduced. Others have tried cameras with multiple lenses, or prisms attached to a single lens. One method uses lasers; another tries to combine images taken from different angles; yet another shoots from a moving camera.

But El Gamal, Fife and Wong believe the multi-aperture sensor has important advantages. It is small and requires no lasers, no bulky camera equipment, no multiple exposures and no complicated matching methods. It also delivers excellent color quality. Each of the 256 pixels in an array records a single color. In a conventional camera, a red pixel may sit next to a green pixel, causing unintended crosstalk between the pixels that degrades color.
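The color-layout difference is easy to picture in code. In a conventional Bayer-mosaic sensor, filter colors alternate pixel to pixel, so charge leaking between neighbors mixes channels; in the multi-aperture design, every pixel within an array shares one filter. A purely illustrative toy sketch of the two layouts:

```python
import numpy as np

# Conventional Bayer mosaic: colors alternate pixel to pixel (RGGB tiling),
# so every red pixel borders green pixels and crosstalk mixes the channels.
bayer = np.tile(np.array([["R", "G"],
                          ["G", "B"]]), (8, 8))  # a 16x16 patch

# Multi-aperture layout: one color per 16x16 micro-lens array,
# so a pixel's neighbors within the array share its filter.
per_array = np.full((16, 16), "G")

print(bayer[0, :4], "vs", per_array[0, :4])
```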

The sensor can also exploit smaller pixels in a way an ordinary digital camera cannot, because lenses are approaching the optical limit on the smallest point they can resolve. Making pixels smaller than that point does not yield a sharper image; with the multi-aperture sensor, however, smaller pixels yield more depth information.
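To see why 0.7-micron pixels out-resolve a lens, compare the pixel pitch with the diffraction-limited spot size (Airy disk diameter of roughly 2.44 x wavelength x f-number). The f/2.8 aperture and 550 nm wavelength below are common illustrative assumptions, not the chip's specifications.

```python
wavelength_um = 0.55   # green light, 550 nm (assumed)
f_number = 2.8         # assumed lens aperture
pixel_um = 0.7         # the Stanford sensor's pixel pitch

airy_diameter_um = 2.44 * wavelength_um * f_number
print(f"Diffraction spot ~{airy_diameter_um:.2f} um vs {pixel_um} um pixel")
# ~3.76 um: several pixels fit inside the smallest spot the lens can form,
# so the extra pixels add depth information rather than image sharpness.
```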

The technology could also aid the push toward giant images from gigapixel cameras, which would have about 140 times as many pixels as today's 7-megapixel cameras. The first benefit of the Stanford approach is obvious: smaller pixels mean more pixels can fit on a chip. The second concerns chip yield. With a billion pixels on a chip, some are bound to be defective, leaving dead spots. But the overlapping views provided by the multi-aperture sensor compensate for one another when pixels fail.
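That redundancy can be sketched as a simple masked average: each scene point is seen by several mini-cameras, so a dead pixel in one view is filled in from the others. The four-view setup and mask handling below are assumptions made for the example.

```python
import numpy as np

def fuse_views(views, valid_masks):
    """Average overlapping views per pixel, ignoring dead (invalid) pixels."""
    views = np.asarray(views, dtype=float)        # shape (n_views, h, w)
    masks = np.asarray(valid_masks, dtype=float)  # 1 = working pixel, 0 = dead
    counts = masks.sum(axis=0)
    counts[counts == 0] = 1                       # avoid divide-by-zero holes
    return (views * masks).sum(axis=0) / counts

# Four overlapping views of the same scene; one has a dead pixel.
views = [np.full((2, 2), 100.0) for _ in range(4)]
masks = [np.ones((2, 2)) for _ in range(4)]
masks[0][0, 0] = 0                                # a broken pixel in view 0
print(fuse_views(views, masks)[0, 0])             # still 100.0
```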

The researchers are now studying the possibility of fabricating the micro-optics directly on the camera chip. Finished products might be cheaper than today's digital cameras, because the quality of the main lens would no longer be paramount. 'We believe it is possible to reduce the complexity of the main lens by shifting that complexity to the semiconductor,' Fife said.