Stanford engineers enable simple cameras to see in 3D

Researchers devised a high-frequency, low-power, compact optical device that allows virtually any digital camera to perceive depth.


This lab-based prototype LiDAR system built by Stanford University researchers captured megapixel-resolution depth maps using a commercially available digital camera. | Credit: Andrew Brodhead

Standard image sensors, like the billion or so already installed in practically every smartphone in use today, capture light intensity and color. Relying on common, off-the-shelf sensor technology – known as CMOS – these cameras have grown smaller and more powerful by the year and now offer tens-of-megapixel resolution. But they still see in only two dimensions, capturing images that are flat, like a drawing – until now.

Researchers at Stanford University have created a new approach that allows standard image sensors to see light in three dimensions. That is, these common cameras could soon be used to measure the distance to objects.

The engineering possibilities are dramatic. Measuring distance between objects with light is currently possible only with specialized and expensive LiDAR – short for “light detection and ranging” – systems. If you’ve seen a self-driving car tooling around, you can spot it right off by the hunchback of technology mounted to the roof. Most of that gear is the car’s LiDAR crash-avoidance system, which uses lasers to determine distances between objects.

LiDAR is like radar, but with light instead of radio waves. By beaming a laser at objects and measuring the light that bounces back, it can tell how far away an object is, how fast it’s traveling, whether it’s moving closer or farther away and, most critically, it can calculate whether the paths of two moving objects will intersect at some point in the future.
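As a rough sketch of that ranging principle (not any particular system's implementation), the distance and speed calculations look like this; the 200 ns echo time and function names are illustrative:

```python
# A minimal sketch of pulsed ("direct time-of-flight") ranging:
# distance follows from how long a light pulse takes to bounce back.
C = 299_792_458.0  # speed of light in m/s

def distance_m(round_trip_time_s: float) -> float:
    """Target distance from the measured round-trip time of a pulse."""
    return C * round_trip_time_s / 2.0  # halved: the light travels out and back

def radial_speed_m_s(d1_m: float, d2_m: float, dt_s: float) -> float:
    """Closing (+) or receding (-) speed from two range measurements dt apart."""
    return (d1_m - d2_m) / dt_s

if __name__ == "__main__":
    print(distance_m(200e-9))                 # a 200 ns echo -> a target ~30 m away
    print(radial_speed_m_s(30.0, 29.0, 0.1))  # 1 m closer in 0.1 s -> 10 m/s closing
```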

“Existing LiDAR systems are big and bulky, but someday, if you want LiDAR capabilities in millions of autonomous drones or in lightweight robotic vehicles, you’re going to want them to be very small, very energy efficient, and offering high performance,” explained Okan Atalar, a doctoral candidate in electrical engineering at Stanford and the first author on the new paper in the journal Nature Communications that introduces this compact, energy-efficient device that can be used for LiDAR.

For engineers, the advance offers two intriguing opportunities. First, it could enable megapixel-resolution LiDAR – a threshold not possible today. Higher resolution would allow LiDAR to identify targets at greater range. An autonomous car, for example, might be able to distinguish a cyclist from a pedestrian from farther away – sooner, that is – and allow the car to more easily avoid an accident. Second, any image sensor available today, including the billions in smartphones now, could capture rich 3D images with minimal hardware additions.

Changing how machines see

One approach to adding 3D imaging to standard sensors is to add a light source (easily done) and a modulator (not so easily done) that turns the light on and off very quickly, millions of times every second. By measuring the variations in the light, engineers can calculate distance. Existing modulators can do this, but they require relatively large amounts of power – so large, in fact, that they are entirely impractical for everyday use.
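Here is a minimal sketch of how "measuring the variations in the light" yields a distance, using the standard amplitude-modulated (indirect time-of-flight) relation; the 10 MHz modulation frequency and names are illustrative, not values from the paper:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_phase_m(phase_rad: float, f_mod_hz: float) -> float:
    """Light modulated at f_mod returns phase-shifted by 2*pi*f_mod*(2*d/c);
    inverting that relation recovers the distance d."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def unambiguous_range_m(f_mod_hz: float) -> float:
    """Phase wraps at 2*pi, so ranges are unambiguous only out to c/(2*f_mod)."""
    return C / (2.0 * f_mod_hz)

if __name__ == "__main__":
    f_mod = 10e6  # light switched on and off 10 million times per second
    print(distance_from_phase_m(math.pi / 2, f_mod))  # ~3.75 m
    print(unambiguous_range_m(f_mod))                 # ~15 m
```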

The solution that the Stanford team, a collaboration between the Laboratory for Integrated Nano-Quantum Systems (LINQS) and ArbabianLab, came up with relies on a phenomenon known as acoustic resonance. The team built a simple acoustic modulator using a thin wafer of lithium niobate – a transparent crystal that is highly desirable for its electrical, acoustic and optical properties – coated with two transparent electrodes.

Critically, lithium niobate is piezoelectric. That is, when electricity is introduced through the electrodes, the crystal lattice at the heart of its atomic structure changes shape. It vibrates at very high, very predictable and very controllable frequencies. And when it vibrates, lithium niobate strongly modulates light – with the addition of a couple of polarizers, this new modulator effectively turns light on and off several million times a second.

From left to right: Amir Safavi-Naeini, Okan Atalar and Amin Arbabian, who were involved in developing a device that allows standard image sensors to see light in 3D. | Credit: William Meng

“What’s more, the geometry of the wafers and the electrodes defines the frequency of light modulation, so we can fine-tune the frequency,” Atalar says. “Change the geometry and you change the frequency of modulation.”
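As a back-of-envelope illustration of that geometry-to-frequency link, the textbook thickness-resonance formula for a plate, f = v / (2t), already puts a thin wafer in the megahertz range; the acoustic velocity and thickness below are round illustrative numbers, not the paper's:

```python
def thickness_resonance_hz(acoustic_velocity_m_s: float, thickness_m: float) -> float:
    """Fundamental thickness-mode resonance of a plate: f = v / (2 * t),
    so a thinner wafer resonates, and hence modulates, at a higher frequency."""
    return acoustic_velocity_m_s / (2.0 * thickness_m)

if __name__ == "__main__":
    # Roughly 7,000 m/s for an acoustic wave in lithium niobate and a
    # 0.5 mm wafer put the resonance at 7 MHz: "several million times a second."
    print(thickness_resonance_hz(7_000.0, 0.5e-3))  # 7,000,000 Hz
```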

In technical terms, the piezoelectric effect creates an acoustic wave through the crystal that rotates the polarization of light in desirable, tunable and usable ways. This key technical departure is what enabled the team's success. A polarizing filter carefully placed after the modulator then converts this rotation into intensity modulation – making the light brighter and darker – effectively turning the light on and off millions of times a second.
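A toy model of that rotation-to-intensity conversion uses Malus's law, under which a polarizer transmits intensity proportional to the squared cosine of the polarization angle; the sinusoidal rotation profile and full 90-degree swing assumed here are for illustration only:

```python
import math

def transmitted_intensity(i0: float, rotation_rad: float) -> float:
    """Malus's law: a polarizer passes i0 * cos(theta)^2 of the light."""
    return i0 * math.cos(rotation_rad) ** 2

def modulated_intensity(i0: float, f_mod_hz: float, t_s: float,
                        max_rotation_rad: float = math.pi / 2) -> float:
    """The acoustic wave rotates the polarization sinusoidally in time; the
    polarizer turns that rotation into brightness, i.e. on/off modulation."""
    theta = max_rotation_rad * math.sin(2.0 * math.pi * f_mod_hz * t_s)
    return transmitted_intensity(i0, theta)

if __name__ == "__main__":
    f = 7e6  # a 7 MHz drive, for illustration
    for k in range(5):
        t = k / (4.0 * f)  # sample at quarter-period steps
        print(f"t = {t:.2e} s  ->  I = {modulated_intensity(1.0, f, t):.3f}")
        # intensity swings 1.000, 0.000, 1.000, ... : light on, off, on
```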

“While there are other ways to turn the light on and off,” Atalar says, “this acoustic approach is preferable because it is extremely energy efficient.”

Practical outcomes

Best of all, the modulator’s design is simple and integrates into a proposed system that uses off-the-shelf cameras, like those found in everyday cellphones and digital SLRs. Atalar and advisor Amin Arbabian, associate professor of electrical engineering and the project’s senior author, think it could become the basis for a new type of compact, low-cost, energy-efficient LiDAR – “standard CMOS LiDAR,” as they call it – that could find its way into drones, extraterrestrial rovers and other applications.

The impact of the proposed modulator is enormous: it has the potential to add the missing third dimension to any image sensor, they say. To prove it, the team built a prototype LiDAR system on a lab bench that used a commercially available digital camera as a receiver. The authors report that their prototype captured megapixel-resolution depth maps while requiring only small amounts of power to drive the optical modulator.

Better yet, Atalar says that with additional refinements the team has since cut the modulator's energy consumption to at least a tenth of the already-low figure reported in the paper, and they believe a several-hundredfold further reduction is within reach. If that happens, a future of small-scale LiDAR with standard image sensors – and 3D smartphone cameras – could become a reality.

Editor’s Note: This article was republished from Stanford University’s News Service.

Stanford AI Camera Offers Faster, More Efficient Image Classification


The image recognition technology that underlies today’s autonomous cars and aerial drones depends on artificial intelligence: the computers essentially teach themselves to recognize objects like a dog, a pedestrian crossing the street or a stopped car. The problem is that the computers running the artificial intelligence algorithms are currently too large and slow for future applications.

Now, researchers at Stanford University have devised a new type of artificially intelligent camera system that can classify images faster and more energy efficiently, and that could one day be built small enough to be embedded in the devices themselves, something that is not possible today. The work was published August 17 in Scientific Reports.

“That autonomous car you just passed has a relatively huge, relatively slow, energy intensive computer in its trunk,” said Gordon Wetzstein, an assistant professor of electrical engineering at Stanford, who led the research. Future applications will need something much faster and smaller to process the stream of images, he said.

Consumed by computation

Wetzstein and Julie Chang, a graduate student and first author on the paper, took a step toward that technology by marrying two types of computers into one, creating a hybrid optical-electrical computer designed specifically for image analysis.

The first layer of the prototype camera is a type of optical computer, which does not require the power-intensive mathematics of digital computing. The second layer is a traditional digital electronic computer.

A Stanford-designed hybrid optical-electrical computer designed for image analysis could be ideal for autonomous vehicles. (Credit: Andrey Suslov / Getty Images)

The optical computer layer operates by physically preprocessing image data, filtering it in multiple ways that an electronic computer would otherwise have to do mathematically. Since the filtering happens naturally as light passes through the custom optics, this layer operates with zero input power. This saves the hybrid system a lot of time and energy that would otherwise be consumed by computation.

“We’ve outsourced some of the math of artificial intelligence into the optics,” Chang said.

The result is profoundly fewer calculations, fewer calls to memory and far less time to complete the process. Having leapfrogged these preprocessing steps, the remaining analysis proceeds to the digital computer layer with a considerable head start.

“Millions of calculations are circumvented and it all happens at the speed of light,” Wetzstein said.
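As a toy sketch of that division of labor (not Chang and Wetzstein's actual architecture), the "optical" first layer can be modeled as a fixed convolution that costs nothing electrically in the real device, with only a small digital classifier after it; the filters, sizes and weights here are illustrative:

```python
import numpy as np

def optical_layer(image: np.ndarray, kernels: list) -> np.ndarray:
    """Fixed convolutions standing in for the zero-power optical preprocessing."""
    h, w = image.shape
    maps = []
    for k in kernels:
        kh, kw = k.shape
        fmap = np.empty((h - kh + 1, w - kw + 1))
        for i in range(fmap.shape[0]):
            for j in range(fmap.shape[1]):
                fmap[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
        maps.append(fmap)
    return np.stack(maps)

def electronic_layer(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """The digital stage: a single linear classifier over the optical features."""
    return weights @ features.reshape(-1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((8, 8))
    edge_h = np.array([[1.0, 1.0], [-1.0, -1.0]])  # horizontal-edge filter
    edge_v = np.array([[1.0, -1.0], [1.0, -1.0]])  # vertical-edge filter
    feats = optical_layer(image, [edge_h, edge_v])  # "done by the optics"
    weights = rng.standard_normal((3, feats.size))  # 3 classes, untrained
    print(electronic_layer(feats, weights))         # raw class scores
```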

Rapid decision-making

In speed and accuracy, the prototype rivals existing electronic-only computing processors that are programmed to perform the same calculations, but with substantial computational cost savings.

While their current prototype, arranged on a lab bench, would hardly be classified as small, the researchers said their system can one day be miniaturized to fit in a handheld video camera or an aerial drone.

In both simulations and real-world experiments, the team used the system to successfully identify airplanes, automobiles, cats, dogs and more within natural image settings.

“Some future version of our system would be especially useful in rapid decision-making applications, like autonomous vehicles,” Wetzstein said.

In addition to shrinking the prototype, Wetzstein, Chang and colleagues at the Stanford Computational Imaging Lab are now looking at ways to make the optical component do even more of the preprocessing. Eventually, their smaller, faster technology could replace the trunk-size computers that now help cars, drones and other technologies learn to recognize the world around them.

Editor’s Note: This article was reprinted from Stanford News.
