Close your eyes and picture the iconic “bullet time” scene from “The Matrix”—the one where Neo, played by Keanu Reeves, dodges bullets in slow motion. Now imagine being able to witness the same effect, but instead of speeding bullets, you’re watching something that moves one million times faster: light itself.
Computer scientists from the University of Toronto have built an advanced camera setup that can visualize light in motion from any perspective, opening avenues for further inquiry into new types of 3D sensing techniques.
The researchers developed a sophisticated AI algorithm that can simulate what an ultra-fast scene—a pulse of light speeding through a pop bottle or bouncing off a mirror—would look like from any vantage point.
The work is published on the arXiv preprint server.
David Lindell, an assistant professor in the department of computer science in the Faculty of Arts & Science, says the feat requires the ability to generate videos where the camera appears to “fly” alongside the very photons of light as they travel.
“Our technology can capture and visualize the actual propagation of light with the same dramatic, slowed-down detail,” says Lindell. “We get a glimpse of the world at speed-of-light timescales that are normally invisible.”
The researchers believe the approach, which was recently presented at the 2024 European Conference on Computer Vision, can unlock new capabilities in several important research areas, including:

- Advanced sensing such as non-line-of-sight imaging, a method that allows viewers to "see" around corners or behind obstacles using multiple bounces of light;
- Imaging through scattering media such as fog, smoke, biological tissues or turbid water;
- 3D reconstruction, where understanding the behavior of light that scatters multiple times is critical.
In addition to Lindell, the research team included U of T computer science Ph.D. student Anagh Malik, fourth-year engineering science undergraduate Noah Juravsky and Professor Kyros Kutulakos, along with Stanford University Associate Professor Gordon Wetzstein and Stanford Ph.D. student Ryan Po.
The researchers’ key innovation lies in the AI algorithm they developed to visualize ultrafast videos from any viewpoint—a challenge known in computer vision as “novel view synthesis.”
Traditionally, novel view synthesis methods are designed for images or videos captured with regular cameras. The researchers extended this concept to data captured by an ultra-fast camera that records at timescales where light itself is visibly in motion, which posed unique challenges: their algorithm has to account for the finite speed of light and model how it propagates through a scene.
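To make that challenge concrete, here is a minimal, self-contained sketch (purely illustrative, not the authors' algorithm) of the bookkeeping such a renderer must do: when the viewpoint moves, the time at which a pulse of light appears at each pixel shifts by the change in propagation distance.

```python
# Illustrative sketch only -- not the authors' method. It shows why a
# light-speed-aware renderer must re-time every event when the camera moves:
# at picosecond timescales, a few meters of extra path length is a big delay.

C = 299_792_458.0  # speed of light in m/s

def euclidean(a, b):
    """Straight-line distance between two 3D points."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def arrival_time(emit_time, source, point, camera):
    """Time at which light emitted at `emit_time` from `source`, scattered
    once at `point`, reaches `camera` (all positions in meters)."""
    path_length = euclidean(source, point) + euclidean(point, camera)
    return emit_time + path_length / C

# Moving the camera 3 m farther from the scattering point delays the
# observed pulse by roughly 10 nanoseconds -- enormous at the picosecond
# resolution of an ultrafast camera.
t_near = arrival_time(0.0, (0, 0, 0), (1, 0, 0), (2, 0, 0))
t_far = arrival_time(0.0, (0, 0, 0), (1, 0, 0), (5, 0, 0))
print(f"arrival-time shift: {(t_far - t_near) * 1e9:.2f} ns")
```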
Through their work, the researchers produced moving-camera visualizations of light in motion, including light refracting through water, bouncing off a mirror and scattering off a surface. They also demonstrated how to visualize phenomena, predicted by Albert Einstein's theory of special relativity, that only appear at a significant fraction of the speed of light.

For example, they visualized the "searchlight effect," which makes objects appear brighter as they move toward an observer, and "length contraction," where fast-moving objects appear shortened along their direction of travel.
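Length contraction itself is standard special relativity, included here as background rather than a result of this paper: an object with rest length $L_0$ moving at speed $v$ appears contracted along its direction of motion to

$$
L = L_0 \sqrt{1 - \frac{v^2}{c^2}},
$$

so at $v = 0.9c$ an object would appear at only about 44% of its rest length.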
While current algorithms for processing ultra-fast videos typically focus on analyzing a single video from a single viewpoint, the researchers say their work is the first to extend this analysis to multi-view light-in-flight videos, allowing for the study of how light propagates from multiple perspectives.
“Our multi-view light-in-flight videos serve as a powerful educational tool, offering a unique way to teach the physics of light transport,” says Malik. “By visually capturing how light behaves in real time—whether refracting through a material or reflecting off a surface—we can get a more intuitive understanding of the motion of light through a scene.
“Additionally, our technology could inspire creative applications in the arts, such as filmmaking or interactive installations, where the beauty of light transport can be used to create new types of visual effects or immersive experiences.”
The research also holds significant potential for improving the LIDAR (Light Detection and Ranging) sensors used in autonomous vehicles. Typically, these sensors immediately process their measurements into 3D images. The researchers' work suggests instead storing the raw data, including detailed light-transport patterns, which could enable systems that outperform conventional LIDAR: resolving finer detail, seeing through obstacles and better identifying materials.
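As a rough illustration of what "raw data" means here (a hedged sketch: the bin width and peak-picking below are assumptions, not the team's pipeline), conventional LIDAR collapses each pixel's photon-arrival histogram to a single depth, discarding the later bounces that carry information about occluded geometry and materials.

```python
# Hedged sketch: conventional LIDAR processing reduces each pixel's
# photon-arrival histogram to one depth value. The 4 ps bin width and the
# argmax peak-finder are illustrative assumptions, not the authors' method.

C = 299_792_458.0   # speed of light in m/s
BIN_WIDTH = 4e-12   # assumed histogram time resolution: 4 picoseconds

def conventional_depth(histogram):
    """Collapse a photon-count histogram to a single depth in meters."""
    peak_bin = max(range(len(histogram)), key=histogram.__getitem__)
    round_trip_time = peak_bin * BIN_WIDTH
    return C * round_trip_time / 2.0  # halved: light travels out and back

# Toy histogram: a strong direct return (bin 3), then a weaker echo from a
# second bounce (bins 9-11). Conventional processing keeps only the first
# peak; the later echo is exactly the multi-bounce signal that multi-view
# light-in-flight analysis could exploit.
hist = [0, 1, 2, 50, 9, 3, 1, 0, 0, 6, 12, 5, 1]
print(f"depth from first peak: {conventional_depth(hist) * 100:.2f} cm")
```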
While the researchers’ project focused on visualizing how light moves through a scene from any direction, they note that this light carries “hidden information” about the shape and appearance of everything it touches. As they look to their next steps, they want to unlock this information by developing a method that uses multi-view light-in-flight videos to reconstruct the 3D geometry and appearance of an entire scene.
“This means we could potentially create incredibly detailed, three-dimensional models of objects and environments—just by watching how light travels through them,” Lindell says.
More information: Anagh Malik et al, Flying with Photons: Rendering Novel Views of Propagating Light, arXiv (2024). DOI: 10.48550/arxiv.2404.06493

Provided by University of Toronto