Comparison of the cascaded GHz approach with Kinect-style approaches, demonstrated on an image of a key. From left to right: the original image, a Kinect-style approach, a GHz approach, and a stronger GHz approach.
For the past 10 years, the Camera Culture group at MIT's Media Lab has been developing innovative imaging systems - from a camera that can see around corners to one that can read text in closed books - using "time of flight," an approach that gauges distance by measuring the time it takes light projected into a scene to bounce back to a sensor.

In a new paper appearing in IEEE Access, members of the Camera Culture group present a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold. That's the type of resolution that could make self-driving cars practical. The new approach could also enable accurate distance measurements through fog, which has proven to be a major obstacle to the development of self-driving cars.

At a range of 2 meters, existing time-of-flight systems have a depth resolution of about a centimeter. That's good enough for the assisted-parking and collision-detection systems on today's cars. But as Achuta Kadambi, a joint PhD student in electrical engineering and computer science and media arts and sciences and first author on the paper, explains, "As you increase the range, your resolution goes down exponentially."
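The basic time-of-flight relation described above can be sketched numerically. This is an illustrative example, not the paper's method; the function names are chosen here for clarity:

```python
# Illustrative time-of-flight relations (not the paper's cascaded GHz method).
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_round_trip(t_seconds):
    """Distance to a target from the round-trip travel time: d = c * t / 2."""
    return C * t_seconds / 2.0

def timing_resolution_needed(depth_resolution_m):
    """Round-trip timing precision required for a given depth resolution: dt = 2 * dd / c."""
    return 2.0 * depth_resolution_m / C

# Resolving depth to 1 cm requires resolving round-trip times of roughly 67 picoseconds,
# which hints at why depth resolution degrades quickly with range in practice.
dt = timing_resolution_needed(0.01)
```

The halving in `distance_from_round_trip` reflects that the measured time covers the light's trip to the target and back.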