AI model speeds up high-resolution computer vision

A machine-learning model for high-resolution computer vision could enable computationally intensive vision applications, such as autonomous driving or medical image segmentation, on edge devices. Pictured is an artist's interpretation of the autonomous driving technology. Credits: Image: MIT News
The system could improve image quality in video streaming or help autonomous vehicles identify road hazards in real time.

An autonomous vehicle must rapidly and accurately recognize objects that it encounters, from an idling delivery truck parked at the corner to a cyclist whizzing toward an approaching intersection. To do this, the vehicle might use a powerful computer vision model to categorize every pixel in a high-resolution image of this scene, so it doesn't lose sight of objects that might be obscured in a lower-quality image.

But this task, known as semantic segmentation, is complex and requires a huge amount of computation when the image has high resolution. Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have developed a more efficient computer vision model that vastly reduces the computational complexity of this task.
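To make the per-pixel idea concrete, here is a minimal toy sketch (not the MIT model) of how a semantic segmentation output is typically consumed: a network produces a score for every class at every pixel, and taking the argmax over the class axis yields a label map. The class names and array shapes below are illustrative assumptions.

```python
import numpy as np

# Hypothetical label set for the driving scene described above.
CLASSES = ["road", "vehicle", "cyclist"]

def segment(logits: np.ndarray) -> np.ndarray:
    """Map a (num_classes, H, W) per-pixel score tensor to an (H, W) label map,
    assigning each pixel the class with the highest score."""
    return logits.argmax(axis=0)

# Stand-in for a network's output: random per-pixel scores for a tiny 2x2 image.
rng = np.random.default_rng(0)
logits = rng.normal(size=(len(CLASSES), 2, 2))

labels = segment(logits)
print(labels.shape)  # one class index per pixel: (2, 2)
```

The computational burden the article refers to comes from producing `logits` in the first place: at high resolution, H and W are large, and the network must score every class at every one of those pixels.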