Comparison between the original image (left); the image processed using non-learning computation (middle); and the image processed using the actor-model framework
© EPFL CC BY SA

Researchers have developed a machine learning approach to compressing image data with greater accuracy than learning-free computation methods, with applications for retinal implants and other sensory prostheses.

A major challenge in developing better neural prostheses is sensory encoding: transforming information captured from the environment by sensors into neural signals that the nervous system can interpret. Because the number of electrodes in a prosthesis is limited, this environmental input must be reduced in some way while still preserving the quality of the data transmitted to the brain.

Demetri Psaltis (Optics Lab) and Christophe Moser (Laboratory of Applied Photonics Devices) collaborated with Diego Ghezzi of the Hôpital ophtalmique Jules-Gonin - Fondation Asile des Aveugles (previously Medtronic Chair in Neuroengineering at EPFL) to apply machine learning to the problem of compressing image data with multiple dimensions, such as color and contrast. In their case, the compression goal was downsampling: reducing the number of pixels of an image to be transmitted via a retinal prosthesis.

"Downsampling for retinal implants is currently done by pixel averaging, which is essentially what graphics software does when you want to reduce a file size. But at the end of the day, this is a mathematical process; there is no learning involved," Ghezzi explains.
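To make the learning-free baseline concrete, here is a minimal sketch of block-averaging downsampling in NumPy. This illustrates the generic "pixel averaging" technique Ghezzi describes, not the actual implant pipeline; the function name and the assumption that the block size divides the image dimensions evenly are ours.

```python
import numpy as np

def downsample_by_averaging(image: np.ndarray, block: int) -> np.ndarray:
    """Reduce resolution by averaging non-overlapping block x block pixel groups.

    A sketch of the learning-free baseline: each output pixel is simply the
    mean of a block of input pixels. Works for grayscale (H, W) or
    color (H, W, C) arrays; any trailing rows/columns that do not fill a
    whole block are trimmed.
    """
    h, w = image.shape[:2]
    hb, wb = h // block, w // block
    trimmed = image[:hb * block, :wb * block]
    # Split the image into blocks, then average within each block.
    reshaped = trimmed.reshape(hb, block, wb, block, *trimmed.shape[2:])
    return reshaped.mean(axis=(1, 3))

# Example: a 4x4 image averaged down to 2x2.
small = downsample_by_averaging(np.arange(16, dtype=float).reshape(4, 4), 2)
```

Because every output pixel is a plain arithmetic mean, the mapping is fixed: no parameters are fitted to data, which is exactly the limitation the actor-model framework is meant to overcome.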