© Thomas Hueber / GIPSA-Lab (CNRS/Université Grenoble Alpes / Grenoble INP). Examples of tongue-model animations of the GIPSA-Lab articulatory talking head, driven from ultrasound images using the Integrated Cascaded Gaussian Mixture Regression algorithm, for the [ata] (top) and [uku] (bottom) sequences.
A team of researchers at GIPSA-Lab (CNRS/Université Grenoble Alpes/Grenoble INP) and at INRIA Grenoble Rhône-Alpes has developed a system that can display the movements of our own tongues in real time. Captured with an ultrasound probe placed under the jaw, these movements are processed by a machine learning algorithm that drives an “articulatory talking head”. In addition to the face and lips, this avatar shows the tongue, palate and teeth, which are usually hidden inside the vocal tract. This “visual biofeedback” system, which should be easier to understand and therefore lead to better correction of pronunciation, could be used for speech therapy and for learning foreign languages. The work is published in the October 2017 issue of Speech Communication.

For a person with an articulation disorder, speech therapy relies partly on repetition exercises: the practitioner qualitatively analyzes the patient's pronunciations and explains orally, often with the help of drawings, how to place the articulators, particularly the tongue, of which patients are generally unaware. How effective the therapy is depends on how well the patient can integrate what they are told.
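To give a rough idea of the kind of mapping involved, the sketch below shows plain Gaussian Mixture Regression (GMR), the basic building block behind the Integrated Cascaded GMR named in the caption: a joint Gaussian mixture is trained on paired ultrasound-image features and articulatory control parameters, and the articulatory parameters are then predicted from new ultrasound features as a conditional expectation. This is a minimal illustration, not the authors' algorithm; the feature dimensions, component count, and variable names are assumptions chosen for the example.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture


def fit_joint_gmm(x_train, y_train, n_components=16, seed=0):
    """Fit a full-covariance GMM on concatenated [x, y] vectors."""
    joint = np.hstack([x_train, y_train])
    return GaussianMixture(n_components=n_components,
                           covariance_type="full",
                           random_state=seed).fit(joint)


def gmr_predict(gmm, x, dx):
    """Predict y = E[y | x] under the joint GMM; dx is the dimension of x."""
    # Responsibilities of each component for the observed x (x-marginal).
    log_resp = np.array([
        np.log(gmm.weights_[k]) +
        multivariate_normal.logpdf(x, gmm.means_[k, :dx],
                                   gmm.covariances_[k][:dx, :dx])
        for k in range(gmm.n_components)
    ])
    resp = np.exp(log_resp - log_resp.max())
    resp /= resp.sum()

    # Responsibility-weighted mixture of per-component linear regressions.
    y_pred = np.zeros(gmm.means_.shape[1] - dx)
    for k in range(gmm.n_components):
        mu_x, mu_y = gmm.means_[k, :dx], gmm.means_[k, dx:]
        S = gmm.covariances_[k]
        S_xx, S_yx = S[:dx, :dx], S[dx:, :dx]
        y_pred += resp[k] * (mu_y + S_yx @ np.linalg.solve(S_xx, x - mu_x))
    return y_pred


# Toy usage with random stand-ins for ultrasound-image features (x)
# and articulatory-model control parameters (y).
rng = np.random.default_rng(0)
x_train, y_train = rng.normal(size=(2000, 30)), rng.normal(size=(2000, 6))
gmm = fit_joint_gmm(x_train, y_train)
print(gmr_predict(gmm, x_train[0], dx=30))
```

In a real-time setting, a prediction of this kind would be computed for each incoming ultrasound frame and used to update the pose of the talking head's tongue model.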