Tennis anyone? Researchers serve up advances in developing motion simulation technology’s next generation

SFU computing science assistant professor Jason Peng is leading a research team that is raising motion simulation technology to the next level, using the game of tennis to showcase just how real virtual athletes' moves can be. Alongside his colleagues, Peng has created a machine learning system capable of learning diverse, simulated tennis skills from broadcast video footage.

Peng's team will present its system and corresponding research paper at the 50th SIGGRAPH conference, the premier global conference for computer graphics and interactive techniques, in Los Angeles, California, from August 6-10.

When provided with a large dataset of video clips of professional tennis players, the system learns to perform complex tennis shots and realistically chains multiple shots together into extended two-player rallies, generating long-lasting matches with realistic racket and ball dynamics between two physically simulated characters.

While popular sports video games typically use motion capture technologies to produce high-quality animations, the results are limited to the behaviors recorded during specific capture sessions, restricting the diversity of movements a character can perform. Thanks to new developments from Peng and researchers at Stanford University, the University of Toronto, the Vector Institute and NVIDIA, it may soon become possible for video game designers to have their characters learn to move by mimicking footage of real-life athletes, and automatically simulate new variations and responsive behaviors on the fly.