Probabilistic AI that knows how well it’s working

It's more important than ever for artificial intelligence to estimate how accurately it is explaining data. Despite their enormous size and power, today's AI systems routinely fail to distinguish between hallucination and reality. Autonomous driving systems can fail to perceive pedestrians and emergency vehicles right in front of them, with fatal consequences. Conversational AI systems confidently make up facts and, after training via reinforcement learning, often fail to give accurate estimates of their own uncertainty.

Working together, researchers from MIT and the University of California at Berkeley have developed a new method for building sophisticated AI inference algorithms that simultaneously generate collections of probable explanations for data and accurately estimate the quality of those explanations. The new method is based on a mathematical approach called sequential Monte Carlo (SMC). SMC algorithms are an established family of methods that have been widely used for uncertainty-calibrated AI: they propose probable explanations of data and track how likely or unlikely those explanations seem as more information arrives.
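The SMC loop sketched above (propose explanations, weight them against each new observation, and track an overall quality estimate) can be illustrated with a minimal bootstrap particle filter. This is a generic textbook sketch, not the researchers' new method; the toy model, parameter values, and function names below are all illustrative assumptions:

```python
import math
import random

random.seed(0)

def gauss_pdf(x, mu, sigma):
    """Density of a Gaussian with mean mu and std dev sigma, evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def smc_filter(observations, n_particles=500, proc_sigma=1.0, obs_sigma=0.5):
    """Minimal bootstrap particle filter for a 1-D Gaussian random walk.

    Each particle is one candidate explanation of the data; the running
    log-evidence is the algorithm's own estimate of how well the model
    as a whole is explaining the observations.
    """
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    log_evidence = 0.0
    for y in observations:
        # Propose: move each hypothesis forward under the process model.
        particles = [x + random.gauss(0.0, proc_sigma) for x in particles]
        # Weight: score each hypothesis against the new observation.
        weights = [gauss_pdf(y, x, obs_sigma) for x in particles]
        # Accumulate the marginal-likelihood estimate (the quality signal).
        log_evidence += math.log(sum(weights) / n_particles)
        # Resample: keep probable explanations, drop improbable ones.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return particles, log_evidence

obs = [0.2, 0.4, 0.9, 1.3, 1.1]
particles, log_z = smc_filter(obs)
est = sum(particles) / len(particles)  # posterior mean tracks the latest data
```

The key design point is that the same weights that drive resampling also yield `log_evidence`, so the filter reports how well it is working as a byproduct of inference rather than as a separate calibration step.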
myScience