‘Trustworthy AI requires interdisciplinary collaboration’

Artificial intelligence and machine learning are key technologies for science, business, and society - but how transparent are their decisions? Andreas Krause, the 2021 recipient of the Rössler Prize and Chair of the ETH AI Center, shares his thoughts on the opportunities and challenges of trustworthy artificial intelligence.

Mr Krause, you are one of Europe's leading researchers in machine learning and artificial intelligence (AI). Are there tasks that you used to do yourself a decade ago but now delegate to intelligent computer programs?

Behind the scenes, there are actually several very useful AI and machine learning technologies that make my day-to-day work easier. Searching the academic literature is greatly aided by recommendation engines, and speech recognition and language translation can be automated to a valuable extent today. That wasn't yet possible ten years ago.

Can artificial intelligence understand problems that humans have not yet understood?

It's hard to define what 'understanding' means exactly. Machines are capable of efficiently extracting complex statistical patterns from large data sets and exploiting them computationally. That doesn't mean in any way that they 'understand' them. Nevertheless, existing machine learning algorithms are still very useful for specialised tasks. It remains a uniquely human ability, however, to generalise knowledge across domains and to quickly grasp and solve very different types of complex problems. We are very far from achieving this in artificial intelligence.

What's your take on AI research at ETH Zurich?