AI technology is increasingly used to open up new horizons for scientists and researchers. At the University of Chicago, researchers are using it for everything from scanning the skies for supernovae to finding new drugs from millions of potential combinations and developing a deeper understanding of the complex phenomena underlying the Earth’s climate.
Today’s AI commonly works by starting from massive data sets, from which it figures out its own strategies to solve a problem or make a prediction, rather than relying on humans to explicitly program how it reaches a conclusion. The result is an array of innovative applications.
"Academia has a vital role to play in the development of AI and its applications. While the tech industry is often focused on short-term returns, realizing the full potential of AI to improve our world requires long-term vision," said Rebecca Willett, professor of statistics and computer science at the University of Chicago and a leading expert on AI foundations and applications in science. "Basic research at universities and national laboratories can establish the fundamentals of artificial intelligence and machine learning approaches, explore how to apply these technologies to solve societal challenges, and use AI to boost scientific discovery across fields."
Willett is one of the featured speakers at the InnovationXLab Artificial Intelligence Summit hosted by UChicago-affiliated Argonne National Laboratory, which will soon be home to the most powerful computer in the world, a machine being designed with an eye toward AI-style computing. The Oct. 2-3 summit showcases the U.S. Department of Energy lab, bringing together industry, universities, and investors with lab innovators and experts.
The workshop comes as researchers around UChicago and the labs are leading new explorations into AI.
For example, say that Andrew Ferguson, an associate professor at the Pritzker School of Molecular Engineering, wants to look for a new vaccine or flexible electronic materials. New materials essentially are just different combinations of chemicals and molecules, but there are literally billions of such combinations. How do scientists pick which ones to make and test in the lab? AI could quickly narrow down the list.
"There are many areas where the Edisonian approach, that is, having an army of assistants make and test hundreds of different options for the lightbulb, just isn’t practical," Ferguson said.
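The narrowing-down Ferguson describes is often done with a cheap surrogate model that scores every candidate computationally and keeps only the most promising few for lab testing. A minimal sketch of that filtering step; the candidate encoding and linear scoring model here are hypothetical stand-ins for a real trained predictor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: each candidate material encoded as a 16-dimensional feature vector.
candidates = rng.normal(size=(100_000, 16))

# Hypothetical surrogate: a linear model standing in for a trained predictor
# of some target property (binding affinity, conductivity, etc.).
weights = rng.normal(size=16)
scores = candidates @ weights

# Keep only the top 50 candidates for actual synthesis and testing.
top_k = 50
best = np.argsort(scores)[-top_k:][::-1]  # indices, highest score first
print(len(best))  # → 50
```

The point of the sketch is the shape of the workflow: scoring 100,000 candidates in silico takes milliseconds, so the expensive lab work is spent only on the shortlist.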
Then there’s the question of what happens if AI takes a turn at being the scientist. Some are wondering whether AI models could propose new experiments that might never have occurred to their human counterparts.
"For example, when someone programmed the rules for the game of Go into an AI, it invented strategies never seen in thousands of years of humans playing the game," said Brian Nord, an associate scientist in the Kavli Institute for Cosmological Physics and UChicago-affiliated Fermi National Accelerator Laboratory. "Maybe sometimes it will have more interesting ideas than we have."
Ferguson agreed: "If we write down the laws of physics and input those, what can AI tell us about the universe?"
But ensuring those applications are accurate, equitable, and effective requires more basic computer science research into the fundamentals of AI. UChicago scientists are exploring ways to reduce bias in model predictions, use advanced tools even when data is scarce, and develop "explainable AI" systems that produce more actionable insights and raise trust among users of those models.
"Most AIs right now just spit out an answer without any context. But a doctor, for example, is not going to accept a cancer diagnosis unless they can see why and how the AI got there," Ferguson said.
With the right calibration, however, researchers see a world of uses for AI. To name just a few: Willett, in collaboration with scientists from Argonne and the Department of Geophysical Sciences, is using machine learning to study clouds and their effect on weather and climate. Chicago Booth economist Sendhil Mullainathan is studying ways in which machine learning could change how we approach social problems, such as policies to alleviate poverty. And neurobiologist David Freedman, a professor in the University’s Division of Biological Sciences, is using machine learning to understand how brains interpret sights and sounds and make decisions.
Below are looks into three projects at the University showcasing the breadth of AI applications happening now.
The depths of the universe to the structures of atoms

We’re getting better and better at building telescopes to scan the sky and accelerators to smash particles at ever-higher energies. What comes along with that, however, is more and more data. For example, the Large Hadron Collider in Europe generates one petabyte of data per second; for perspective, in less than five minutes, that would fill up the world’s most powerful supercomputer. That’s way too much data to store. "You need to quickly pick out the interesting events to keep, and dump the rest," Nord said.
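The keep-or-dump step Nord describes is known as a trigger: a fast filter applied to the event stream before anything is written to disk. A minimal sketch, with hypothetical event records and an arbitrary energy cut standing in for a real trigger system:

```python
# Hypothetical event stream: each event carries a reconstructed energy (GeV).
events = [
    {"id": 1, "energy_gev": 12.3},
    {"id": 2, "energy_gev": 480.0},
    {"id": 3, "energy_gev": 5.1},
    {"id": 4, "energy_gev": 950.7},
]

THRESHOLD_GEV = 100.0  # arbitrary cut; real triggers apply many such criteria

def trigger(stream, threshold):
    """Yield only events above the threshold; everything else is discarded."""
    for event in stream:
        if event["energy_gev"] >= threshold:
            yield event

kept = list(trigger(events, THRESHOLD_GEV))
print([e["id"] for e in kept])  # → [2, 4]
```

In practice the filter must run at the full data rate, which is why researchers are interested in replacing hand-written criteria with fast learned classifiers.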
Similarly, each night hundreds of telescopes scan the sky. Existing computer programs are pretty good at picking interesting things out of the images, but there’s room to improve. (After LIGO detected the gravitational waves from two neutron stars crashing together in 2017, telescopes around the world had rooms full of people frantically looking through sky photos to find the point of light the collision created.)
Years ago, Nord was sitting and scanning telescope images to look for gravitational lensing, an effect in which large objects distort light as it passes. "We were spending all this time doing this by hand, and I thought, surely there has to be a better way," he said. In fact, the capabilities of AI were just turning a corner; Nord began writing programs to search for lensing with neural networks. Others had the same idea; the technique is now emerging as a standard approach to find gravitational lensing.
This year Nord is partnering with computer scientist Yuxin Chen to explore what they call a "self-driving telescope": a framework that could optimize when and where to point telescopes to gather the most interesting data.
"I view this collaboration between AI and science, in general, to be in a very early phase of development," Chen said. "The outcome of the research project will not only have transformative effects in advancing the basic science, but it will also allow us to use the science involved in the physical processes to inform AI development."
Disentangling style and content for art and science

In recent years, popular apps have sprung up that can transform photographs into different artistic forms, from generic modes such as charcoal sketches or watercolors to the specific styles of Dali, Monet and other masters. These "style transfer" apps use tools from the cutting edge of computer vision, primarily the neural networks that have proved adept at image classification for applications such as image search and facial recognition.
But beyond the novelty of turning your selfie into a Picasso, these tools kick-start a deeper conversation around the nature of human perception. From a young age, humans are capable of separating the content of an image from its style; that is, recognizing that photos of an actual bear, a stuffed teddy bear, or a bear made out of LEGOs all depict the same animal. What’s simple for humans can stump today’s computer vision systems, but Assoc. Profs. Jason Salavon and Greg Shakhnarovich think the "magic trick" of style transfer could help them catch up.
"The fact that we can look at pictures that artists create and still understand what’s in them, even though they sometimes look very different from reality, seems to be closely related to the holy grail of machine perception: what makes the content of the image understandable to people," said Shakhnarovich, an associate professor at the Toyota Technological Institute at Chicago.
Salavon and Shakhnarovich are collaborating on new style transfer approaches that separate, capture and manipulate content and style, unlocking new potential for art and science. These new models could transform a headshot into a much more distorted style, such as the distinctive caricatures of The Simpsons, or teach self-driving cars to better understand road signs in different weather conditions.
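In the most widely used style-transfer formulation (introduced by Gatys et al.), "content" is represented by a network layer’s feature maps directly, while "style" is captured by their Gram matrix, the correlations between feature channels, which discards spatial layout but keeps texture statistics. A minimal numpy sketch of the style representation, using random activations in place of a real network’s:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for one layer's activations: C feature channels over an H x W image.
C, H, W = 8, 32, 32
features = rng.normal(size=(C, H, W))

# Flatten the spatial dimensions, then take channel-by-channel correlations.
F = features.reshape(C, H * W)
gram = F @ F.T / (H * W)  # style representation: a C x C matrix

# The Gram matrix encodes which features co-occur (texture, "style")
# while throwing away where they occur (layout, "content").
print(gram.shape)  # → (8, 8)
```

Style transfer then optimizes an output image so its Gram matrices match the style image’s while its raw feature maps stay close to the content image’s.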
"We’re in a global arms race for making cool things happen with these technologies. From what would be called practical space to cultural space, there’s a lot of action," said Salavon, an associate professor in the Department of Visual Arts at the University of Chicago and an artist who makes "semi-autonomous art". "But ultimately, the idea is to get to some computational understanding of the ’essence’ of images. That’s the rich philosophical question."
Learning nature’s rules for protein design

Nature is an unparalleled engineer. Millions of years of evolution have created molecular machines capable of countless functions and survival in challenging environments, like deep sea vents. Scientists have long sought to harness these design skills and decode nature’s blueprints to build custom proteins of their own for applications in medicine, energy production, environmental clean-up and more. But only recently have the computational and biochemical technologies needed to create that pipeline become possible.
Ferguson and Prof. Rama Ranganathan are bringing these pieces together in an ambitious project funded by a Center for Data and Computing seed grant. Combining recent advancements in machine learning and synthetic biology, they will build an iterative pipeline to learn nature’s rules for protein design, then remix them to create synthetic proteins with elevated or even new functions and properties.
"It’s not just rebuilding what nature built; we can push it beyond what nature has ever shown us before," said Ranganathan. "This proposal is basically the starting point for building a whole framework of data-driven molecular engineering."
"The way we think of this project is we’re trying to mimic millions of years of evolution in the lab, using computation and experiments instead of natural selection," Ferguson said.
InnovationXLab Artificial Intelligence Summit
Artificial Intelligence: Transforming science, improving lives
Oct. 2 at 8:15 a.m.: XLab Day 1 AI Summit
Oct. 3 at 8:15 a.m.: XLab Day 2 AI Summit