New programming approach seeks to make large-scale computation more reliable

Moore's Law, the observation that the number of transistors on an integrated circuit doubles roughly every two years, has been good to us. Prices for computers have dropped precipitously over the last few decades, even as their power has skyrocketed. But as we approach the 50th anniversary of Moore's Law, that paradigm may be coming to an end: today's circuitry is so small that it is brushing up against the limits of quantum mechanics.

Future computers will need a new paradigm, argues Andrew Chien, the William Eckhardt Distinguished Service Professor of Computer Science and senior fellow in the Computation Institute, who is involved in several projects to pave the way for one. One such project is already bearing fruit: a concept called Global View Resilience, designed not so much to prevent errors as to allow a program to recover from them.

Hardware and software experts in large-scale scientific computation have traditionally assumed that they could depend on their computer hardware to be reliable, Chien explained. But the closer circuitry gets to the quantum limit, and the more complex supercomputers and the programs they run become, the greater the odds that somewhere along the line something will go wrong.
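The recovery-oriented idea can be made concrete with a minimal sketch. The snippet below is a hypothetical illustration of versioned, recoverable application state, not the actual Global View Resilience API: the VersionedState class, the snapshot interval, and the NaN-based error check are all invented for this example, which assumes that errors can be detected at the application level and that rolling back to a recent snapshot is cheaper than restarting the whole computation.

```python
import copy
import random

class VersionedState:
    """Keep periodic snapshots of application state so a computation
    can roll back to a known-good version after a detected error.
    Illustrative only; this is not the GVR library's actual API."""

    def __init__(self, state):
        self.state = state
        self.versions = [copy.deepcopy(state)]  # version 0

    def commit(self):
        """Record the current state as a new recoverable version."""
        self.versions.append(copy.deepcopy(self.state))

    def rollback(self):
        """Discard the current state and restore the newest snapshot."""
        self.state = copy.deepcopy(self.versions[-1])
        return self.state


def step(values):
    """One iteration of a toy computation, with a small chance of a
    simulated transient fault corrupting the result."""
    values = [v + 1.0 for v in values]
    if random.random() < 0.1:  # simulated soft error
        values[0] = float("nan")
    return values


def is_consistent(values):
    """Application-level error detection: here, a simple NaN check
    (NaN != NaN, so a corrupted value fails the comparison)."""
    return all(v == v for v in values)


if __name__ == "__main__":
    gvr = VersionedState([0.0] * 4)
    for i in range(20):
        gvr.state = step(gvr.state)
        if not is_consistent(gvr.state):
            print(f"step {i}: error detected, rolling back")
            gvr.rollback()  # resume from the last good version
        elif i % 5 == 0:
            gvr.commit()    # snapshot every few successful steps
    print("final state:", gvr.state)
```

The sketch illustrates the article's central point: rather than assuming the hardware never fails, the program keeps enough of its own history to detect a fault and resume from a known-good state, trading a little memory and bookkeeping for the ability to survive errors mid-run.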