The elusive capacity of networks

Calculating the total capacity of a data network is a notoriously difficult problem, but information theorists are beginning to make some headway.

In its early years, information theory, which grew out of a landmark 1948 paper by MIT alumnus and future professor Claude Shannon, was dominated by research on error-correcting codes: How do you encode information so as to guarantee its faithful transmission, even in the presence of the corrupting influences engineers call "noise"? More recently, one of the most intriguing developments in information theory has been a different kind of coding, called network coding, in which the question is how to encode information in order to maximize the capacity of a network as a whole.

For information theorists, it was natural to ask how these two types of coding might be combined: If you want to both minimize error and maximize capacity, which kind of coding do you apply where, and when do you do the decoding? What makes that question particularly hard to answer is that no one knows how to calculate the data capacity of a network as a whole, or even whether it can be calculated at all.

Nonetheless, in the first half of a two-part paper, published recently in IEEE Transactions on Information Theory, MIT's Muriel Médard, the California Institute of Technology's Michelle Effros and the late Ralf Koetter of the Technical University of Munich show that in a wired network, network coding and error-correcting coding can be handled separately, without reducing the network's capacity.
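To give a concrete flavor of what network coding does, here is a minimal sketch of the textbook "butterfly network" example, in which an intermediate node XORs two incoming bits so that a single bottleneck link can serve two destinations at once. The node names and one-bit messages are illustrative assumptions, not details from the paper discussed above.

```python
# Network coding on the classic butterfly network (illustrative sketch).
# Two sources emit bits a and b; a single bottleneck link cannot carry
# both bits separately, but it CAN carry their XOR, and each sink can
# then recover the bit it is missing.

def butterfly(a: int, b: int):
    coded = a ^ b            # the bottleneck link carries the XOR of both bits
    # Sink 1 hears a directly (side link) plus the coded bit;
    # Sink 2 hears b directly plus the coded bit.
    b_at_sink1 = a ^ coded   # sink 1 recovers b by XOR-ing again
    a_at_sink2 = b ^ coded   # sink 2 recovers a the same way
    # Both sinks end up with the full pair (a, b).
    return (a, b_at_sink1), (a_at_sink2, b)
```

With plain routing, the bottleneck link could forward only one of the two bits per use; the XOR trick lets both sinks receive both bits in the same number of link uses, which is the capacity gain network coding is after.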