Making big data manageable

A new technique devised by MIT researchers can take data sets with huge numbers of variables and find approximations of them with far fewer variables.
One way to handle big data is to shrink it. If you can identify a small subset of your data set that preserves its salient mathematical relationships, you may be able to perform useful analyses on it that would be prohibitively time-consuming on the full set. The methods for creating such 'coresets' vary according to application, however.

Last week, at the Annual Conference on Neural Information Processing Systems, researchers from MIT's Computer Science and Artificial Intelligence Laboratory and the University of Haifa in Israel presented a new coreset-generation technique that's tailored to a whole family of data analysis tools with applications in natural-language processing, computer vision, signal processing, recommendation systems, weather prediction, finance, and neuroscience, among many others.

'These are all very general algorithms that are used in so many applications,' says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and senior author on the new paper. 'They're fundamental to so many problems. By figuring out the coreset for a huge matrix for one of these tools, you can enable computations that at the moment are simply not possible.'

As an example, in their paper the researchers apply their technique to a matrix - that is, a table - that maps every article on the English version of Wikipedia against every word that appears on the site.
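The coreset idea can be made concrete with a small, generic sketch. The snippet below is not the construction from the MIT/Haifa paper; it is a standard norm-based row-sampling sketch, with matrix sizes and the singular-value comparison chosen purely for illustration. It samples and reweights a few rows of a stand-in article-by-word matrix so that the small weighted sample roughly preserves the quantities that low-rank tools such as PCA or latent semantic analysis depend on.

```python
# Illustrative coreset-style sketch (assumption: generic norm-based row
# sampling, NOT the specific method described in the paper).
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a huge article-by-word matrix; the real Wikipedia matrix
# has millions of rows and hundreds of thousands of columns.
n_articles, n_words = 5000, 300
A = rng.poisson(0.3, size=(n_articles, n_words)).astype(float)

# Sample a small number of rows with probability proportional to their
# squared norm, then reweight so expectations match the full matrix.
k = 200
row_norms = np.square(A).sum(axis=1)
probs = row_norms / row_norms.sum()
idx = rng.choice(n_articles, size=k, replace=True, p=probs)
weights = 1.0 / np.sqrt(k * probs[idx])
C = A[idx] * weights[:, None]          # the "coreset": k weighted rows

# The weighted sample approximately preserves A^T A, and hence the top
# singular values that low-rank analyses rely on.
sv_full = np.linalg.svd(A, compute_uv=False)[:5]
sv_core = np.linalg.svd(C, compute_uv=False)[:5]
print("top singular values (full):   ", np.round(sv_full, 1))
print("top singular values (coreset):", np.round(sv_core, 1))
```

Running the sketch shows the coreset's leading singular values tracking those of the full matrix despite using only a small fraction of the rows, which is the sense in which a coreset 'preserves the salient mathematical relationships' of the data.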