
# Making big data a little smaller

When we think about digital information, we often think about size. A daily email newsletter, for example, may be 75 to 100 kilobytes in size. But data also has dimensions, based on the number of variables in a piece of data. An email, for example, can be viewed as a high-dimensional vector with one coordinate for each word in the dictionary, where the value of that coordinate is the number of times the word appears in the email. So a 75 KB email that is 1,000 words long would correspond to a vector with millions of coordinates.
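The bag-of-words view described above can be sketched in a few lines. This is a minimal illustration with a hypothetical toy dictionary, not code from the research; real dictionaries would have hundreds of thousands of entries, with most coordinates equal to zero.

```python
# Minimal sketch of the bag-of-words representation: one coordinate per
# dictionary word, whose value counts occurrences of that word in the email.
from collections import Counter

# Hypothetical toy dictionary for illustration.
dictionary = ["buy", "cheap", "hello", "meeting", "now", "tomorrow"]

def email_to_vector(text, dictionary):
    """Map an email to a word-count vector over a fixed dictionary."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in dictionary]

vec = email_to_vector("Buy now buy cheap", dictionary)
print(vec)  # → [2, 1, 0, 0, 1, 0]
```

Note that most coordinates are zero: this is the sparsity Nelson mentions later, since a typical email contains only a tiny fraction of the dictionary.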

This geometric view of data is useful in some applications, such as learning spam classifiers, but it comes at a cost: the more dimensions, the longer an algorithm can take to run and the more memory it uses.

As data processing got more and more complex in the mid-to-late 1990s, computer scientists turned to pure mathematics to help speed up the algorithmic processing of data. In particular, researchers found a solution in a theorem proved in the 1980s by mathematicians William B. Johnson and Joram Lindenstrauss, working in the area of functional analysis.

Computer scientists have used the theorem, known as the Johnson-Lindenstrauss lemma (JL lemma), to reduce the dimensionality of data and speed up all types of algorithms across many different fields, from streaming and search algorithms to fast approximation algorithms for statistics and linear algebra, and even algorithms for computational biology.

But as data has grown even larger and more complex, many computer scientists have asked: Is the JL lemma really the best approach to pre-process large data into a manageably low dimension for algorithmic processing?

Now, Jelani Nelson, the John L. Loeb Associate Professor of Engineering and Applied Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences, has put that debate to rest. In a paper presented this week at the annual IEEE Symposium on Foundations of Computer Science in Berkeley, California, Nelson and co-author Kasper Green Larsen, of Aarhus University in Denmark, found that the JL lemma really is the best way to reduce the dimensionality of data.

"We have proven that there are ’hard’ data sets for which dimensionality reduction beyond what’s provided by the JL lemma is impossible," said Nelson.

Essentially, the JL lemma showed that for any finite collection of points in high dimension, there is a collection of points in a much lower dimension that preserves all distances between the points, up to a small amount of distortion. Years after its original impact in functional analysis, computer scientists found that the JL lemma can act as a preprocessing step, allowing the dimensionality of data to be significantly reduced before running algorithms.
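For readers who want the precise statement, a standard formulation of the lemma (with the target-dimension bound as it is usually quoted in the literature, not taken verbatim from this article) is:

```latex
% Johnson-Lindenstrauss lemma (standard formulation).
% For any $0 < \varepsilon < 1$ and any $n$ points
% $x_1, \dots, x_n \in \mathbb{R}^d$, there is a linear map
% $f : \mathbb{R}^d \to \mathbb{R}^m$ with
%   $m = O(\varepsilon^{-2} \log n)$
% such that for all pairs $i, j$:
\[
(1 - \varepsilon)\,\lVert x_i - x_j \rVert_2^2
\;\le\; \lVert f(x_i) - f(x_j) \rVert_2^2
\;\le\; (1 + \varepsilon)\,\lVert x_i - x_j \rVert_2^2 .
\]
```

The Larsen-Nelson result described in this article shows that, up to technical conditions on the parameters, this target dimension cannot be improved: there exist point sets for which any map achieving distortion $\varepsilon$ needs $m = \Omega(\varepsilon^{-2} \log n)$ dimensions.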

Rather than processing each and every dimension - like the millions of coordinates in an email vector - the JL lemma maps the data into a lower-dimensional space in a way that approximately preserves its geometry. In this lower-dimensional picture, the individual dimensions matter less than the relationships between data points: the distances and angles between points are preserved, just in fewer dimensions.
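One standard way to realize such a map - a random Gaussian projection - can be sketched in pure Python. This is an illustrative toy, not the authors' construction; the dimensions and seeds below are arbitrary choices made so the example is reproducible.

```python
# Sketch of JL-style dimensionality reduction: project points through a
# random Gaussian matrix scaled by 1/sqrt(k). With high probability this
# preserves pairwise distances up to small distortion.
import math
import random

def random_projection(points, k, seed=0):
    """Project d-dimensional points down to k dimensions."""
    rng = random.Random(seed)
    d = len(points[0])
    # k x d matrix of independent standard Gaussian entries.
    matrix = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(k)]
    scale = 1.0 / math.sqrt(k)
    return [
        [scale * sum(row[i] * p[i] for i in range(d)) for row in matrix]
        for p in points
    ]

def dist(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Three random points in 1,000 dimensions, projected down to 200.
rng = random.Random(1)
points = [[rng.gauss(0.0, 1.0) for _ in range(1000)] for _ in range(3)]
projected = random_projection(points, k=200)

for i in range(3):
    for j in range(i + 1, 3):
        ratio = dist(projected[i], projected[j]) / dist(points[i], points[j])
        print(f"pair ({i},{j}): distance ratio {ratio:.3f}")  # close to 1
```

Each pairwise distance in the 200-dimensional projection comes out close to its original 1,000-dimensional value, which is exactly the guarantee the lemma provides.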

Of course, the JL lemma has a wide range of applications that go far beyond spam filters. It is used in compressed sensing, for reconstructing sparse signals from few linear measurements; in clustering high-dimensional data; and in DNA motif finding in computational biology.

"We still have a long way to go to understand the best dimension reduction possible for specific data sets as opposed to comparing to the worst case," said Nelson. "I think that’s a very interesting direction for future work. There are also some interesting open questions related to how quickly we can perform the dimensionality reduction, especially when faced with high-dimensional vectors that are sparse, i.e. have many coordinates equal to zero. This sparse case is very relevant in many practical applications. For example, vectors arising from e-mails are extremely sparse, since a typical email does not contain every word in the dictionary."

"The Johnson-Lindenstrauss Lemma is a fundamental result in high dimensional geometry but an annoying logarithmic gap remained between the upper and lower bounds for the minimum possible dimension required as a function of the number of points and the distortion allowed," said Noga Alon, professor of Mathematics at Tel Aviv University, who had proven the previous best lower bound for the problem. "The recent work of Jelani Nelson and Kasper Green Larsen settled the problem. It is a refreshing demonstration of the power of a clever combination of combinatorial reasoning with geometric tools in the solution of a classical problem."

