A representation of the difference between a dense and a sparse neural network. The demands of real biological neural networks are modest compared with the complex deep neural networks used in machine learning, which come with substantial memory and energy costs. The Zenke group have developed a new machine-learning method called Neural Tangent Transfer, which produces a sparse neural network that performs almost as well as a densely connected deep neural network on various learning tasks, but at a greatly reduced computing cost.

The brains of newborn babies are already highly structured, yet the connectivity between neurons is very sparse: of some 80 billion neurons, most talk to only about 1,000-10,000 of their peers. Even so, this architecture supports rapid learning as the child grows. To understand the principles that underlie such information processing and rapid learning, the Zenke group use computational and theoretical approaches from deep learning, a field of machine learning based on artificial neural networks that learn automatically from experience, to investigate what makes biological neural networks such good learners despite their sparse connections.
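The contrast between dense and sparse connectivity can be illustrated with a minimal sketch, not the Zenke group's actual implementation: a layer whose dense weight matrix is multiplied by a fixed binary mask, so that only a small fraction of the connections exist. The layer sizes and the 10% density chosen here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sparse_layer(n_in, n_out, density=0.1):
    """Build a weight matrix plus a fixed binary mask.

    Only a fraction `density` of the connections are kept,
    loosely mimicking the sparse connectivity of biological
    neurons (each talking to a small subset of its peers).
    """
    weights = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
    mask = rng.random((n_in, n_out)) < density  # ~10% of entries kept
    return weights, mask

def forward(x, weights, mask):
    # Only the masked (existing) connections contribute to the output;
    # the other ~90% of weights are zeroed out.
    return x @ (weights * mask)

# Illustrative sizes: 784 inputs (e.g. a flattened 28x28 image), 100 units.
weights, mask = make_sparse_layer(784, 100, density=0.1)
x = rng.normal(size=(1, 784))
y = forward(x, weights, mask)
print(y.shape)        # output has one row of 100 activations
print(mask.mean())    # fraction of connections kept, close to 0.1
```

Because roughly 90% of the weights are masked out, a sparse implementation needs to store and multiply only the surviving entries, which is where the memory and energy savings come from.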