
Saturday, August 18, 2012

Geoffrey Hinton - Brains, Sex, and Machine Learning


This Google Tech Talk features University of Toronto computer scientist and AI expert Geoffrey Hinton speaking on Brains, Sex, and Machine Learning. He highlights how computer modeling can help us understand why the brain functions as it does - in particular, why cortical neurons perform signal processing by sending single, randomly timed spikes rather than using the precise timing of spikes to communicate exact values. The talk demonstrates some of the ways the brain operates as a complex adaptive system - and, for me at least, how difficult it is to create AI systems that can even approximate the function of the brain.



Brains, Sex, and Machine Learning
Geoffrey Hinton, University of Toronto

Abstract:
Recent advances in machine learning cast new light on two puzzling biological phenomena. Neurons can use the precise time of a spike to communicate a real value very accurately, but it appears that cortical neurons do not do this. Instead they send single, randomly timed spikes. This seems like a clumsy way to perform signal processing, but a recent advance in machine learning shows that sending stochastic spikes actually works better than sending precise real numbers for the kind of signal processing that the brain needs to do. A closely related advance in machine learning provides strong support for a recently proposed theory of the function of sexual reproduction. Sexual reproduction breaks up large sets of co-adapted genes, and this seems like a bad way to improve fitness. However, it is a very good way to make organisms robust to changes in their environment, because it forces important functions to be achieved redundantly by multiple small sets of genes, and some of these sets may still work when the environment changes. For artificial neural networks, complex co-adaptations between learned feature detectors give good performance on training data but not on new test data. Complex co-adaptations can be reduced by randomly omitting each feature detector with a probability of one half for each training case. This random "dropout" makes the network perform worse on the training data, but it typically decreases the number of errors on the test data by about 10%. Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever and Ruslan Salakhutdinov have shown that this leads to large improvements in speech recognition and object recognition.
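To make the two ideas in the abstract concrete, here is a minimal sketch in Python/NumPy - written by me, not taken from the talk: a unit that sends a single stochastic binary spike with probability given by its logistic activation, and a hidden layer that applies dropout by omitting each feature detector with probability one half per training case. The layer sizes, the ReLU activation, and the test-time scaling by 0.5 are illustrative assumptions, not Hinton's exact implementation.

import numpy as np

rng = np.random.default_rng(0)

def stochastic_spikes(x):
    """Send single stochastic binary spikes instead of precise real
    values: each unit fires with probability sigmoid(x)."""
    p = 1.0 / (1.0 + np.exp(-x))
    return (rng.random(p.shape) < p).astype(float)

def hidden_layer(X, W, b, train=True, p_drop=0.5):
    """One layer of feature detectors with dropout: during training,
    each detector is omitted (zeroed) with probability p_drop,
    independently for every training case."""
    h = np.maximum(0.0, X @ W + b)            # feature detector activities
    if train:
        keep = rng.random(h.shape) >= p_drop  # Bernoulli keep-mask per case
        return h * keep
    # At test time use all detectors, scaled so the expected input
    # to the next layer matches what was seen during training.
    return h * (1.0 - p_drop)

# Tiny usage example with made-up sizes:
X = rng.standard_normal((4, 10))  # 4 training cases, 10 inputs
W = rng.standard_normal((10, 6))  # 6 feature detectors
b = np.zeros(6)
h_train = hidden_layer(X, W, b, train=True)   # about half the detectors zeroed per case
h_test = hidden_layer(X, W, b, train=False)   # all detectors, outputs halved

Training this way means every case is processed by a different randomly thinned network, which prevents detectors from relying on large co-adapted groups - the same reason, the abstract argues, that sexual reproduction prevents large co-adapted gene sets.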

Bio:
Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He spent five years as a faculty member in the Computer Science Department at Carnegie Mellon University, then moved to the Department of Computer Science at the University of Toronto, where he is the director of the program on Neural Computation and Adaptive Perception, which is funded by the Canadian Institute for Advanced Research. He has been awarded the David E. Rumelhart Prize, the IJCAI Award for Research Excellence, the Killam Prize for Engineering, and the NSERC Herzberg Gold Medal, which is Canada's top award in Science and Engineering.

Geoffrey Hinton designs machine learning algorithms. His aim is to discover a learning procedure that is efficient at finding complex structure in large, high-dimensional datasets and to show that this is how the brain learns to see. He was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts, deep belief nets and dropout.
