Could Machines Learn Like Humans?
- Speaker: Yann LeCun, Facebook AI Research and New York University
Presidential Lectures are a series of free public colloquia spotlighting groundbreaking research across four themes: neuroscience and autism science, physics, biology, and mathematics and computer science. These curated, high-level scientific talks feature leading scientists and mathematicians and are designed to foster discussion and drive discovery within the New York City research community. We invite those interested in these topics to join us for this weekly lecture series.
Deep learning has enabled significant progress in computer perception, natural language understanding and control. However, almost all of these successes rely on supervised learning, where the machine is required to predict human-provided annotations, or on model-free reinforcement learning, where the machine learns actions that maximize rewards. Supervised learning requires a large number of labeled samples, making it practical only for certain tasks. Reinforcement learning requires a very large number of interactions with the environment (and many failures) to learn even simple tasks. In contrast, animals and humans seem to learn vast amounts of task-independent knowledge about how the world works through mere observation and occasional interactions. Learning a new task or skill then requires very few samples or interactions with the world: we learn to drive a car or fly a plane in about 30 hours of practice, with no fatal failures. What learning paradigm do humans and animals use to learn so efficiently?
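The contrast between these paradigms comes down to where the training signal originates. The sketch below is purely schematic, with `model`, `env`, and `policy` as hypothetical stand-ins rather than any particular library's API:

```python
def supervised_loss(model, x, human_label):
    """Requires a human-provided annotation for every training sample."""
    return (model(x) - human_label) ** 2

def reinforcement_feedback(env, policy):
    """Feedback is a single scalar reward per environment interaction,
    so learning even simple tasks takes many trials and failures."""
    action = policy(env.observe())
    return env.step(action)

def self_supervised_loss(model, past_observation, future_observation):
    """No annotations needed: one part of the data (the future) serves
    as the prediction target for another part (the past)."""
    return (model(past_observation) - future_observation) ** 2
```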
In this lecture, Yann LeCun will propose the hypothesis that self-supervised learning of predictive world models is an essential missing ingredient of current approaches to AI. With such models, one can predict outcomes and plan courses of action. One could argue that prediction is the essence of intelligence. Good predictive models may be the basis of intuition, reasoning and “common sense,” allowing us to fill in missing information: predicting the future from the past and present, or inferring the state of the world from noisy percepts. After a brief presentation of the state of the art in deep learning, he will discuss some promising principles and methods for self-supervised learning.
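To make the idea of a predictive world model concrete, here is a minimal sketch in PyTorch of self-supervised prediction on a toy observation stream. The data (a noisy sine wave), window size, and network shape are illustrative assumptions, not the architecture discussed in the lecture; the point is only that the supervisory signal comes from the data itself.

```python
import torch
import torch.nn as nn

WINDOW = 8  # number of past observations the model conditions on

# Toy "world": a noisy sine wave standing in for sensory observations.
t = torch.linspace(0, 20, 1000)
obs = torch.sin(t) + 0.05 * torch.randn_like(t)

# Build (past window -> next observation) pairs from the raw stream.
# No human labels are needed: the future is the training target.
X = torch.stack([obs[i : i + WINDOW] for i in range(len(obs) - WINDOW)])
y = obs[WINDOW:].unsqueeze(1)

model = nn.Sequential(nn.Linear(WINDOW, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(200):
    opt.zero_grad()
    # Predict the future from the past; the error is the learning signal.
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

print(f"final prediction error: {loss.item():.4f}")
```

Once trained, such a model can be queried with a hypothetical past window to anticipate what comes next, which is the basic capability that planning a course of action builds on.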