Deep Reinforcement Learning for Robotics
Programming robots remains notoriously difficult. Equipping robots with the ability to learn would bypass the need for what otherwise often ends up being time-consuming, task-specific programming. This talk will describe recent progress in deep reinforcement learning, in which robots learn through their own trial and error, and the resulting capabilities in robotics. I will discuss technical advances in policy gradient methods, in learning to reinforcement learn, and in safe reinforcement learning, and I will present resulting robotic capabilities in manipulation, locomotion, and flight.
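The core idea behind the policy gradient methods mentioned above can be sketched in a few lines. Below is a minimal REINFORCE-style example on a two-armed bandit; the environment, hyperparameters, and running-average baseline are illustrative assumptions for this sketch, not details from the talk:

```python
import numpy as np

# Illustrative two-armed bandit (assumed environment, not from the talk):
# the agent learns a softmax policy over two arms via the REINFORCE
# policy gradient, nudging logits toward actions with above-baseline reward.

rng = np.random.default_rng(0)
ARM_MEANS = np.array([0.2, 0.8])  # expected reward of each arm


def softmax(logits):
    z = logits - logits.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()


def train(steps=2000, lr=0.1):
    theta = np.zeros(2)  # one logit per arm
    baseline = 0.0       # running-average reward baseline (variance reduction)
    for _ in range(steps):
        probs = softmax(theta)
        a = rng.choice(2, p=probs)              # sample action from the policy
        r = rng.normal(ARM_MEANS[a], 0.1)       # noisy reward from the bandit
        baseline += 0.01 * (r - baseline)
        # grad of log pi(a | theta) for a softmax policy: onehot(a) - probs
        grad_logp = -probs
        grad_logp[a] += 1.0
        theta += lr * (r - baseline) * grad_logp  # REINFORCE ascent step
    return softmax(theta)


probs = train()
```

After training, the policy concentrates probability on the higher-reward arm; the same gradient estimator, with a neural-network policy and real sensorimotor trajectories, underlies the robotic learning described in the talk.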
Pieter Abbeel (Associate Professor, UC Berkeley EECS) works in machine learning and robotics. In particular, his research focuses on making robots learn from people (apprenticeship learning) and on making robots learn through their own trial and error (reinforcement learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, and organizing laundry. He has received various awards, including best paper awards at ICML and ICRA, the Sloan Fellowship, the Air Force Office of Scientific Research Young Investigator Program
(AFOSR-YIP) award, the Office of Naval Research Young Investigator Program (ONR-YIP) award, the DARPA Young Faculty Award (DARPA-YFA), the National Science Foundation Faculty Early Career Development Program Award (NSF-CAREER), the Presidential Early Career Award for Scientists and Engineers (PECASE), the CRA-E Undergraduate Research Faculty Mentoring Award, the MIT TR35, the IEEE Robotics and Automation Society (RAS) Early Career Award, and the Dick Volz Best U.S. Ph.D. Thesis in Robotics and Automation Award.