Frontiers in Recurrent Neural Network Research
In the last few years, recurrent neural networks (RNNs) have become the Swiss army knife of sequence processing for machine learning. Problems involving long and complex data streams, such as speech recognition, machine translation and reinforcement learning from raw video, are now routinely tackled with RNNs. However, significant limitations remain, such as these systems' limited ability to retain large amounts of information in memory and the difficulty of gradient-based training on very long sequences. My talk will review some of the new architectures and training strategies currently being developed to extend the frontiers of this exciting field.
Bio: Alex Graves completed a BSc in Theoretical Physics at the University of Edinburgh, Part III Maths at the University of Cambridge, and a PhD in artificial intelligence at IDSIA with Jürgen Schmidhuber, followed by postdoctoral work at the Technical University of Munich and with Geoff Hinton at the University of Toronto. He is now a research scientist at DeepMind. His contributions include the Connectionist Temporal Classification algorithm for sequence labelling (now widely used for commercial speech and handwriting recognition), stochastic variational inference for neural network training, and the Neural Turing Machine / Differentiable Neural Computer architectures.
Talk Location: MaRS (101 College Street, Toronto).