Deep Learning Summer Camp, London
Event Information
Description
NVIDIA, in partnership with Persontyle, is excited to announce this 3-day hands-on deep learning summer camp in London, 4-6 July 2016. Brought to you by the NVIDIA Deep Learning Institute.
Deep Learning Summer Camp (Methods, Applications and Platforms)
Learn, hands-on, everything you need to design, train, and integrate neural network-powered artificial intelligence into your applications, using widely used open-source frameworks.
Overview:
Deep learning is currently the most powerful machine learning technique, and companies such as Google, Microsoft, and Facebook are actively investing in and growing deep learning teams. For the rest of us, however, deep learning is still a fairly complex and difficult subject to grasp. If you have a basic understanding of what machine learning is, familiarity with the Python programming language, and some mathematical background, then the Deep Learning London Summer Camp will help you get started.
Deep learning is a hot topic in both academia and industry since it has recently dramatically improved the state-of-the-art in areas such as speech recognition, computer vision, predicting the activity of drug molecules, and many other machine learning tasks. The basic idea in deep learning is to automatically learn to represent data in multiple layers of increasing abstraction, thus helping to discover intricate structure in large datasets.
Deep Learning Summer Camp is a great opportunity to learn the fundamentals of deep learning: definitions of neural networks and their learning criteria, specific aspects of deep networks, optimization, regularization, convolutional structures for image data, recurrent neural networks for sequence modelling, and supervised and unsupervised learning.
Target Audience:
This summer camp is aimed at researchers, developers, hackers, postgraduate students, data scientists, quants, and data analysts who already know about machine learning and have experience in programming.
Prerequisites:
- Programming experience is a must (preferably in Python)
- Basic machine learning experience
- Basic statistics, linear algebra and calculus
Registration:
Corporate attendees: £700 + VAT, including course material as well as breakfast, lunch and coffee service on all days. Student attendees: £350 + VAT (50% discount).
Agenda:
DAY 1
9:30-10:00 Registration and Breakfast
10:00-10:15 Welcome Note by NVIDIA and Persontyle (Jack Watts & Ali Syed)
10:15-11:15 Theory: Introduction to Deep Learning (Ole Winther)
The “deep learning revolution”, image classification, speech recognition, reinforcement learning in games and beyond, the feed-forward neural network, artificial intelligence.
11:15-11:30 Coffee Break
11:30-12:45 Theory: Training feed-forward neural networks (Ole Winther, Søren Kaae Sønderby)
Optimization, error back-propagation, regularization, tricks of the trade, getting started with programming.
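To give a flavour of what this session covers, here is a minimal NumPy sketch of one error back-propagation step on a one-hidden-layer network. The data, layer sizes and learning rate are illustrative assumptions, not course material.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))          # 8 examples, 4 input features
y = rng.standard_normal((8, 1))          # regression targets

W1 = rng.standard_normal((4, 16)) * 0.1  # input -> hidden weights
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.1  # hidden -> output weights
b2 = np.zeros(1)
lr = 0.1                                 # learning rate

def forward(X):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    return h, h @ W2 + b2                # hidden layer and prediction

h, pred = forward(X)
loss_before = np.mean((pred - y) ** 2)   # mean squared error

# Error back-propagation: apply the chain rule layer by layer.
grad_pred = 2 * (pred - y) / len(X)      # dL/dpred
grad_W2 = h.T @ grad_pred
grad_b2 = grad_pred.sum(axis=0)
grad_h = grad_pred @ W2.T
grad_z1 = grad_h * (1 - h ** 2)          # tanh'(z) = 1 - tanh(z)^2
grad_W1 = X.T @ grad_z1
grad_b1 = grad_z1.sum(axis=0)

# One gradient-descent update on every parameter.
W1 -= lr * grad_W1; b1 -= lr * grad_b1
W2 -= lr * grad_W2; b2 -= lr * grad_b2
_, pred = forward(X)
loss_after = np.mean((pred - y) ** 2)    # loss shrinks after the step
```

The labs extend exactly this loop with regularization and the "tricks of the trade" (initialization, learning-rate schedules) discussed in the session.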
12:45-13:30 Lunch
13:30-14:15 Guest Speaker, Daghan Cam, CEO, AI Build
14:15-15:30 Hands-on lab 1: Getting Started with Theano for Deep Learning (Pyry Takala & Søren Kaae Sønderby)
Implement and train a feed-forward network in Theano.
15:30-15:45 Coffee Break
15:45-16:45 Theory: Convolutional neural networks (Ole Winther)
Understanding the convolutional architecture, convolutional and pooling layers, applications to images.
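The two building blocks of this session can be sketched in a few lines of NumPy; the image and filter sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.standard_normal((8, 8))      # one-channel 8x8 input
kernel = rng.standard_normal((3, 3))     # one 3x3 convolutional filter

# "Valid" convolution: slide the filter over every 3x3 patch of the image.
out = np.empty((6, 6))
for i in range(6):
    for j in range(6):
        out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)

# 2x2 max pooling halves the spatial resolution, keeping the strongest
# response in each patch.
pooled = out.reshape(3, 2, 3, 2).max(axis=(1, 3))
```

A convolutional network stacks many such filter banks and pooling layers, with the filter weights learned by back-propagation rather than fixed.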
16:45-17:30 Hands-on lab 2: Implementing Convnet using Theano (Pyry Takala & Søren Kaae Sønderby)
Extend the feed-forward network from Lab 1 with convolutional filters, one of the most important techniques for working with images.
17:30-17:45 Summary of the learning goals and day 1 close
DAY 2
9:00-9:30 Breakfast
9:30-10:45 Theory: Recurrent neural networks (Ole Winther)
Understanding the recurrent architecture; Elman, LSTM and GRU units; bi-directional networks; combining with convolutional and feed-forward layers; applications to speech, biological sequences and information retrieval.
10:45-11:00 Coffee Break
11:00-12:15 Hands-on lab 3: Implementing RNNs using Theano (Pyry Takala & Søren Kaae Sønderby)
Implement and train RNNs using LSTM units on a simple natural language processing task. We’ll discuss some tricks of the trade for training RNNs in practice.
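As a taste of the lab, one LSTM step can be written out directly in NumPy; the sizes and (random) weights below are illustrative assumptions, whereas a real model learns the weights by back-propagation through time.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid = 4, 8
x = rng.standard_normal(n_in)            # current input
h = np.zeros(n_hid)                      # previous hidden state
c = np.zeros(n_hid)                      # previous cell state

# One weight matrix and bias per gate: input, forget, output, candidate.
W = rng.standard_normal((4, n_hid, n_in + n_hid)) * 0.1
b = np.zeros((4, n_hid))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

xh = np.concatenate([x, h])              # gates see input and previous state
i = sigmoid(W[0] @ xh + b[0])            # input gate
f = sigmoid(W[1] @ xh + b[1])            # forget gate
o = sigmoid(W[2] @ xh + b[2])            # output gate
g = np.tanh(W[3] @ xh + b[3])            # candidate cell update
c = f * c + i * g                        # new cell state
h = o * np.tanh(c)                       # new hidden state
```

The gating is what lets LSTMs carry information over long sequences; the Elman unit drops the gates, and the GRU merges them into two.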
12:15-13:00 Guest Speaker, Eduard Vasquez, Head of Research, Cortexica
13:00-13:45 Lunch
13:45-14:15 Theory: Getting started with Torch for deep learning (Pyry Takala & Abhishek Aggarwal)
About Torch > Benefits of Torch > Installation > Lua > Tensors > Advanced
14:15-15:15 Hands-on lab 4: Torch7 for Deep Learning (QwikLab) (Pyry Takala & Abhishek Aggarwal)
What is Torch? > Lua > CUDA Tensors > Creating a neural network > Defining a loss function > Training a network > Exercises
15:15-15:30 Coffee Break
15:30-16:45 Theory: Google TensorFlow for deep learning (Pyry Takala & Abhishek Aggarwal)
About TensorFlow > Benefits of TensorFlow > Installation > Preparing data > Training models > Evaluating models > Advanced
16:45-17:30 Hands-on lab 5: TensorFlow examples and exercises (Pyry Takala & Abhishek Aggarwal)
17:30-17:45 Summary of the learning goals and day 2 close
DAY 3
9:00-9:30 Breakfast
9:30-10:30 Theory: Unsupervised learning (Ole Winther)
Generative models for un- and semi-supervised learning, autoencoders, ladder networks, image and text applications.
10:30-11:15 Hands-on lab 6: Unsupervised Learning (Pyry Takala & Søren Kaae Sønderby)
Unsupervised learning for images using autoencoders.
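The core of the lab, an autoencoder trained on reconstruction error, fits in a short NumPy sketch; the data, code size and learning rate here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((16, 12))            # 16 examples, 12 features
W_enc = rng.standard_normal((12, 3)) * 0.1   # encoder: 12 -> 3-dim code
W_dec = rng.standard_normal((3, 12)) * 0.1   # decoder: 3 -> 12
lr = 0.05                                    # learning rate

def reconstruct(X):
    code = np.tanh(X @ W_enc)                # low-dimensional representation
    return code, code @ W_dec                # code and reconstruction

code, X_hat = reconstruct(X)
loss_before = np.mean((X_hat - X) ** 2)      # reconstruction error

# One gradient step on the reconstruction loss, no labels needed.
grad_out = 2 * (X_hat - X) / X.size
grad_W_dec = code.T @ grad_out
grad_code = grad_out @ W_dec.T * (1 - code ** 2)
grad_W_enc = X.T @ grad_code
W_enc -= lr * grad_W_enc
W_dec -= lr * grad_W_dec
_, X_hat = reconstruct(X)
loss_after = np.mean((X_hat - X) ** 2)       # error shrinks after the step
```

Because the 3-dimensional code must suffice to rebuild all 12 features, the encoder is forced to discover structure in the data without any labels.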
11:15-11:30 Coffee Break
11:30-12:15 Guest Speaker, Nigel Cannings, CTO, IntelligentVoice - Neil Glackin, Research Scientist, IntelligentVoice
12:15-13:00 Lunch
13:00-14:00 Frontiers in deep learning (Ole Winther)
Variational auto-encoders, extensions to semi-supervised learning and to multi-pass models with attention. Image and text applications.
14:00-15:00 Hands-on lab 7: Catch up with previous labs (Pyry Takala, Ole Winther & Søren Kaae Sønderby)
Kaggle challenge > Work on own data
15:00-15:15 Coffee Break
15:30-16:45 Hands-on lab 8: Catch up with previous labs (Pyry Takala, Ole Winther & Søren Kaae Sønderby)
Kaggle challenge > Work on own data
17:15-17:30 Closing remarks and feedback
18:00-19:00 Social and Networking
Instructors and Lab Trainers:
Ole Winther, Professor in Data Science and Complexity at Cognitive Systems, DTU
Ole Winther received a Ph.D. degree from the Niels Bohr Institute at the University of Copenhagen (KU) in 1998. From 1998 to 2001 he was a postdoc at Lund University, Sweden, and at the Center for Biological Sequence Analysis, Technical University of Denmark (DTU); from 2001 an associate professor at DTU; and from 2006, part time, a group leader in gene regulation at Bioinformatics, KU. Currently, Ole Winther is a professor in Data Science and Complexity at Cognitive Systems, DTU. His main research area is machine learning, which combines statistical modelling and artificial intelligence, draws inspiration from neuroscience for model building, and often deals with large amounts of data. He also works with bioinformatics (computer-driven large-scale biological data analysis), information retrieval, condition monitoring and recommender systems.
Søren Kaae Sønderby, Ph.D. student, University of Copenhagen
Søren Kaae Sønderby is a Ph.D. student at the University of Copenhagen, supervised by Ole Winther. He works on recurrent neural networks and variational methods, and is a co-developer of Lasagne.
Pyry Takala, Cofounder of True AI Ltd. and PhD student at Aalto University, Finland
Pyry is the co-founder of True AI, where he develops deep learning for dialogue modelling. His PhD work focuses on sequence modelling, in particular applying deep learning to textual data. He has previously worked on deep learning at Amazon, and his publications have been featured in, for instance, MIT Technology Review and The New York Times. Pyry today programs mostly in Torch and TensorFlow.
Abhishek Aggarwal, Cofounder of True AI Ltd.
Abhishek is the co-founder of True AI, where he develops deep learning for dialogue modelling. He has co-authored work with Yoshua Bengio, holds a Master's in Machine Learning from UCL, and originally graduated from IIT. He has previously worked as a data scientist and machine learning researcher. Abhishek today programs mostly in Torch and TensorFlow.