With the steady improvement in the performance and availability of GPUs, and the growth of available training data, neural networks have made a remarkable comeback. Rebranded as “deep learning”, neural networks with multiple hidden layers have made tremendous progress on tasks such as image recognition and machine translation, which until recently were deemed intractable but are now, in some cases, considered all but solved.

Our introductory neural networks class is designed as a first look at the topic for software engineers who do not necessarily have a strong math background. It focuses on building intuition rather than mathematical rigor, and involves more programming than a typical introductory course on neural networks.

The course provides a theoretical exposition of neural networks alongside a simple programming challenge: solving the MNIST handwritten digit classification problem without the use of third-party machine learning libraries. Once we have solved the problem from scratch, we will also introduce libraries such as TensorFlow and Theano to experience solving the same problem with more powerful tools. Topics covered include:

- Linear algebra and calculus refresher
- The neural network model
- Understanding gradient descent
- Understanding backpropagation
- The practice of training neural networks: hyperparameter selection, activation functions, initial weights, batch normalization and dropout
- Overview of popular libraries such as TensorFlow and Theano
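To give a flavor of the “from scratch” approach, here is a toy sketch (not course material) of gradient descent, the core training algorithm listed above. It fits a single weight w so that the prediction w * x matches a target y, using only the Python standard library; all names and values here are illustrative.

```python
def loss(w, x, y):
    """Squared error of the prediction w * x against the target y."""
    return (w * x - y) ** 2

def gradient(w, x, y):
    """Derivative of the loss with respect to w: d/dw (w*x - y)^2."""
    return 2 * x * (w * x - y)

def train(x, y, w=0.0, learning_rate=0.1, steps=100):
    """Repeatedly nudge w against the gradient to shrink the loss."""
    for _ in range(steps):
        w -= learning_rate * gradient(w, x, y)
    return w

# With x = 2.0 and y = 6.0, the best weight is 3.0 (since 3.0 * 2.0 == 6.0),
# and gradient descent converges to it.
w = train(x=2.0, y=6.0)
```

Training a real network follows the same loop, except the “weight” is millions of parameters and the gradient is computed by backpropagation; the course builds up to that step by step.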

This course presumes some familiarity with linear algebra and calculus, as well as competency with Python. We can provide refresher material for both the math and Python prerequisites if required.

This course is next scheduled to run in April 2017. Apply now to be considered; exact dates and times will be finalized based on participant availability. Classes will be approximately nine hours per week for three weeks, for a total price of $1,800.

hello@bradfieldcs.com

San Francisco, California

© 2016 Bradfield School of Computer Science