Neural networks and learning


January 13th Jeonghoon presenting Chapter 1 of Nielsen.
January 24th Eli and Kayden presenting Chapters 2 and 3 of Nielsen.
January 27th Qi and Avishai presenting Chapters 4 and 5 of Nielsen.
February 3rd Vinh and Angier presenting Chapter 6 of Nielsen.
February 10th Kayden presenting autoencoders; Eli presenting Hopfield networks.
February 21st Vinh presenting Boltzmann machines; Angier presenting VAEs; Jeonghoon presenting GANs.
February 24th Avishai presenting neural nets as dynamical systems; Qi presenting reservoir computing.
March 9th Kayden presenting BPTT; Angier presenting FORCE learning.
Week of March 16th Jeonghoon and Qi presenting reinforcement learning.

Useful links and reading material

Feedforward networks:


Optimization methods:

Hopfield networks:
Chapter 42 in MacKay's book:

Neural networks as dynamical systems:
There are a variety of directions you could go here. I've put some possibilities below; feel free to skim through them and select a subset that seems of interest to the group.
Here's a nice review of dynamics in neural networks:
Chapter 7 in Dayan & Abbott's book Theoretical Neuroscience:
The Wilson-Cowan model:
Attractor networks:
Chaos in neural networks:

Reservoir computing:
The two main early papers:
Some reviews:
Reservoir computing and interesting dynamics:

Training recurrent nets:
BPTT and vanishing/exploding gradients:
FORCE learning:

Generative models:
Chapter 20 of the Goodfellow book discusses Boltzmann machines (including restricted Boltzmann machines), variational autoencoders, and GANs, and is a good resource. There's a lot of material there, so feel free to pick and choose:

Boltzmann machines:
Chapter 43 in MacKay's book:
Introductory articles:

Variational autoencoder:
Tutorial paper:
Simpler overview:
Original papers:

GANs:
Tutorial paper:
Simpler overview:
Original paper:

Suggested topics

0) Background / general concepts in learning:
Difference between unsupervised, supervised and reinforcement learning.
Generalization, overfitting, bias-variance tradeoff
What is a neural network?
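To make the "What is a neural network?" item concrete, here is a minimal sketch of the forward pass of a two-layer feedforward network: an alternation of affine maps and elementwise nonlinearities. The layer sizes, random weights, and choice of sigmoid are illustrative assumptions, not from any assigned reading.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # hidden weights: 3 inputs -> 4 hidden units
b1 = np.zeros(4)
W2 = rng.standard_normal((2, 4))   # output weights: 4 hidden -> 2 outputs
b2 = np.zeros(2)

x = np.array([0.5, -1.0, 2.0])     # one input vector
h = sigmoid(W1 @ x + b1)           # hidden-layer activations
y = sigmoid(W2 @ h + b2)           # network output
print(y.shape)                     # prints (2,)
```

Everything that follows in the course (training, depth, recurrence) is a variation on this basic map from inputs to outputs.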

1) Feedforward networks: 
Perceptrons and other models for single neurons.
Training networks with gradient descent. Structure of optimization landscapes for neural nets.
Multilayer feedforward networks. Backpropagation. 
Convolutional networks and other “deep” architectures. Why does depth help?
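The gradient-descent and backpropagation items above can be sketched end to end on a toy problem, in the setting of Nielsen's early chapters. This is a hedged illustration: the XOR data, layer sizes, learning rate, and step count are all arbitrary choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)
lr = 1.0

def forward():
    H = sigmoid(X @ W1 + b1)          # hidden activations, one row per example
    P = sigmoid(H @ W2 + b2)          # network predictions
    return H, P

loss_start = np.mean((forward()[1] - Y) ** 2)
for step in range(2000):
    H, P = forward()
    # backward pass: chain rule through the quadratic cost C = mean((P - Y)^2)
    dZ2 = 2 * (P - Y) / len(X) * P * (1 - P)
    dZ1 = (dZ2 @ W2.T) * H * (1 - H)
    # plain gradient-descent parameter updates
    W2 -= lr * H.T @ dZ2
    b2 -= lr * dZ2.sum(axis=0)
    W1 -= lr * X.T @ dZ1
    b1 -= lr * dZ1.sum(axis=0)
loss_end = np.mean((forward()[1] - Y) ** 2)
print(loss_start, "->", loss_end)     # cost should drop over training
```

The backward pass is just the two-layer case of the general backpropagation recursion; with more layers, the `dZ` computation repeats once per layer.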

2) Recurrent networks:
Hopfield networks, and intro to Hebbian/correlation-based learning.
Continuous time recurrent architectures. 
Neural networks as dynamical systems. Computing with discrete and continuous attractors, sequences, and chaos. 
Random networks and reservoir computing.
Training recurrent networks. FORCE learning. Backprop through time.
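As one concrete instance of the recurrent topics above, here is a minimal Hopfield network (cf. MacKay's chapter): store binary patterns with the Hebbian outer-product rule, then recover one from a corrupted cue by asynchronous updates. The network size, number of patterns, and corruption level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))   # three random +/-1 patterns

# Hebbian/correlation-based weights, zero self-coupling
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

# corrupt 10 bits of pattern 0, then let the dynamics clean it up
state = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
state[flip] *= -1
for sweep in range(5):                        # a few asynchronous sweeps
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

overlap = np.mean(state == patterns[0])
print(overlap)                                # fraction of bits recovered
```

Since the load (3 patterns in 100 units) is far below capacity, recall of the stored pattern is typically essentially perfect.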

3) Generative models:
Boltzmann machines and restricted Boltzmann machines
Variational autoencoders.
GANs (time permitting)
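For the Boltzmann-machine item, here is a sketch of the block Gibbs sampling step that underlies RBM training (Goodfellow's Chapter 20): given the visible units, sample the hidden units, then reconstruct the visibles. The sizes and random weights are illustrative, and a real contrastive-divergence learner would wrap this step in a weight-update loop.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
n_vis, n_hid = 6, 4
W = 0.1 * rng.standard_normal((n_vis, n_hid))
a, b = np.zeros(n_vis), np.zeros(n_hid)           # visible / hidden biases

v = rng.integers(0, 2, n_vis)                     # a binary visible vector
p_h = sigmoid(v @ W + b)                          # P(h_j = 1 | v)
h = (rng.random(n_hid) < p_h).astype(int)         # sample the hidden units
p_v = sigmoid(h @ W.T + a)                        # P(v_i = 1 | h)
v_recon = (rng.random(n_vis) < p_v).astype(int)   # reconstructed visibles

print(v_recon.shape)
```

The bipartite structure is what makes this a *restricted* Boltzmann machine: with no visible-visible or hidden-hidden couplings, each whole layer can be sampled in one conditionally independent block.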

4) Reinforcement learning (time permitting):
Brief overview of framework and basic algorithms (MDPs; policies and value functions; model-based vs. model-free; dynamic programming, Monte Carlo methods, and TD learning)
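The TD-learning item above can be illustrated with tabular TD(0) policy evaluation on a tiny deterministic chain MDP: states 0..3, a fixed "move right" policy, and reward 1 on entering the terminal state 3. The chain length, discount, step size, and episode count are all arbitrary choices for the sketch.

```python
import numpy as np

n_states, gamma, alpha = 4, 0.9, 0.1
V = np.zeros(n_states)            # value estimates; state 3 is terminal

for episode in range(500):
    s = 0
    while s != 3:
        s_next = s + 1                         # fixed "move right" policy
        r = 1.0 if s_next == 3 else 0.0        # reward only on termination
        # TD(0): nudge V(s) toward the bootstrapped target r + gamma * V(s')
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print(np.round(V, 2))    # converges toward [gamma**2, gamma, 1, 0]
```

The same update, with the target replaced by a max over actions, gives Q-learning; Monte Carlo methods instead wait for the full return, and dynamic programming uses the known model rather than sampled transitions.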


Meeting Mondays 1–3pm in MSB 2240. Exceptions are January 13th, when we meet 12–2pm, and the weeks of 1/20 and 2/17, when we reschedule because of the Monday holidays.
2 units for doing the reading, implementing code, participating in the discussions, and presenting; 3 units for also doing a small final project.
The course CRN is 62897.