January 13th Jeonghoon presenting Chapter 1 of Nielsen.
January 24th Eli and Kayden presenting Chapters 2 and 3 of Nielsen.
January 27th Qi and Avishai presenting Chapters 4 and 5 of Nielsen.
February 3rd Vinh and Angier presenting Chapter 6 of Nielsen.
February 10th Kayden presenting autoencoders; Eli presenting Hopfield networks.
February 21st Vinh presenting Boltzmann machines; Angier presenting VAEs; Jeonghoon presenting GANs.
February 24th Avishai presenting neural nets as dynamical systems; Qi presenting reservoir computing.
March 9th Kayden presenting BPTT; Angier presenting FORCE learning.
Week of March 16th Jeonghoon and Qi presenting reinforcement learning.
Useful links and reading material
Chapter 42 in Mackay's book: http://www.inference.org.uk/mackay/itprnn/book.html
Neural networks as dynamical systems:
There are a variety of directions you could go in. I've put some possibilities below; feel free to skim through them and select a subset that interests the group.
Here's a nice review of dynamics in neural networks: https://www.annualreviews.org/doi/pdf/10.1146/annurev.neuro.28.061604.135637
Chapter 7 in Dayan & Abbott's book Theoretical Neuroscience: http://www.gatsby.ucl.ac.uk/~lmate/biblio/dayanabbott.pdf
The Wilson-Cowan model: https://www.sciencedirect.com/science/article/pii/S0006349572860685
Chaos in neural networks:
The two main early papers:
Reservoir computing and interesting dynamics:
Training recurrent nets:
BPTT and vanishing/exploding gradients: http://proceedings.mlr.press/v28/pascanu13.pdf
FORCE learning: https://www.sciencedirect.com/science/article/pii/S0896627309005479
Chapter 20 of the Goodfellow book discusses Boltzmann machines (including restricted Boltzmann machines), variational autoencoders, and GANs, and is a good resource. There's a lot of material in there, so feel free to pick and choose:
Boltzmann machines:
Chapter 43 in Mackay's book: http://www.inference.org.uk/mackay/itprnn/book.html
Variational autoencoders:
Tutorial paper: https://arxiv.org/pdf/1606.05908.pdf
Simpler overview: https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73
Generative adversarial networks:
Tutorial paper: https://arxiv.org/pdf/1701.00160.pdf
Simpler overview: https://towardsdatascience.com/understanding-generative-adversarial-networks-gans-cd6e4651a29
Original paper: https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf
0) Background / general concepts in learning:
Difference between unsupervised, supervised and reinforcement learning.
Generalization, overfitting, bias-variance tradeoff
What is a neural network?
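As a warm-up for the generalization/overfitting discussion, here is a small toy sketch (my own illustration, not code from the readings): fitting polynomials of increasing degree to noisy samples of a sine shows training error shrinking with model complexity while held-out error eventually grows.

```python
import numpy as np

# Toy bias-variance illustration: fit polynomials of degree 1, 3, and 9 to
# ten noisy samples of a sine, and compare training vs. held-out error.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
x_test = np.linspace(0.0, 1.0, 100)
y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.normal(size=x_train.size)
y_test = np.sin(2 * np.pi * x_test)  # noise-free targets for evaluation

def fit_mse(degree):
    """Least-squares polynomial fit; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

errors = {d: fit_mse(d) for d in (1, 3, 9)}
```

Degree 9 interpolates the ten training points (near-zero training error) while chasing the noise; degree 3 strikes a better balance for this target.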
1) Feedforward networks:
Perceptrons and other models for single neurons.
Training networks with gradient descent. Structure of optimization landscapes for neural nets.
Multilayer feedforward networks. Backpropagation.
Convolutional networks and other “deep” architectures. Why does depth help?
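To make the gradient-descent and backpropagation items concrete, here is a minimal sketch (my own toy example, not from the readings) of a one-hidden-layer sigmoid network trained on XOR with hand-derived gradients for a cross-entropy loss:

```python
import numpy as np

# One-hidden-layer network learning XOR by full-batch gradient descent,
# with backprop written out by hand (cross-entropy loss at the output).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: for sigmoid + cross-entropy, dL/dz_out = out - y
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)   # chain rule through the hidden layer
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
```

XOR is the classic example of a function a single-layer perceptron cannot represent but one hidden layer can.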
2) Recurrent networks:
Hopfield networks, and intro to Hebbian/correlation-based learning.
Continuous time recurrent architectures.
Neural networks as dynamical systems. Computing with discrete and continuous attractors, sequences, and chaos.
Random networks and reservoir computing.
Training recurrent networks. FORCE learning. Backprop through time.
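A minimal sketch of the Hopfield-network item above (an illustration I wrote, not code from the readings): store a few random patterns with the Hebbian outer-product rule, then recover one from a corrupted cue via asynchronous updates.

```python
import numpy as np

# Hopfield network: Hebbian (outer-product) storage, asynchronous recall.
rng = np.random.default_rng(1)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))  # three random +/-1 patterns

# Hebbian storage: W = (1/N) * sum_p x_p x_p^T, with zero self-connections.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(state, steps=10 * N):
    """Asynchronous dynamics: update one randomly chosen unit at a time."""
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(N)
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt a stored pattern by flipping a few bits, then let the dynamics
# relax back toward the stored attractor.
noisy = patterns[0].copy()
flipped = rng.choice(N, size=6, replace=False)
noisy[flipped] *= -1
recovered = recall(noisy)
overlap = (recovered == patterns[0]).mean()
```

At this low loading (3 patterns in 64 units), the stored patterns are stable fixed points and the noisy cue falls well inside the basin of attraction.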
3) Generative models:
Boltzmann machines and restricted Boltzmann machines
Variational autoencoders
GANs (time permitting)
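For a feel of RBM training, here is a sketch of a single contrastive-divergence (CD-1) update on a made-up binary input (an illustrative toy, not code from the readings; the sizes and learning rate are arbitrary):

```python
import numpy as np

# One CD-1 update for a small restricted Boltzmann machine.
rng = np.random.default_rng(2)
n_vis, n_hid, lr = 6, 4, 0.1
W = rng.normal(scale=0.01, size=(n_vis, n_hid))
a = np.zeros(n_vis)  # visible biases
b = np.zeros(n_hid)  # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

v0 = rng.integers(0, 2, size=n_vis).astype(float)  # a "data" vector
# Positive phase: sample hidden units conditioned on the data.
ph0 = sigmoid(v0 @ W + b)
h0 = (rng.random(n_hid) < ph0).astype(float)
# Negative phase: one Gibbs step back to the visibles and hiddens.
pv1 = sigmoid(h0 @ W.T + a)
v1 = (rng.random(n_vis) < pv1).astype(float)
ph1 = sigmoid(v1 @ W + b)
# CD-1 update: data correlations minus reconstruction correlations.
W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
a += lr * (v0 - v1)
b += lr * (ph0 - ph1)
```

In practice this update is applied over minibatches for many epochs; the single step here just shows the two phases.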
4) Reinforcement learning (time permitting):
Brief overview of framework and basic algorithms (MDPs; policies and value functions; model-based vs. model-free; dynamic programming, Monte Carlo methods, and TD learning)
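The dynamic-programming piece can be sketched in a few lines. Below is value iteration on a made-up deterministic 5-state chain (my own toy MDP, not an example from the readings): moving right from the last non-terminal state pays reward 1.

```python
import numpy as np

# Value iteration on a toy 1-D chain: states 0..4, actions left/right,
# reward 1 on entering the terminal state 4, discount gamma = 0.9.
n_states, gamma = 5, 0.9
V = np.zeros(n_states)

def step(s, a):
    """Deterministic dynamics; a is -1 (left) or +1 (right)."""
    s2 = min(max(s + a, 0), n_states - 1)
    r = 1.0 if s2 == n_states - 1 and s != n_states - 1 else 0.0
    return s2, r

# Repeatedly apply the Bellman optimality backup until convergence.
for _ in range(100):
    for s in range(n_states - 1):  # state 4 is terminal, V[4] stays 0
        V[s] = max(r + gamma * V[s2]
                   for s2, r in (step(s, a) for a in (-1, 1)))
```

The optimal values are just discounted distances to the goal: V[3] = 1, V[2] = 0.9, V[1] = 0.81, V[0] = 0.729.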
Meeting Mondays 1–3pm in MSB 2240. Exceptions are January 13th, when we meet 12–2pm, and the weeks of 1/20 and 2/17, when we reschedule because of the Monday holidays.
2 units for doing the reading, implementing code, participating in the discussions, and presenting; 3 units for also completing a small final project.
The course CRN is 62897.