Schedule
January 13th: Jeonghoon presenting Chapter 1 of Nielsen.
January 24th: Eli and Kayden presenting Chapters 2 and 3 of Nielsen.
January 27th: Qi and Avishai presenting Chapters 4 and 5 of Nielsen.
February 3rd: Vinh and Angier presenting Chapter 6 of Nielsen.
February 10th: Kayden presenting autoencoders; Eli presenting Hopfield networks.
February 21st: Vinh presenting Boltzmann machines; Angier presenting VAEs; Jeonghoon presenting GANs.
February 24th: Avishai presenting neural nets as dynamical systems; Qi presenting reservoir computing.
March 9th: Kayden presenting BPTT; Angier presenting FORCE learning.
Week of March 16th: Jeonghoon and Qi presenting reinforcement learning.
Useful links and reading material
Feedforward networks:
http://neuralnetworksanddeeplearning.com/
https://www.deeplearningbook.org
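Since part of the course credit is for implementing code, here's a minimal NumPy sketch of a one-hidden-layer sigmoid network trained by backpropagation on XOR, in the spirit of Nielsen's opening chapters. The hidden-layer size, learning rate, and step count are illustrative choices, not values from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: four input points (columns of X) with binary targets.
X = np.array([[0, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)   # shape (2, 4)
Y = np.array([[0, 1, 1, 0]], dtype=float)   # shape (1, 4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 sigmoid units (an arbitrary illustrative size).
W1 = rng.normal(size=(8, 2)); b1 = np.zeros((8, 1))
W2 = rng.normal(size=(1, 8)); b2 = np.zeros((1, 1))

lr = 2.0
for step in range(5000):
    # Forward pass.
    a1 = sigmoid(W1 @ X + b1)
    a2 = sigmoid(W2 @ a1 + b2)
    # Backward pass for the quadratic cost 0.5 * ||a2 - Y||^2.
    delta2 = (a2 - Y) * a2 * (1 - a2)          # output-layer error
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)   # backpropagated error
    W2 -= lr * delta2 @ a1.T; b2 -= lr * delta2.sum(axis=1, keepdims=True)
    W1 -= lr * delta1 @ X.T;  b1 -= lr * delta1.sum(axis=1, keepdims=True)

# Should approach [[0, 1, 1, 0]]; an unlucky initialization may need a re-run.
print(np.round(a2, 2))
```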
Autoencoders:
https://www.deeplearningbook.org/contents/autoencoders.html
https://www.jeremyjordan.me/autoencoders/
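As a quick companion to these readings, here's a minimal sketch of a linear autoencoder trained by gradient descent on synthetic data that lives near a low-dimensional subspace. All sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data near a 2-D subspace of R^10 (illustrative sizes).
latents = rng.normal(size=(2, 500))
mixing = rng.normal(size=(10, 2))
X = mixing @ latents + 0.05 * rng.normal(size=(10, 500))

# Linear autoencoder: a 2-unit bottleneck between 10-D input and output.
W_enc = 0.1 * rng.normal(size=(2, 10))
W_dec = 0.1 * rng.normal(size=(10, 2))

lr, m = 1e-2, X.shape[1]
for step in range(2000):
    Z = W_enc @ X             # encode
    X_hat = W_dec @ Z         # decode
    err = X_hat - X           # reconstruction error
    # Gradients of the mean squared reconstruction error.
    W_dec -= lr * err @ Z.T / m
    W_enc -= lr * W_dec.T @ err @ X.T / m
    if step % 500 == 0:
        print(step, np.mean(err ** 2))

# The error should fall toward the noise floor: the bottleneck forces the
# network to discover the 2-D structure in the data (closely related to PCA).
```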
Optimization methods:
https://distill.pub/2017/momentum
http://blog.mrtz.org/2013/09/07/the-zen-of-gradient-descent.html
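The momentum method from the Distill article above fits in a few lines. Here's a sketch comparing plain gradient descent with momentum on an ill-conditioned quadratic; the curvatures, step size, and momentum coefficient are illustrative.

```python
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 * x^T A x with curvatures 1 and 100.
A = np.diag([1.0, 100.0])

def grad(x):
    return A @ x

x_gd = np.array([1.0, 1.0])   # plain gradient descent iterate
x_mom = x_gd.copy()           # momentum iterate
v = np.zeros(2)               # velocity
lr, beta = 0.01, 0.9          # step size, momentum coefficient

for step in range(200):
    x_gd = x_gd - lr * grad(x_gd)
    v = beta * v - lr * grad(x_mom)   # accumulate a decaying sum of gradients
    x_mom = x_mom + v

# Momentum makes much faster progress along the shallow (curvature-1)
# direction, which is the point of the Distill visualization.
print("plain GD:", x_gd)
print("momentum:", x_mom)
```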
Hopfield networks:
https://en.wikipedia.org/wiki/Hopfield_network
https://neuronaldynamics.epfl.ch/online/Ch17.S2.html
Chapter 42 in MacKay's book: http://www.inference.org.uk/mackay/itprnn/book.html
https://www.pnas.org/content/pnas/79/8/2554.full.pdf
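Here's a minimal sketch of the setup in the Hopfield (1982) paper above: random ±1 patterns stored with the Hebbian rule, then recalled from a corrupted cue by asynchronous updates. Network size and pattern count are illustrative (well below the ~0.14N capacity).

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 100, 5   # network size, number of stored patterns (illustrative)

# Hebbian storage: W = (1/N) * sum of outer products, no self-connections.
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

# Start from pattern 0 with 20% of its bits flipped.
state = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)
state[flip] *= -1

# Asynchronous dynamics: sweep units in random order, s_i = sign(W_i . s).
for sweep in range(5):
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

# Overlap of 1.0 means perfect recall of the stored pattern.
print("overlap with stored pattern:", (state @ patterns[0]) / N)
```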
Neural networks as dynamical systems:
There are a variety of directions you could go in. I've put some possibilities below; feel free to skim through them and select a subset that interests the group.
Here's a nice review of dynamics in neural networks: https://www.annualreviews.org/doi/pdf/10.1146/annurev.neuro.28.061604.135637
Chapter 7 in Dayan & Abbott's book Theoretical Neuroscience: http://www.gatsby.ucl.ac.uk/~lmate/biblio/dayanabbott.pdf
The Wilson / Cowan model: https://www.sciencedirect.com/science/article/pii/S0006349572860685
Attractor networks:
http://www.scholarpedia.org/article/Attractor_network
http://www.scholarpedia.org/article/Continuous_attractor_network
Chaos in neural networks:
https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.61.259
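The PRL above (Sompolinsky, Crisanti & Sommers) shows that a random rate network dx/dt = -x + J tanh(x) becomes chaotic once the coupling gain g exceeds 1. Here's a minimal Euler-integration sketch of that transition via sensitivity to initial conditions; the network size, gain, and step size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
N, g, dt = 200, 1.5, 0.05   # size, gain, Euler step (illustrative)

# Couplings drawn with variance g^2 / N, as in the mean-field analysis.
J = g * rng.normal(size=(N, N)) / np.sqrt(N)

x = 0.5 * rng.normal(size=N)
x_pert = x + 1e-6 * rng.normal(size=N)   # a nearby initial condition

# Integrate dx/dt = -x + J tanh(x) for both trajectories.
for t in range(4000):
    x += dt * (-x + J @ np.tanh(x))
    x_pert += dt * (-x_pert + J @ np.tanh(x_pert))

# For g > 1 the tiny perturbation is amplified to order one (chaos);
# for g < 1 both trajectories decay to the zero fixed point instead.
print("separation:", np.linalg.norm(x - x_pert))
```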
Reservoir computing:
The two main early papers:
https://www.mitpressjournals.org/doi/abs/10.1162/089976602760407955
https://science.sciencemag.org/content/304/5667/78
Some reviews:
https://biblio.ugent.be/publication/416607/file/447949
https://link.springer.com/article/10.1007/s13218-012-0204-5
http://www.scholarpedia.org/article/Echo_state_network
Reservoir computing and interesting dynamics:
https://igi-web.tugraz.at/people/maass/psfiles/eoc-nc-preprint.pdf
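If you'd like to experiment, here's a minimal echo state network in the spirit of the papers above: a fixed random recurrent reservoir in which only a linear readout is trained, here by ridge regression on a toy one-step prediction task. All sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, washout = 300, 2000, 200   # reservoir size, samples, discarded steps

# Random reservoir rescaled to spectral radius 0.9 (a common heuristic for
# the echo state property), plus random input weights.
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=N)

# Toy task: from input u(t) = sin(t/4), predict u(t+1).
u = np.sin(np.arange(T + 1) / 4.0)

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])   # the reservoir itself is never trained
    states[t] = x

# Ridge regression for the readout (regularization strength illustrative).
S, y = states[washout:], u[washout + 1 : T + 1]
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)

print("readout MSE:", np.mean((S @ w_out - y) ** 2))  # should be tiny
```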
Training recurrent nets:
BPTT and vanishing/exploding gradients: http://proceedings.mlr.press/v28/pascanu13.pdf
FORCE learning: https://www.sciencedirect.com/science/article/pii/S0896627309005479
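Here's a minimal sketch of FORCE learning in the style of the Sussillo & Abbott paper: a recursive-least-squares (RLS) update of a linear readout whose output is fed back into a chaotic rate network, keeping the output error small from the start. The parameter values below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)
N, g, dt, alpha = 300, 1.5, 0.1, 1.0   # illustrative parameters

J = g * rng.normal(size=(N, N)) / np.sqrt(N)   # chaotic reservoir (g > 1)
w_fb = rng.uniform(-1, 1, size=N)              # feedback weights for the output
w = np.zeros(N)                                 # trained readout weights
P = np.eye(N) / alpha                           # RLS inverse-correlation matrix

x = 0.5 * rng.normal(size=N)
f = lambda t: np.sin(0.02 * t)                  # target function to produce

for t in range(20000):
    r = np.tanh(x)
    z = w @ r                                   # network output
    x += dt * (-x + J @ r + w_fb * z)           # dynamics with output feedback
    if t % 2 == 0:                              # RLS update every other step
        k = P @ r
        c = 1.0 / (1.0 + r @ k)
        P -= c * np.outer(k, k)
        w -= c * (z - f(t)) * k                 # push the output error to zero

# After training the output should track the target, and the weight updates
# shrink toward zero; a careful implementation follows the paper's parameters.
print("final error:", abs(z - f(t)))
```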
Generative models:
Chapter 20 of the Goodfellow book discusses Boltzmann machines (and restricted Boltzmann machines), variational autoencoders, and GANs, and is a good resource. There's a lot of material in there, so feel free to pick and choose:
https://www.deeplearningbook.org/contents/generative_models.html
Boltzmann machines:
Chapter 43 in MacKay's book: http://www.inference.org.uk/mackay/itprnn/book.html
Introductory articles:
http://www.scholarpedia.org/article/Boltzmann_machine
https://en.wikipedia.org/wiki/Boltzmann_machine
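As a companion to these articles, here's a minimal restricted Boltzmann machine trained with one-step contrastive divergence (CD-1) on toy binary data. The layer sizes, data, and learning rate are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n_vis, n_hid, lr = 20, 10, 0.05   # illustrative sizes

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary data: two prototype patterns with 5% of bits flipped.
proto = rng.integers(0, 2, size=(2, n_vis))
data = proto[rng.integers(0, 2, size=200)]
data = np.abs(data - (rng.random(data.shape) < 0.05))

W = 0.01 * rng.normal(size=(n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)

for epoch in range(200):
    v0 = data
    ph0 = sigmoid(v0 @ W + b_hid)                     # p(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden units
    pv1 = sigmoid(h0 @ W.T + b_vis)                   # one Gibbs step back
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_hid)
    # CD-1 gradient estimate: data statistics minus reconstruction statistics.
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)
    b_vis += lr * (v0 - v1).mean(axis=0)
    b_hid += lr * (ph0 - ph1).mean(axis=0)

print("reconstruction error:", np.mean((data - pv1) ** 2))
```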
Variational autoencoder:
Tutorial paper: https://arxiv.org/pdf/1606.05908.pdf
Simpler overview: https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73
Original papers:
https://arxiv.org/pdf/1312.6114.pdf
https://arxiv.org/pdf/1401.4082.pdf
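The two pieces of these papers that are easiest to get lost in are the reparameterization trick and the closed-form Gaussian KL term, so here's a short sketch of just those, with the encoder and decoder networks replaced by placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-ins for encoder outputs on a batch of 32 items with a 4-D latent
# space (in a real VAE these come from a network applied to the data).
mu = rng.normal(size=(32, 4))       # posterior means
logvar = rng.normal(size=(32, 4))   # log posterior variances

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I), so the
# sample is a deterministic, differentiable function of mu and logvar.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * logvar) * eps

# Closed-form KL divergence from N(mu, sigma^2) to the N(0, I) prior,
# summed over latent dimensions.
kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1)
print("mean KL term:", kl.mean())

# The ELBO to maximize is E[log p(x|z)] - KL, where the first term is a
# reconstruction log-likelihood computed from the decoder applied to z.
```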
GANs:
Tutorial paper: https://arxiv.org/pdf/1701.00160.pdf
Simpler overview: https://towardsdatascience.com/understanding-generative-adversarial-networks-gans-cd6e4651a29
Original paper: https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf
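And here's a minimal sketch of the adversarial game from the original paper on a 1-D toy problem, using the non-saturating generator loss that the tutorial discusses. Everything here (the data distribution, the affine generator, the quadratic-logit discriminator, the learning rates) is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(8)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Real data ~ N(4, 0.5); generator G(z) = a*z + b; discriminator logit
# s(x) = w1*x + w2*x^2 + c (the x^2 feature lets it sense variance too).
a, b = 1.0, 0.0
w1, w2, c = 0.0, 0.0, 0.0
lr, batch = 0.01, 128

for step in range(20000):
    x_real = 4.0 + 0.5 * rng.normal(size=batch)
    z = rng.normal(size=batch)
    x_fake = a * z + b

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w1 * x_real + w2 * x_real**2 + c)
    d_fake = sigmoid(w1 * x_fake + w2 * x_fake**2 + c)
    w1 += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    w2 += lr * (np.mean((1 - d_real) * x_real**2) - np.mean(d_fake * x_fake**2))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator descent on the non-saturating loss -log D(fake).
    d_fake = sigmoid(w1 * x_fake + w2 * x_fake**2 + c)
    grad_x = -(1 - d_fake) * (w1 + 2 * w2 * x_fake)  # d(-log D)/dx at x_fake
    a -= lr * np.mean(grad_x * z)   # chain rule: dx_fake/da = z
    b -= lr * np.mean(grad_x)       # chain rule: dx_fake/db = 1

# GAN training is famously touchy; on a good run the generator's mean and
# std (b and |a|) end up near the real values 4.0 and 0.5.
print("generator mean %.2f, std %.2f" % (b, abs(a)))
```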
Suggested topics
0) Background / general concepts in learning:
Difference between unsupervised, supervised and reinforcement learning.
Generalization, overfitting, and the bias-variance tradeoff.
What is a neural network?
1) Feedforward networks:
Perceptrons and other models for single neurons.
Training networks with gradient descent. Structure of optimization landscapes for neural nets.
Multilayer feedforward networks. Backpropagation.
Convolutional networks and other “deep” architectures. Why does depth help?
Autoencoders.
2) Recurrent networks:
Hopfield networks, and intro to Hebbian/correlation-based learning.
Continuous time recurrent architectures.
Neural networks as dynamical systems. Computing with discrete and continuous attractors, sequences, and chaos.
Random networks and reservoir computing.
Training recurrent networks: FORCE learning and backpropagation through time.
3) Generative models:
Boltzmann machines and restricted Boltzmann machines.
Variational autoencoders.
GANs (time permitting).
4) Reinforcement learning (time permitting):
Brief overview of the framework and basic algorithms (MDPs; policies and value functions; model-based vs. model-free; dynamic programming, Monte Carlo methods, and TD learning).
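As a warm-up for that discussion, here's a minimal tabular Q-learning sketch (a TD method) on a toy chain MDP; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# 5-state chain: actions 0 = left, 1 = right; reward 1 at the right end.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection.
        a = rng.integers(n_actions) if rng.random() < eps else np.argmax(Q[s])
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # TD update toward the bootstrapped target r + gamma * max Q(s', .).
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

# The greedy policy should be "always go right", with values discounted by
# gamma per step of distance from the goal.
print(np.argmax(Q, axis=1))      # expect 1s in the non-terminal states
print(Q.max(axis=1).round(2))
```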
Logistics
We meet Mondays 1–3pm in MSB 2240. Exceptions are January 13th, when we meet 12–2pm, and the weeks of 1/20 and 2/17, when we will reschedule because of the Monday holidays.
2 units for doing the reading, implementing code, participating in the discussions, and presenting. 3 units for doing an additional small final project.
The course CRN is 62897.