Learning Deep Learning
[NOTICE] I'm in the middle of adding key references for NLP and Speech Recognition.

There are lots of awesome reading lists and posts that summarize materials related to Deep Learning. So why would I commit another one? Well, the primary objective is to develop a complete reading list that allows readers to build a solid academic and practical background in Deep Learning. This list was developed while I was preparing my Deep Learning workshop. My research is related to Deep Neural Networks (DNNs) in general, so this post tends to summarize contributions in DNNs rather than generative models.

For Novice

If you have no idea about Machine Learning and Scientific Computing, I suggest you learn the following materials while you are reading Machine Learning or Deep Learning books. You don't have to master these materials, but a basic understanding is important. It's hard to have a meaningful conversation with someone who has no idea about matrices or single-variable calculus.

Theory of Computation, Learning Theory, Neuroscience, etc.

Fundamentals of Deep Learning

Tutorials, Practical Guides, and Useful Software

- Theano by LISA Lab, University of Montreal (a minimal usage sketch follows this list).
- PyLearn2 by LISA Lab, University of Montreal.
- Caffe by Berkeley Vision and Learning Center (BVLC) and community contributor Yangqing Jia.
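As a quick taste of Theano, here is a minimal sketch that declares symbolic variables, builds an expression, and compiles it into a callable function; with a GPU-enabled configuration, the same compiled graph can run on the GPU.

```python
import theano
import theano.tensor as T

# Declare two symbolic double-precision scalars.
a = T.dscalar('a')
b = T.dscalar('b')

# Build a symbolic expression and compile it into a callable function.
f = theano.function([a, b], a ** 2 + b ** 2)

print(f(3.0, 4.0))  # 25.0
```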
Literature in Deep Learning and Feature Learning

Deep Learning is a fast-moving community, so the line between "Recent Advances" and "Literature that matters" is somewhat blurred. Here I collect articles that either introduce fundamental algorithms and techniques or are highly cited by the community.

Recent Must-Read Advances in Deep Learning

Most of the papers here were published in 2014 or later. Survey and review papers are not included.

- Maxout Networks by Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, Yoshua Bengio (a sketch of the maxout activation follows).
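As a refresher on the maxout unit proposed in that paper, here is a small NumPy sketch of the activation: each output unit takes the maximum over k affine pieces of the input. The shapes and random weights below are illustrative placeholders, not code from the paper.

```python
import numpy as np

def maxout(x, W, b):
    """Maxout activation: h_i = max_j (x @ W[:, i, j] + b[i, j]) over k pieces.

    x: (batch, d_in), W: (d_in, d_out, k), b: (d_out, k) -> (batch, d_out)
    """
    # Affine transforms for all k pieces at once: shape (batch, d_out, k)
    z = np.einsum('bi,iok->bok', x, W) + b
    # Each output unit keeps the maximum over its k linear pieces
    return z.max(axis=-1)

# Tiny usage example with random placeholder weights
rng = np.random.RandomState(0)
x = rng.randn(4, 10)            # batch of 4, 10 input features
W = rng.randn(10, 5, 3) * 0.1   # 5 output units, k = 3 pieces each
b = np.zeros((5, 3))
print(maxout(x, W, b).shape)    # (4, 5)
```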
Podcast, Talks, etc.

Amazon Web Service Public AMI for Deep Learning

I configured two types of GPU instances available on AWS and installed the software needed for Deep Learning practice. The first one is DGYDLGPUv4. It provides an 8-core CPU, 15GB RAM, a 500GB SSD, and 1 NVIDIA Grid K520 GPU; you can use it to learn Deep Learning or to run normal-sized experiments. If you need even more computing resources, you can choose DGYDLGPUXv1. This newly released GPU instance offers a 32-core CPU, 60GB RAM, a 500GB SSD, and 4 NVIDIA Grid K520 GPUs. The NVIDIA driver, CUDA Toolkit 7.0, cuDNN v2, Anaconda, and Theano are preinstalled. Currently these AMIs are only available in Asia Pacific (Singapore); you can copy them to a region closer to you. If you are doing analysis or experiments, I suggest requesting spot instances instead of on-demand instances, which will save you a lot of money (a scripted example follows the AMI list below).

- DGYDLGPUv4 (ami-ba516ee8) [Based on g2.2xlarge]
- DGYDLGPUXv1 (ami-52516e00) [Based on g2.8xlarge]
Currently, my build of Caffe on these instances is broken. You can use the AMI provided by the Caffe community instead; you can get more details here. So far that instance is only available in US East (N. Virginia).

- Caffe/CuDNN built 2015-05-04 (ami-763a331e) [For both g2.2xlarge and g2.8xlarge]
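If you want to script the spot-instance request, here is a minimal boto3 sketch. The bid price and key pair name are placeholders; the AMI ID and instance type come from the list above, and Asia Pacific (Singapore) corresponds to the ap-southeast-1 region.

```python
import boto3

# Request a g2.2xlarge spot instance from the DGYDLGPUv4 AMI (ami-ba516ee8)
# in Asia Pacific (Singapore). Adjust the bid price and key pair to your own.
ec2 = boto3.client('ec2', region_name='ap-southeast-1')

response = ec2.request_spot_instances(
    SpotPrice='0.30',            # maximum hourly bid in USD (placeholder)
    InstanceCount=1,
    LaunchSpecification={
        'ImageId': 'ami-ba516ee8',     # DGYDLGPUv4
        'InstanceType': 'g2.2xlarge',
        'KeyName': 'my-key-pair',      # placeholder key pair name
    },
)
print(response['SpotInstanceRequests'][0]['SpotInstanceRequestId'])
```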
Practical Deep Neural Networks - GPU computing perspective

The following entries are materials I use in the workshop.

Slides

Practical tutorials

Codes

- Telauges
  - A new deep learning library for education.
  - Right now it implements an MLP hidden layer, a softmax layer, an auto-encoder, and a ConvNet layer (see the generic sketch after this list).
  - The coding style is in the middle of being reworked; I suggest you explore it, but do not use it seriously.
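Since Telauges itself is still in flux, here is a generic Theano sketch of the two simplest layer types listed above, an MLP hidden layer feeding a softmax layer. This illustrates what such layers compute; it is not Telauges' actual API.

```python
import numpy as np
import theano
import theano.tensor as T

rng = np.random.RandomState(0)
x = T.matrix('x')  # input minibatch, shape (batch, n_in)
n_in, n_hidden, n_out = 784, 256, 10

# MLP hidden layer: tanh(x W_h + b_h)
W_h = theano.shared(rng.uniform(-0.1, 0.1, (n_in, n_hidden)).astype(theano.config.floatX))
b_h = theano.shared(np.zeros(n_hidden, dtype=theano.config.floatX))
hidden = T.tanh(T.dot(x, W_h) + b_h)

# Softmax layer: p(y | x) = softmax(hidden W_s + b_s)
W_s = theano.shared(np.zeros((n_hidden, n_out), dtype=theano.config.floatX))
b_s = theano.shared(np.zeros(n_out, dtype=theano.config.floatX))
p_y = T.nnet.softmax(T.dot(hidden, W_s) + b_s)

# Compile a prediction function that returns the most likely class per example
predict = theano.function([x], T.argmax(p_y, axis=1))
```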