Kadenze: Creative Applications of Deep Learning with TensorFlow III
This course extends our existing background in Deep Learning to state-of-the-art techniques in audio, image, and text modeling. We’ll see how dilated convolutions can efficiently model long-term temporal dependencies in a model called WaveNet. We’ll also see how to inspect the representations learned by deep networks using a deep generator network, one of the clearest windows yet into what these networks actually learn. We’ll then switch gears to one of the most exciting directions in Deep Learning thus far: Reinforcement Learning. We’ll take a brief tour of this fascinating topic and explore toolkits released by OpenAI, DeepMind, and Microsoft. Finally, we’re teaming up with Google Brain’s Magenta Lab for our last session on Music and Art Generation. We’ll explore Magenta’s libraries, which use RNNs and Reinforcement Learning to create generative and improvised music.
This session covers new work in generative modeling of images, sound, and text using masked and dilated convolution operations. We describe what these are and how they can be used to model various media types very efficiently.
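To make the idea concrete, here is a minimal sketch (not the course's own code) of a 1-D dilated convolution: the kernel taps are spaced `dilation` samples apart, so stacking layers with growing dilation rates enlarges the receptive field exponentially while the parameter count stays fixed, which is the core trick behind WaveNet. The function name and toy inputs are illustrative assumptions.

```python
def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution whose kernel taps are spaced
    `dilation` samples apart. The receptive field grows with
    the dilation rate while the number of weights stays fixed."""
    span = (len(kernel) - 1) * dilation  # receptive field minus one
    return [
        sum(x[i + j * dilation] * w for j, w in enumerate(kernel))
        for i in range(len(x) - span)
    ]

signal = list(range(8))  # toy input: 0, 1, ..., 7
taps = [1, 1]            # simple two-tap kernel
print(dilated_conv1d(signal, taps, dilation=1))  # [1, 3, 5, 7, 9, 11, 13]
print(dilated_conv1d(signal, taps, dilation=2))  # [2, 4, 6, 8, 10, 12]
```

With `dilation=1` each output sums adjacent samples; with `dilation=2` it sums samples two apart, covering a wider span of the signal with the same two weights.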
This session covers an advanced technique for synthesizing preferred inputs of deep networks, reminiscent of Deep Dream. We show how it can be used to understand the representations in deep networks much more clearly.
This session introduces Neural Doodle, one of the most advanced techniques for texture synthesis and artistic stylization.