[01]01 – History and resources.zh_en
[02]01L – Gradient descent and the backpropagation algorithm.zh_en
[03]02 – Neural nets: rotation and squashing.zh_en
[04]02L – Modules and architectures.zh_en
[05]03 – Tools, classification with neural nets, PyTorch implementation.zh_en
[06]03L – Parameter sharing: recurrent and convolutional nets.zh_en
[07]04L – ConvNet in practice.zh_en
[08]04.1 – Natural signals properties and the convolution.zh_en
[09]04.2 – Recurrent neural networks, vanilla and gated (LSTM).zh_en
[10]05L – Joint embedding method and latent variable energy based models (LV-EBMs).zh_en
[11]05.1 – Latent Variable Energy Based Models (LV-EBMs), inference.zh_en
[12]05.2 – But what are these EBMs used for?.zh_en
[13]06L – Latent variable EBMs for structured prediction.zh_en
[14]06 – Latent Variable Energy Based Models (LV-EBMs), training.zh_en
[15]07L – PCA, AE, K-means, Gaussian mixture model, sparse coding, and intuitive VAE
[16]07 – Unsupervised learning: autoencoding the targets.zh_en
[17]08L – Self-supervised learning and variational inference.zh_en
[18]08 – From LV-EBM to target prop to autoencoder.zh_en
[19]09L – Differentiable associative memories, attention, and transformers.zh_en
[20]09 – AE, DAE, and VAE with PyTorch; generative adversarial networks (GAN) and code
[21]10L – Self-supervised learning in computer vision.zh_en
[22]10 – Self / cross, hard / soft attention and the Transformer.zh_en
[23]11L – Speech recognition and Graph Transformer Networks.zh_en
[24]11 – Graph Convolutional Networks (GCNs).zh_en
[25]12L – Low resource machine translation.zh_en
[26]12 – Planning and control.zh_en
[27]13L – Optimisation for Deep Learning.zh_en
[28]13 – The Truck Backer-Upper.zh_en
[29]14L – Lagrangian backpropagation, final project winners, and Q&A session
[30]14 – Prediction and Planning Under Uncertainty.zh_en
[31]AI2S Xmas Seminar – Energy-Based Self-Supervised Learning
[32]09P – Contrastive joint embedding methods (JEMs) for self-supervised learning (SSL)
[33]10P – Non-contrastive joint embedding methods (JEMs) for self-supervised learning (SSL)