Instructor
RODOLA' EMANUELE

(syllabus)
- Fundamentals of ML review
  - Linear regression
  - Classification
  - Energy minimization
  - Maximum likelihood estimation
  - Optimization
    - Quasi-Newton methods
    - Stochastic gradient descent
    - Automatic differentiation
  - Supervised, unsupervised and self-supervised learning
  - Representation, geometry, stability, variability
  - The curse of dimensionality in ML
- Neural networks
  - Perceptron, multi-layer perceptron
  - Backpropagation
  - Properties of learnt representations
- Training neural networks
  - Regularization
  - Activation functions
  - Weight initialization
  - Batch normalization
  - Hyperparameter optimization
  - Parameter updates
  - Dropout
- Convolutional neural networks
  - Shift-invariance, covariance and contravariance
  - Weight sharing
  - Common architectures
  - Residual networks
- Theory of deep learning
  - Convergence
  - Gradient flow
  - Open problems
  - Visualization, understanding and interpretability
- Frameworks and libraries (language: Python)
  - Overview of DL frameworks (Keras, TensorFlow)
  - PyTorch
- Transfer learning and domain adaptation
- Recurrent networks, long short-term memory
- Generative models
  - Autoencoders
  - Variational autoencoders
  - Generative adversarial networks
- Geometric deep learning on non-Euclidean domains
  - Graphs
  - Riemannian manifolds
  - Point clouds
- Adversarial and universal attacks
- Applications
  - Computer vision and graphics
  - Network and graph analysis (fake news detection, the Netflix problem)
  - Audio synthesis
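To give a flavor of two of the topics named above (linear regression and stochastic gradient descent), here is a minimal, hedged sketch in plain Python; it is illustrative only and not part of the official course material. The synthetic data and the true parameters (w = 3.0, b = 0.5) are assumptions chosen for the example.

```python
import random

# Illustrative sketch (not course material): fit a linear model
# y = w*x + b by stochastic gradient descent on the squared error.

random.seed(0)
data = []
for _ in range(200):
    x = random.uniform(-1, 1)
    noise = random.gauss(0, 0.05)
    data.append((x, 3.0 * x + 0.5 + noise))  # assumed true w = 3.0, b = 0.5

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(100):
    random.shuffle(data)           # visit samples in random order
    for x, y in data:              # one sample per update: "stochastic"
        err = (w * x + b) - y      # derivative of 0.5*err**2 w.r.t. the prediction
        w -= lr * err * x          # gradient step for the weight
        b -= lr * err              # gradient step for the bias

print(w, b)  # the estimates settle near the true parameters
```

Updating on one sample at a time, rather than on the full dataset, is exactly what distinguishes stochastic gradient descent from batch gradient descent.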
Given the highly dynamic nature of the area covered by this advanced course, there is no single reference textbook. Sources, in the form of scientific articles and book chapters, will be indicated and provided throughout the course.
As a general reference, the following books may prove useful:
Ian Goodfellow, Yoshua Bengio, Aaron Courville. Deep Learning. MIT Press, 2016.
Vishnu Subramanian. Deep Learning with PyTorch. Packt, 2018.