From 20 to 24 April 2020, NETWORKS organises its tenth Training Week for PhD students.
Lecturer: Johannes Schmidt-Hieber
Lecture 1: Survey on neural network structures and deep learning
There are many different types of neural networks, differing in complexity and in the types of data they can process. This lecture provides an overview and surveys the algorithms used to fit deep networks to data. We discuss different ideas that underlie the existing approaches to a mathematical theory of deep networks.
Lecture 2: Initialisation of neural networks
To train a neural network, a (random) starting point has to be chosen, and the success of deep learning heavily depends on a proper initialisation scheme. Standard approaches initialise a network by drawing the parameters independently from a distribution. We discuss some known properties of such randomly initialised networks and describe the edge-of-chaos phenomenon.
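The effect of the initialisation scale can be illustrated with a small numpy sketch (the depth, width, and He-style variance 2/fan-in below are illustrative choices, not taken from the lecture):

```python
import numpy as np

def forward(depth, width, scale, seed=0):
    """Push a random input through a deep ReLU network whose weights
    are drawn i.i.d. from N(0, scale/width); return the output norm."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(width)
    for _ in range(depth):
        W = rng.normal(0.0, np.sqrt(scale / width), size=(width, width))
        x = np.maximum(W @ x, 0.0)  # ReLU activation
    return np.linalg.norm(x)

# He-style initialisation (scale=2) keeps activation magnitudes stable,
# while scale=1 makes them shrink exponentially with depth.
print(forward(depth=50, width=256, scale=2.0))
print(forward(depth=50, width=256, scale=1.0))
```

With scale 1 the signal essentially dies out after a few dozen layers; scale 2 compensates for the variance removed by the ReLU, which is roughly the critical regime studied in the edge-of-chaos literature.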
Lecture 3: Theory for shallow networks
We start with the universal approximation theorem and discuss several proof strategies that provide insight into which functions can be easily approximated by shallow networks. Based on this, a survey of approximation rates for shallow networks is given, and it is shown how these lead to estimation rates. In the lecture, we also discuss methods that fit shallow networks to data.
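A toy version of fitting a shallow network, in the spirit of the universal approximation theorem (a random-features sketch with an illustrative target function; the fitting methods treated in the lecture may differ):

```python
import numpy as np

def f(x):
    """Illustrative smooth target on [0, 1]."""
    return np.sin(2 * np.pi * x)

x = np.linspace(0.0, 1.0, 200)

def shallow_fit(n_hidden, seed=1):
    """Fit f with a one-hidden-layer ReLU network: random inner weights
    and biases, output weights by least squares. Returns the maximal
    error on the grid."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=n_hidden)           # inner weights
    b = rng.uniform(-1, 1, size=n_hidden)   # inner biases
    H = np.maximum(np.outer(x, w) + b, 0.0) # hidden-layer activations
    c, *_ = np.linalg.lstsq(H, f(x), rcond=None)
    return np.max(np.abs(H @ c - f(x)))

# The error shrinks as the width grows.
for m in (5, 20, 200):
    print(m, shallow_fit(m))
```

Fixing the inner weights at random and solving only for the output layer keeps the fit a linear least-squares problem, which already illustrates how approximation power grows with the number of hidden units.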
Lecture 4: Statistical theory for deep networks
Why are deep networks better than shallow networks? We provide a survey of the existing ideas in the literature. In particular, we study localisation of deep networks and specific functions that can be easily approximated by deep networks. We outline the theory underlying the recent bounds on the estimation risk of deep ReLU networks. In the lecture, we discuss specific properties of the ReLU activation function. Based on this, we show how risk bounds can be obtained for sparsely connected ReLU networks. At the end, we describe important future steps needed for the further development of the mathematical theory of deep learning.
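One classical example of a function that is cheap for deep networks but expensive for shallow ones is the iterated hat (sawtooth) function, in the spirit of Telgarsky's construction: k compositions of a one-layer ReLU hat, i.e. depth of order k, produce 2^k linear pieces, which a shallow network would need exponentially many units to match. A small numpy check (an illustration, not lecture material):

```python
import numpy as np

def hat(x):
    """Tent map as a one-layer ReLU computation:
    g(x) = 2*relu(x) - 4*relu(x - 1/2)."""
    return 2 * np.maximum(x, 0.0) - 4 * np.maximum(x - 0.5, 0.0)

def pieces_after(k, n=10001):
    """Compose the hat function k times (a depth-k ReLU network)
    and count the linear pieces of the result on [0, 1]."""
    y = np.linspace(0.0, 1.0, n)
    for _ in range(k):
        y = hat(y)
    slopes = np.sign(np.diff(y))
    # a sign change of the slope marks a breakpoint between linear pieces
    return 1 + int(np.sum(np.diff(slopes) != 0))

for k in (1, 2, 3, 4):
    print(k, pieces_after(k))  # 2, 4, 8, 16: exponential in depth
```

Each extra layer doubles the number of oscillations, so the piece count grows exponentially in the depth while the number of parameters grows only linearly.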
Lecturer: Leen Stougie
This mini-course presents various aspects in which optimization and uncertainty interact, mostly in relation to computational complexity. A prominent topic will be stochastic optimization, also called stochastic programming, in which uncertainty is modelled as distributions over problem parameters and the objective becomes optimizing the expected value. As a very special case, which can be regarded as the most explicit form of optimization under uncertainty, we will see what happens to the complexity of problems when uncertainty is expressed by a set of scenarios that may or may not occur.
This also leads to interesting robust optimization problems. At the other extreme of optimization under uncertainty we have so-called on-line optimization, in which the data of the problem are not known beforehand but become known over time, during the decision process. Typically, decisions in such models are irrevocable: in a real-time setting, decisions made in the past cannot be undone (but waiting is an option). Along the way, some randomized algorithms will be presented, yet another way in which uncertainty and optimization interact.
Lecture 1
Basic stochastic programming problems: Complexity issues; Optimization under scenarios; The deterministic equivalent model; Examples of scheduling under scenarios.
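The deterministic equivalent idea can be sketched on a toy newsvendor-style problem (all numbers made up): the scenario distribution is folded into a single expected-cost objective, which is then minimised as an ordinary deterministic problem.

```python
# Scenarios: demand d occurs with probability p. First-stage decision:
# order quantity q. Second stage (recourse): shortage or holding costs.
# The deterministic equivalent minimises the expected total cost over
# all scenarios at once.
scenarios = [(1, 0.2), (3, 0.5), (6, 0.3)]  # (demand, probability), illustrative
c_order, c_short, c_hold = 2.0, 5.0, 1.0     # illustrative cost rates

def expected_cost(q):
    """Deterministic equivalent objective: first-stage cost plus the
    expectation of the second-stage (recourse) cost."""
    return c_order * q + sum(
        p * (c_short * max(d - q, 0) + c_hold * max(q - d, 0))
        for d, p in scenarios
    )

best_q = min(range(0, 7), key=expected_cost)
print(best_q, expected_cost(best_q))  # order 3 units, expected cost 10.9
```

With finitely many scenarios the deterministic equivalent is an ordinary (here one-dimensional) optimization problem; its size, however, grows with the number of scenarios, which is where the complexity issues come in.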
Lecture 2 by Ward Romeijnders
2-stage stochastic programming with simple recourse; Convex approximations of 2-stage simple recourse; Approximation quality of convex approximate solutions.
Lecture 3
Counting complexity; Approximation algorithms; A simple efficient randomized algorithm for stochastic programming that gives small errors with high probability.
Lecture 4
On-line optimization; Competitive analysis; Lower bounds due to a lack of information; Examples of on-line scheduling and on-line routing.
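Competitive analysis can be illustrated with the classic ski-rental problem (an illustrative example; the lecture's own examples concern scheduling and routing): the break-even strategy achieves a competitive ratio below 2 against the offline optimum.

```python
def online_cost(days, B):
    """Break-even ski rental: rent (at 1 per day) for the first B-1
    days, then buy (price B) on day B if the season continues."""
    return days if days < B else (B - 1) + B

def opt_cost(days, B):
    """The offline optimum knows the season length in advance."""
    return min(days, B)

B = 10
ratio = max(online_cost(d, B) / opt_cost(d, B) for d in range(1, 100))
print(ratio)  # worst case at d = B: ratio 2 - 1/B
```

The worst case occurs when the season ends just after the purchase: the online player has paid almost twice the offline optimum, and no deterministic strategy can do better than ratio 2 in the limit, a lower bound caused purely by the lack of information.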
Conferentiecentrum Kaap Doorn, Postweg 9, 3941 KA Doorn