Networks

Training week 14

From 24 to 28 October 2022, NETWORKS organizes its 14th Training Week.


Minicourse: Statistical theory for neural networks

Lecturer: Johannes Schmidt-Hieber (University of Twente)


Lecture 1) Survey on neural network structures and deep learning

There are many different types of neural networks, differing in complexity and in the data types they can process. This lecture provides an overview and surveys the algorithms used to fit deep networks to data. We discuss different ideas that underlie the existing approaches for a mathematical theory of deep networks. Special focus will be on the initialisation of neural networks. To train a neural network, a (random) starting point has to be chosen, and the success of deep learning heavily depends on a proper initialisation scheme. Standard approaches initialise a network by drawing the parameters independently from a distribution. We discuss some known properties of such randomly initialised networks and describe the edge-of-chaos phenomenon.
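As an illustration of such a scheme (a minimal sketch with assumed details, in particular the He-style fan-in scaling, and not necessarily the initialisation discussed in the lecture), the following Python snippet draws all weights of a fully connected ReLU network independently from a zero-mean Gaussian:

# illustrative sketch, not course material: random He-style initialisation
import numpy as np

rng = np.random.default_rng(0)

def init_network(layer_widths):
    # draw each weight independently; variance 2/fan_in keeps the size of the
    # ReLU activations roughly stable from layer to layer
    params = []
    for fan_in, fan_out in zip(layer_widths[:-1], layer_widths[1:]):
        W = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))
        b = np.zeros(fan_out)
        params.append((W, b))
    return params

def forward(params, x):
    # propagate an input through the randomly initialised network
    for W, b in params[:-1]:
        x = np.maximum(W @ x + b, 0.0)   # ReLU hidden layers
    W, b = params[-1]
    return W @ x + b                     # linear output layer

params = init_network([3, 64, 64, 1])
print(forward(params, np.ones(3)))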


Lecture 2) Theory for shallow networks

We start with the universal approximation theorem and discuss several proof strategies that provide some insights into functions that can be easily approximated by shallow networks. Based on this, a survey on approximation rates for shallow networks is given. It is shown how this leads to estimation rates. In the lecture, we also discuss methods that fit shallow networks to data.
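As a concrete illustration (an assumed toy example, not the fitting methods covered in the lecture), the following Python sketch fits a shallow, one-hidden-layer ReLU network to noisy samples of a smooth function by gradient descent on the squared loss:

# illustrative sketch, not course material: fitting a shallow ReLU network
import numpy as np

rng = np.random.default_rng(1)
n, width, lr, steps = 200, 50, 0.05, 5000

# noisy samples of the target function
x = rng.uniform(-1.0, 1.0, size=n)
y = np.sin(3.0 * x) + 0.1 * rng.normal(size=n)

# parameters of the network f(x) = sum_j c_j * relu(w_j * x + b_j)
w = rng.normal(size=width)
b = rng.normal(size=width)
c = rng.normal(size=width) / np.sqrt(width)

for _ in range(steps):
    pre = np.outer(x, w) + b                    # pre-activations, shape (n, width)
    h = np.maximum(pre, 0.0)                    # ReLU hidden layer
    resid = h @ c - y                           # residuals of the squared loss
    grad_h = np.outer(resid, c) * (pre > 0)     # backpropagate through the ReLU
    c -= lr * (h.T @ resid) / n
    w -= lr * (grad_h * x[:, None]).sum(axis=0) / n
    b -= lr * grad_h.sum(axis=0) / n

fit = np.maximum(np.outer(x, w) + b, 0.0) @ c
print("mean squared error after training:", np.mean((fit - y) ** 2))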


Lecture 3) Statistical theory for deep networks

Why are deep networks better than shallow networks? We provide a survey of the existing ideas in the literature. In particular, we study localisation of deep networks and specific functions that can be easily approximated by deep networks. We outline the theory underlying the recent bounds on the estimation risk of deep ReLU networks. In the lecture, we discuss specific properties of the ReLU activation function. Based on this, we show how risk bounds can be obtained for sparsely connected ReLU networks. At the end, we describe the important steps needed for the future development of the mathematical theory of deep learning.
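One classical illustration of the depth advantage (an assumed example, not necessarily among the functions treated in the lecture) is the sawtooth construction: a hat function expressible with two ReLU units, composed with itself k times, produces a sawtooth with 2^(k-1) teeth, whereas a shallow ReLU network needs on the order of 2^k units to represent the same function exactly. A short Python sketch:

# illustrative sketch, not course material: depth helps for the sawtooth function
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # hat function g(x) = 2x on [0, 1/2] and 2 - 2x on [1/2, 1],
    # written with two ReLU units: g(x) = 2*relu(x) - 4*relu(x - 1/2)
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_sawtooth(x, depth):
    # each composition adds one small hidden layer; depth k gives 2**(k-1) teeth
    for _ in range(depth):
        x = hat(x)
    return x

xs = np.linspace(0.0, 1.0, 9)
print(deep_sawtooth(xs, 3))   # alternates 0, 1, 0, 1, ... on this grid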


Lecture 4) Tutorial

Participants should check prior to the tutorial whether they can run the provided source code. We recommend installing Anaconda and using Spyder. Additional packages then have to be installed via Anaconda. Most notably, Keras/TensorFlow does not work with all Python versions, and it can be a bit tricky to install. During the tutorial, we will also ask the participants to work on their own with this program, guided by some questions that we prepare.
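As an installation check (an assumed smoke test, not the provided course code), participants can try running a script along the following lines from within Spyder; if it trains without errors, Keras/TensorFlow is set up correctly:

# illustrative smoke test, not the tutorial code: train a tiny Keras model
import numpy as np
import tensorflow as tf
from tensorflow import keras

print("TensorFlow version:", tf.__version__)

# tiny regression problem: learn y = 2x from random data
x = np.random.rand(256, 1).astype("float32")
y = 2.0 * x

model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=5, verbose=0)
print("loss after 5 epochs:", model.evaluate(x, y, verbose=0))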


Minicourse: Partition functions: complex zeros and efficient algorithms

Lecturer: Guus Regts (University of Amsterdam)


Partition functions originate in statistical physics, but also arise in several other areas. For example, the partition function of the hard core model is known as the independence polynomial in graph theory.

In this lecture series I will discuss how the presence or absence of complex zeros of partition functions is related to the computational complexity of approximately evaluating these partition functions.
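To make the hard-core example concrete (an assumed illustration, not part of the lecture material), the following Python snippet evaluates its partition function, i.e. the independence polynomial Z_G(lam) = sum over independent sets I of lam^|I|, by brute force on a small graph; the activity lam may also be taken complex, which is where the zeros discussed in the lectures live:

# illustrative sketch, not lecture material: brute-force hard-core partition function
from itertools import combinations

def hard_core_partition_function(n, edges, lam):
    # sum lam**|I| over all independent sets I of the graph on vertices 0..n-1
    # (exponential in n, so only feasible for small graphs)
    edge_set = {frozenset(e) for e in edges}
    total = 0.0
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            independent = all(frozenset(pair) not in edge_set
                              for pair in combinations(subset, 2))
            if independent:
                total += lam ** k
    return total

# 4-cycle: independent sets are the empty set, 4 singletons and the 2 diagonal pairs,
# so Z(lam) = 1 + 4*lam + 2*lam**2, giving 7 at lam = 1
print(hard_core_partition_function(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 1.0))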


Location: Conferentiecentrum De Schildkamp, Leerdamseweg 44, 4147 BM Asperen


Programme

The programme is available as a PDF.