
Luisa Andreis
Self-sustained periodic behavior in interacting systems
Living systems are characterized by the emergence of recurrent self-organized collective behaviors in large communities, such as polarization and synchronization. Attempts to model these nonlinear macroscopic behaviors in complex systems often lead to the choice of mean-field models, since these give analytically tractable equations that may explain features which are not displayed at the microscopic level. In particular, toy models in this class have recently been used to understand how macroscopic rhythmic behavior may appear in systems where single units have no tendency to behave periodically. However, the question of how to describe in full generality the class of interactions able to give rise to oscillating behavior at the macroscopic level is still open. In this framework, we discuss some interaction mechanisms that have been proven to generate rhythmic behavior in the limit, and we focus on some particular examples that also show the coexistence, at the macroscopic level, of several stable periodic orbits. This is joint work with Daniele Tovazzi.

Stein Andreas Bethuelsen
Loss of memory and mixing properties for the contact process
The contact process, or SIS (Susceptible-Infected-Susceptible) model, is a classical model for the spread of an infection in a population. In this talk we focus on this process and its evolution within a partial (and finite) subspace of the population. In particular, we will discuss some recent results on the loss of memory property for such partially observed processes which hold under minimal assumptions on the network structure as soon as the infection rate is large enough.
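As a concrete illustration of the dynamics described above, the contact process on a finite graph can be simulated exactly with a Gillespie-type scheme. The following sketch is not from the talk; the graph, rate values and function names are illustrative assumptions.

```python
import random

def simulate_sis(neighbors, infected, rate, t_max, seed=0):
    """Exact (Gillespie) simulation of the contact process / SIS model.

    neighbors: dict mapping each site to a list of neighboring sites.
    infected:  set of initially infected sites.
    rate:      infection rate lambda; the recovery rate is normalized to 1.
    Returns the set of infected sites at time t_max.
    """
    rng = random.Random(seed)
    infected = set(infected)
    t = 0.0
    while infected and t < t_max:
        # Each infected site recovers at rate 1 and tries to infect each
        # neighbor at rate `rate` (attempts on infected sites change nothing).
        events = []
        for x in infected:
            events.append((1.0, "recover", x))
            for y in neighbors[x]:
                events.append((rate, "infect", y))
        total = sum(w for w, _, _ in events)
        t += rng.expovariate(total)   # waiting time to the next event
        if t >= t_max:
            break
        u = rng.uniform(0.0, total)   # choose one event proportional to rate
        for w, kind, site in events:
            u -= w
            if u <= 0.0:
                if kind == "recover":
                    infected.discard(site)
                else:
                    infected.add(site)
                break
    return infected

# Toy example: contact process on a cycle of 20 sites, one initial infection.
n = 20
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
final = simulate_sis(nbrs, {0}, rate=2.0, t_max=5.0)
```

Restricting attention to a finite window of sites, as in the talk, amounts to recording only the intersection of `infected` with a fixed subset at the observation times.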

Asja Fischer
Towards biologically plausible deep learning
In recent years, (deep) neural networks have become the most prominent models for supervised machine learning tasks. They are usually trained by stochastic gradient descent, with backpropagation used for the gradient calculation. While this leads to efficient training, it is not very plausible from a biological perspective.
We show that Langevin Markov chain Monte Carlo inference in an energy-based model with latent variables has the property that the early steps of inference, starting from a stationary point, correspond to propagating error gradients into internal layers, similar to backpropagation. Backpropagated error gradients correspond to temporal derivatives with respect to the activation of hidden units. These lead to a weight update proportional to the product of the presynaptic firing rate and the temporal rate of change of the postsynaptic firing rate. Simulations and a theoretical argument suggest that this rate-based update rule is consistent with those associated with spike-timing-dependent plasticity. These ideas could be an element of a theory for explaining how brains perform credit assignment in deep hierarchies as efficiently as backpropagation does, with neural computation corresponding to both approximate inference in continuous-valued latent variables and error backpropagation, at the same time.
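The rate-based update rule described above (weight change proportional to the presynaptic rate times the temporal derivative of the postsynaptic rate) can be sketched numerically. This is a minimal illustration with made-up array shapes and parameter names, not the authors' implementation.

```python
import numpy as np

def rate_based_update(pre_rates, post_rates, dt, eta):
    """Accumulated weight change proportional to the presynaptic firing
    rate times the temporal derivative of the postsynaptic firing rate.

    pre_rates:  (T, n_pre) array of presynaptic rates over T time steps.
    post_rates: (T, n_post) array of postsynaptic rates.
    Returns an (n_pre, n_post) array of accumulated weight changes.
    """
    d_post = np.diff(post_rates, axis=0) / dt  # finite-difference derivative
    pre = pre_rates[:-1]                       # align with the derivative
    return eta * dt * (pre.T @ d_post)         # sum the updates over time

rng = np.random.default_rng(0)
pre = rng.random((100, 3))    # 3 presynaptic neurons, 100 time steps
post = rng.random((100, 2))   # 2 postsynaptic neurons
dw = rate_based_update(pre, post, dt=1e-3, eta=0.1)
```

The update is linear in the learning rate `eta` and sums the per-step outer products over time, mirroring the product structure stated in the abstract.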

Mareike Fischer
Darwin's inheritance – evolutionary research then and now
In my talk, I will briefly sketch the development of evolutionary research from Darwin until today. Basically, there are two main aims in phylogenetic research: the reconstruction of evolutionary relationships between different species, which are often represented by a tree, as well as the reconstruction of ancestral sequences (as normally only data for present-day species are given). Both are based on the idea that a common ancestor evolved into the multiple species of today, which assumes a combination of various random processes, like speciation to form the tree and mutations along the tree. Biomathematicians try to find ways to reconstruct precisely these processes.
While Darwin was still strongly dependent on morphological data for his evolutionary research, the abundance of DNA, RNA and protein data requires better methods in order to infer the correct tree. Several methods and criteria can be used to solve this problem, and I will introduce three basic concepts in my talk and also discuss their typical drawbacks, like statistical inconsistency. I will also present some modern-day challenges, like how to extend the traditional tree-based methods to non-tree-like data (i.e. phylogenetic networks that can represent hybridisation or horizontal gene transfer).

Nina Gantert
Ergodicity for some dynamics of DNA sequences
We discuss models for the dynamics of DNA sequences and show how they
lead to ergodicity questions for interacting particle systems.
We present several open questions in this area and a few answers.
The talk is based on joint work (in progress) with Mikael Falconnet and
Ellen Saada.

Matthias Hammer
From the symbiotic branching model to annihilating Brownian motions
The symbiotic branching model is a toy model for the evolution of a spatially structured population with two interacting types. Particles can migrate between sites and reproduce locally at a rate proportional to the product of the number of particles of both types present at a site. It includes as special cases the well-known stepping stone model from mathematical population genetics and the mutually catalytic branching model of Dawson and Perkins.
In the talk, we will first carefully introduce the model and explain the connection to the classical stepping stone model. Then we will discuss the problem of the growth of the interface, defined as the region where particles of both types are present. Specifically, one can show under suitable assumptions that the system converges under diffusive rescaling (corresponding to an infinite branching rate), and the limit corresponds to spatially separated populations. In particular, for the stepping stone model the limiting interface dynamics can be described explicitly in terms of annihilating Brownian motions with drift. If time permits, we will also discuss some open problems.
Based on joint work with Jochen Blath (TU Berlin) and Marcel Ortgiese and Florian Völlering (both University of Bath).

Andreas Haupt
Stability of Graph Kernels
Graph kernels have been successfully applied in several classification tasks. They capture richer structure in graphs than measures such as modularity, clustering or degree distribution, which are more frequently used in neuroscience applications. However, stability bounds for graph kernels have yet to be established.
Many graph kernels can be written as linear combinations of subgraph densities. This makes their study amenable to the theory of graphons and has led us to a first stability estimate assuming graphon generative models.
In this talk, you will learn what graph kernels are; how to express many popular graph kernels as homomorphism densities; and how the counting lemma can be used in proving a stability estimate.
This is a joint project with Ngoc Mai Tran.
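The homomorphism densities mentioned above can be made concrete with a brute-force count. The function below is an illustrative sketch (exponential in the size of the motif F, so only usable for tiny motifs), not the project's code.

```python
from itertools import product

def homomorphism_density(F_edges, G_adj):
    """t(F, G): the fraction of all maps V(F) -> V(G) that send every
    edge of F to an edge of G.  F is given by an edge list on vertices
    0..k-1, G by a 0/1 adjacency matrix."""
    k = 1 + max(v for e in F_edges for v in e)   # number of vertices of F
    n = len(G_adj)
    hits = sum(
        all(G_adj[phi[u]][phi[v]] for u, v in F_edges)
        for phi in product(range(n), repeat=k)   # all n**k vertex maps
    )
    return hits / n ** k

# Triangle density of the complete graph K4: 4*3*2 = 24 homomorphisms
# out of 4**3 = 64 maps, i.e. t = 0.375.
triangle = [(0, 1), (1, 2), (2, 0)]
K4 = [[int(i != j) for j in range(4)] for i in range(4)]
t_triangle = homomorphism_density(triangle, K4)  # → 0.375
```

A kernel written as a linear combination of such densities then inherits continuity properties from the counting lemma, which is the route to the stability estimate mentioned in the abstract.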

Benedikt Jahnel
Attractor properties for irreversible and reversible interacting particle systems
In the talk I will first present some statistical mechanics models in discrete and continuous time, on the lattice and in mean field, that exhibit rotational behavior and can therefore serve as models for neural oscillation. In order to understand the synchronization properties of such systems, I will then focus on a result about the attractor properties of a more general class of reversible and irreversible interacting particle systems on the lattice.

Sándor Kolumbán
Can one hear the shape of a brain?
Discussions about the critical brain hypothesis have been with us for more than half a century, yet a consensus seems far off. One of the reasons for this is the difficulty of obtaining direct empirical evidence of criticality.
Critical phenomena emerge in stochastic models where a great number of units exhibit the right mix of dynamic behavior and interaction network. Existing evidence for criticality is usually derived by focusing on one of the two aspects, either the network or the dynamic behavior. Moreover, there are inherent measurement artifacts that clutter the results.
In this talk we aim at raising awareness of the caveats of MRI-based analyses for proving or disproving the critical brain hypothesis, by demonstrating that some of the methodology used seems to overlook measurement characteristics inherent to MRI. We will also present new ways of looking at MRI data that have the potential of capturing the dynamical aspect together with the underlying network structure. Such integrated viewpoints offer the possibility of detecting the structural properties in the brain that researchers are looking for.

Christian Leibold
Hippocampal Sequences: Statistics and models
I will review the different types of sequence-like spike patterns that are observed in the hippocampal formation and how they are thought to be involved in memory formation, consolidation and planning. I will specifically address statistical problems in the verification of these sequences and present our current solution to the problem. In the last part I will present one class of mechanistic models for sequences and discuss its shortcomings.

Johannes Lengler
Rapid Formation of Sequential Memories
Each day we form hundreds of new memories, with areas CA1 and CA3 in the hippocampus playing a key role. The trace of a memory is formed in a one-shot manner, from events that last for seconds or less. On the other hand, the memories persist for many hours, since they are replayed during rest or at night, to be consolidated in long-term memory. It is observed that the replays have the same sequential structure as the original events, but they are much faster (by 3–4 orders of magnitude) and appear to be smoothed.
In my talk I will describe a model that allows such one-shot formation of sequential memories and their replays, reproducing many biological observations. The model needs to reconcile requirements that seem contradictory, like extreme plasticity and stability at the same time, and it must operate at timescales which span many orders of magnitude. The model is based on theoretical considerations, but has been validated with exponential integrate-and-fire neurons with realistic parameters.
The result is joint work with Marcelo Gauy, Felix Weissenberger, Florian Meier, Hafsteinn Einarsson, Angelika Steger, and Fatih Yanik.

Eva Löcherbach
Hawkes processes modeling systems of interacting neurons
We consider systems of interacting nonlinear Hawkes processes to model spike trains of neurons. After a short discussion of the model, I will speak about mean-field models and show how oscillations may arise for large populations. If there is time left, I will also discuss stability issues in the age-dependent case. The talk is based on joint work with S. Ditlevsen and M. Bonde Raad.
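A univariate Hawkes process can be simulated with Ogata's thinning algorithm; the sketch below uses the linear intensity lambda(t) = mu + sum over past events of alpha * exp(-beta (t - t_i)), with all parameter values chosen arbitrarily for illustration (the talk concerns interacting, possibly nonlinear systems; this is only the simplest single-process case).

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Ogata's thinning algorithm for a Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    rng = random.Random(seed)
    events = []
    t = 0.0

    def intensity(s):
        return mu + sum(alpha * math.exp(-beta * (s - ti)) for ti in events)

    while True:
        # Between events the intensity only decays, so its current value
        # is a valid upper bound for the thinning step.
        lam_bar = intensity(t)
        t += rng.expovariate(lam_bar)
        if t >= t_max:
            return events
        if rng.random() <= intensity(t) / lam_bar:  # accept w.p. lam/lam_bar
            events.append(t)

# Stable example: branching ratio alpha/beta = 0.5 < 1.
spikes = simulate_hawkes(mu=1.0, alpha=0.5, beta=1.0, t_max=10.0)
```

The condition alpha/beta < 1 keeps the process subcritical; in the mean-field systems of the talk, oscillations emerge from the interaction structure rather than from a single process like this one.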

Matthias Löwe
Various forms of associative memories
Associative memories have been an active research topic for at least the past 35 years. They are nowadays used in pattern classification, search problems and many related topics. We will discuss a very basic form that originally goes back to Hopfield and try to understand how its performance depends on the topology of the underlying network and the form of the information one wants to store.

Michael Messer
Multiscale change point detection in point processes
Neuronal spike trains often show temporal changes in their firing activity such as changes in the rate or in the regularity of spike occurrences.
Such changes in the parameters are believed to have crucial relevance for information processing in the nervous system and also impact statistical analyses which require stationarity of the underlying models.
Therefore, we are interested in localizing 'change points' in spike trains, i.e., points in time where the parameters change.
Since change points are typically observed in different time scales, a multiscale procedure was proposed: in the context of stochastic point process models a multiple filter test is discussed which tests the null hypothesis of constant parameters. After rejection of the null hypothesis, change points can be localized using a multiple filter algorithm.
In this talk we focus on the detection of changes in the rate, but also touch on related questions, e.g., the detection of changes in the regularity or the asymptotic behavior of the underlying auxiliary statistics under alternative scenarios of change points. Further, we discuss recent ideas for jointly detecting both changes in the rate and changes in the regularity.
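The idea of a filter-based rate statistic can be illustrated with a toy version: compare spike counts in a left and a right window sliding over the train. The window size, scaling and simulated data below are assumptions for illustration, not the authors' multiple filter procedure.

```python
import numpy as np

def rate_change_statistic(spike_times, t_max, h, grid_step):
    """Toy filtered-derivative statistic for rate changes: at each grid
    time t, compare the spike count in the right window (t, t+h] with
    the left window (t-h, t], under a Poisson-type variance scaling.
    Large absolute values indicate candidate change points."""
    spikes = np.asarray(spike_times)
    ts = np.arange(h, t_max - h, grid_step)
    stats = np.empty_like(ts)
    for i, t in enumerate(ts):
        left = np.sum((spikes > t - h) & (spikes <= t))
        right = np.sum((spikes > t) & (spikes <= t + h))
        stats[i] = (right - left) / np.sqrt(max(left + right, 1))
    return ts, stats

# Simulated spike train: rate 5 on [0, 10), rate 20 on [10, 20).
rng = np.random.default_rng(1)
spikes = np.sort(np.concatenate([rng.uniform(0, 10, 50),
                                 rng.uniform(10, 20, 200)]))
ts, stats = rate_change_statistic(spikes, t_max=20.0, h=2.0, grid_step=0.1)
peak = ts[np.argmax(np.abs(stats))]   # expected near the change at t = 10
```

The multiscale aspect of the actual procedure corresponds to running such filters for several window sizes h simultaneously and testing the maximum against a suitable threshold.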

Dirk Metzler
Genealogybased inference of population demography and adaptation
from genomic data
TBA

Jesper Møller
The cylindrical K-function and Poisson line cluster point processes
The analysis of point patterns with linear structures is of interest in many applications. To detect anisotropy in such cases, in particular in the case of a columnar structure, we introduce a functional summary statistic, the cylindrical K-function, which is a directional K-function whose structuring element is a cylinder. Further, we introduce a class of anisotropic Cox point processes, called Poisson line cluster point processes. The points of such a process are random displacements of Poisson point processes defined on the lines of a Poisson line process. Parameter estimation based on moment methods or Bayesian inference for this model is discussed when the underlying Poisson line process is latent. To illustrate the methodologies, we analyze two- and three-dimensional point pattern data sets. The three-dimensional data set is of particular interest as it relates to the minicolumn hypothesis in neuroscience, claiming that pyramidal and other brain cells have a columnar arrangement perpendicular to the surface of the brain.

Guido Montúfar
A Theory of Cheap Control in Embodied Systems
Given a body and an environment, what is the brain complexity needed in order to generate a desired set of behaviors? The general understanding is that the physical properties of the body and the environment correlate with the required brain complexity. More precisely, it has been pointed out that naturally evolved intelligent systems tend to exploit their embodiment constraints, and that this allows them to express complex behaviors with relatively concise brains. Although this principle of parsimonious control was formulated quite some time ago, the formalism required for making quantitative statements about the sufficient brain complexity given embodiment constraints has only recently begun to be developed. In this talk I present a mathematical approach that links the physical and behavioral constraints of an agent to the required controller complexity. Then I present experimental results with a virtual six-legged walking creature that provide evidence for the accuracy of the theoretical predictions.

Klaus Obermayer
Synchrony, spike-rate dynamics, and control: From single neurons to networks
How the properties of single neurons and interneuronal couplings give rise to different types of functionally relevant collective dynamics can be effectively studied using population activity models. In my talk I will first present a computational framework for constructing low-dimensional models for instantaneous population spike rates from networks of adaptive spiking model neurons, retaining a direct link between microscopic (neuron biophysics) and macroscopic (network activity) quantities. I will then use this framework to discuss (1) how changes in neuronal excitability can (de)stabilize different network states, (2) how external inputs based on electric fields can modulate neural activity, and (3) how these perturbations can potentially be used to control the network dynamics. The talk will cover neural processes occurring on three spatial scales: single neurons, populations of recurrently connected neurons, and so-called whole-brain networks.

Cornelia Pokalyuk & Anton Wakolbinger
Balancing selection, reinfection and host replacement in a hierarchical host-parasite model
We analyze a model for a population of hosts, each of them infected by a large number of parasites that come in two different types and reproduce randomly (and quickly) within hosts. Whenever a host dies it is replaced by a host that is infected by a single parasite type. The resulting decrease in diversity is counteracted by reinfections between hosts, together with a moderately strong balancing selection of the two parasite types within hosts. Using a graphical representation for the random genealogy we obtain a limit law for the host's type dynamics as the number of hosts becomes large, and identify the deterministic dynamical system that governs the host type frequencies. Our model is inspired by thoughts on a possible survival strategy of the human cytomegalovirus, an old herpesvirus that is carried by a substantial fraction of mankind and manages to maintain a high diversity in its coding regions.

Silke Rolles
Processes with reinforcement
Vertex-reinforced jump processes are stochastic processes in continuous time that prefer to jump to sites that have accumulated a large local time. These processes have interesting connections to other models, namely to linearly edge-reinforced random walks, random walks in random environment and a supersymmetric hyperbolic sigma model. In the talk, I will present the different models and the connections between them.
Based on joint work with Margherita Disertori, Franz Merkl, and
Pierre Tarres.
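On a finite graph, a vertex-reinforced jump process can be simulated exactly: the local time of a neighbor y only grows while the walker sits at y, so the jump rates out of the current site are constant between jumps. The toy graph and parameter choices below are illustrative assumptions, not material from the talk.

```python
import random

def simulate_vrjp(neighbors, start, t_max, seed=0):
    """Vertex-reinforced jump process: from site x, jump to a neighbor y
    at rate L_y(t), the local time accumulated at y (initialized to 1).
    Since L_y only grows while the walker sits at y, the rates out of x
    are constant between jumps, making the simulation exact."""
    rng = random.Random(seed)
    local_time = {v: 1.0 for v in neighbors}
    x, t, path = start, 0.0, [start]
    while True:
        rates = [(y, local_time[y]) for y in neighbors[x]]
        total = sum(r for _, r in rates)
        hold = rng.expovariate(total)          # exponential holding time at x
        if t + hold >= t_max:
            local_time[x] += t_max - t         # accrue local time until t_max
            return path, local_time
        local_time[x] += hold                  # local time accrues at x
        t += hold
        u = rng.uniform(0.0, total)            # jump to y w.p. prop. to L_y
        for y, r in rates:
            u -= r
            if u <= 0.0:
                x = y
                path.append(y)
                break

# Toy example: VRJP on a cycle of 6 sites, started at site 0.
n = 6
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
path, local_times = simulate_vrjp(nbrs, 0, t_max=5.0)
```

The reinforcement is visible in the local times: sites visited early accumulate local time and thereby attract further jumps, which is the mechanism linking these processes to edge-reinforced random walks.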