Network Dynamics - Challenges for our understanding
We combine first principles theory with data-driven analysis, prediction and design to achieve a fundamental understanding of complex dynamical systems and work towards solutions of real-world problems. Focus application areas include biological and bio-inspired computing, collective mobility and transport that is fair and efficient, energy systems as well as essential requirements underlying systemic sustainability.
(see also the upcoming website http://systainability.org)
Natural, technical and socio-economic networks typically pose a number of challenges for understanding their collective dynamics because they simultaneously exhibit several properties that complicate analysis:
- high dimensionality, a large number of variables and parameters describing the system
- complicated topology of the interaction network
- interaction delays in the communication or interaction between units
- heterogeneities, random, undetermined or inexact properties of the units, interactions or external factors
- stochasticity, random or otherwise unpredictable aspects of the dynamics
- restructuring & adaptivity, changes of the system itself, for instance of its topology
Standard theoretical approaches often neglect many of these properties and consider only averages. For instance, given an ensemble of networks, one might ask for an average property across those networks. One key focus of our research is to take into account the microscopic properties of individual links and individual units, individual events as well as individual realizations or distributions of networks and their properties, going far beyond simple averages.
Energy Systems and Sustainability
Energy fundamentally underlies all aspects of life; thus its sustainable generation and reliable distribution are indispensable. The drastic change from our traditional energy system based on fossil fuels to one based dominantly on renewable sources provides an extraordinary challenge for the design and robust operation of future power grids (Pepermans et al., Energy Policy 33:787, 2005; Butler, Nature 445:586, 2007) and energy systems in general. Power grids will be fed by many small power generators based on renewable energy sources such as wind turbines or wind parks, photovoltaic arrays, biogas plants and hydro power, whose feed-in is temporally and spatially distributed and largely unpredictable. The distributed nature of renewable sources makes the grid less controllable by central means because non-local recurrent feedback becomes more influential. How to economically and reliably design and robustly operate such complex power networks is currently not well understood.
Our research focuses on principles of self-organization in multi-dimensional dynamical systems to characterize fundamental features of future-compliant energy systems, in particular power grids. For instance, we have started to disentangle the implications of the smallness of renewable energy sources from those of their distributedness for collective grid dynamics and robustness, characterized the non-local nature of failure spreading, and identified Braess' paradox in oscillator networks, where adding new transmission lines can destabilize grid operation (Phys. Rev. Lett., 2012 (Editors' Suggestion); New J. Phys. 2012; Chaos 2014; Eur. Phys. J. ST 2014). We currently investigate how to predict critical links in highly recurrent supply networks (Phys. Rev. Lett. 2016) and the mutual feedback between economic and dynamic stability when coupling dynamical grids to markets via energy trading, as proposed for the near future. We also investigate how local perturbations dynamically spread through a network, first for power grids and more generally for network dynamical systems.
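As an illustration of the kind of oscillator models used in this line of work, the following minimal sketch integrates a second-order Kuramoto-type ("swing equation") network of generators and consumers. The topology, power injections and parameters below are chosen purely for illustration and are not taken from the cited studies.

```python
import numpy as np

def simulate_grid(adj, power, K=4.0, alpha=1.0, dt=0.01, T=50.0):
    """Euler integration of the oscillator ("swing equation") model
    d^2(theta_i)/dt^2 = P_i - alpha * d(theta_i)/dt
                        + K * sum_j A_ij * sin(theta_j - theta_i)."""
    n = len(power)
    theta = np.zeros(n)
    omega = np.zeros(n)
    for _ in range(int(T / dt)):
        coupling = K * (adj * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        domega = power - alpha * omega + coupling
        theta, omega = theta + dt * omega, omega + dt * domega
    return theta, omega

# Toy 4-node ring: two generators (P > 0) feeding two consumers (P < 0);
# the injected and consumed power balance, so a synchronous state can exist.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
power = np.array([1.0, -1.0, 1.0, -1.0])

theta, omega = simulate_grid(adj, power)
print("residual frequency deviation:", np.abs(omega).max())
```

For sufficiently strong coupling K the frequencies relax to the common grid frequency (all omega close to zero); weakening K or removing lines in such a model is one way to probe loss of synchrony.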
Networked Public Mobility
How to move people or things from A to B is changing rapidly. Future mobility and transport systems will be more distributed than today's, vehicles will be more often shared and use alternative propulsion systems, traffic as a whole will be more autonomous and different vehicles and modes of transport will be more mutually coupled.
These changes in mobility, with a market expected to grow to a trillion US dollars within the next decade, raise a multitude of questions about collective dynamical phenomena that are crucial for normal traffic operation and reliable individual transport yet fundamentally unsolved today. Combining theoretical modeling with data-driven analysis, we are investigating options for more effective, fair and transparent mobility, for instance combining the best of taxis and buses (see also: EcoBus Research @MPIDS), and addressing challenges for coupled autonomous traffic.
Beyond von Neumann BioComputing
Reliable network function from unreliable and unstable elements
Biological systems in general and neuronal circuits in particular exhibit collective dynamics that is self-organized, distributed, decentrally controlled and often microscopically unreliable. Yet they typically exhibit robust and predictable functions and can reliably solve a variety of computational tasks.
How can bio-inspired dynamical systems coordinate activity and thus effectively process, route and transmit information?
Standard computational paradigms in computer science and engineering are based on the legacy of von Neumann and Turing. From a dynamical systems' perspective, this legacy requires deterministic, reliable processes that are executed at discrete times provided by some external clock and are based on the rapid convergence to stable states (like the state changes of standard transistors). With increasing processing speeds dictated by Moore's law and decreasing component sizes soon reaching the size of one or a few molecules, elements intrinsically become unreliable and fluctuating, local processes mutually asynchronous and collective states unstable. Dynamical systems arising in nature naturally come with instabilities and intrinsic noise and mostly do not offer control options via external clocking. Traditional theoretical concepts of neural and neuro-inspired computation thus see irregular and transient dynamics, unstable states, and the absence of a central clock as problems to be avoided to achieve a desired computation. Over the last 15 years, several alternative, distributed ways of computation have been suggested.
Self-organized Dynamics, Distributed Control and Heteroclinic Computing
We are working on two major frontiers. First, we use principles of self-organization, adaptation and learning to understand and control high-dimensional and distributed systems, including versatile autonomous robot locomotion (Nature Physics 2010) and oscillatory biological systems (Nature Communications, 2017). Second, we are exploring the concept of heteroclinic computing, where dynamic switching controlled by high-dimensional input signals robustly guides system trajectories through a network of (saddle) states (Phys. Rev. Lett. 2012).
Heteroclinic computing in principle enables intrinsically parallel computation in a robust way, exhibits a scalable encoding space and offers a new perspective on how neuro-inspired systems may reliably compute (Schittler-Neves et al., in prep.). Hardware realizations are particularly interesting: once open questions on memory, decoding and suitable substrates are solved, they may enable an entirely novel form of high-dimensional analog computation.
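A minimal sketch of the underlying dynamical structure: in the classic May-Leonard (cyclic Lotka-Volterra) system, trajectories follow a heteroclinic cycle, visiting a sequence of saddle states in order. The parameters and the small noise floor below are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

def may_leonard(x0, a=1.5, b=0.6, dt=0.01, steps=15000, floor=1e-6):
    """Cyclic Lotka-Volterra ("May-Leonard") system with an attracting
    heteroclinic cycle between the three saddle states e1, e2, e3
    (requires 0 < b < 1 < a and a + b > 2).  The small floor mimics
    noise, so the trajectory keeps switching instead of slowing down
    indefinitely near the saddles."""
    x = np.array(x0, dtype=float)
    dominant = []
    for _ in range(steps):
        growth = 1.0 - x - a * np.roll(x, -1) - b * np.roll(x, 1)
        x = np.maximum(x + dt * x * growth, floor)
        dominant.append(int(np.argmax(x)))
    return dominant

seq = may_leonard([1.0, 1e-3, 1e-3])
# Collapse consecutive repeats to see the order in which saddles are visited.
order = [s for i, s in enumerate(seq) if i == 0 or s != seq[i - 1]]
print("saddle visiting order:", order)
```

The trajectory dwells near one saddle at a time and switches cyclically; in heteroclinic computing, input signals bias such switching decisions, so the visited sequence encodes the result of a computation.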
Distributed Dynamics and Function in Neural Circuits
Spatially and temporally coordinated patterns of neural activity are key to information processing in the brain. Yet, their dynamical origin and more generally, the mechanisms underlying how distributed activity yields nonlinear computations in the brain are far from fully understood. We study how heterogeneous interaction networks, non-additive coupling and other nonlinearities act together to induce specific coordinated activity patterns in cortical networks and the primary circuits of olfactory sensory processing.
Does non-additive coupling support information transmission?
Neurophysiological experiments (starting with Ariav et al., J. Neurosci. 23:7750, 2003) found that under certain conditions the neuronal dendrites – branched projections of the neuron that transmit inputs from other neurons to the cell body (soma) – process input signals in a non-additive way: If the inputs arrive within a time window of a few milliseconds, the dendrite can actively generate a fast dendritic spike that propagates to the neuronal soma and leads to a nonlinearly amplified response. This response is temporally highly precise and supports the detection of synchronous inputs by a neuron. The impact of these local dynamic features makes them a key candidate for rendering information transmission in neural circuits robust and reliable.
Fundamental analytical studies of single neurons and neural circuits with non-additive dendritic interactions (PLoS Comput. Biol., 2012; Phys. Rev. X, 2012; Phys. Rev. X, 2014) reveal how non-additive dendritic interactions enable guided signal propagation and information transmission even in entirely random cortical circuits. Further simulational studies indicate that this local neuron property, which dynamically changes processing with input synchrony, enhances circuit function as relevant in hippocampal and other neocortical circuits. Our studies thus add a novel perspective on the dynamics of networks with nonlinear interactions in general and offer a viable route for the occurrence of patterns of precisely timed spikes in recurrent networks under physiologically plausible conditions. These results may have far-reaching consequences for how various neural circuits compute (Frontiers Comput. Neurosci. 2013; PLoS Comput. Biol. 2014 & 2015; J. Neurosci. 2015).
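The following toy model sketches the non-additivity described above: inputs arriving within a few milliseconds trigger an amplified "dendritic spike" response, while the same inputs arriving dispersed in time are summed linearly. The threshold, window and amplification factor are illustrative assumptions, not physiological measurements or the models of the cited papers.

```python
def dendritic_response(input_times, weight=1.0, window=2.0,
                       spike_threshold=3, amplification=2.5):
    """Toy non-additive dendrite: inputs (spike times in ms) are summed
    linearly unless at least `spike_threshold` of them arrive within
    `window` milliseconds, in which case a dendritic spike nonlinearly
    amplifies the response.  All numbers are illustrative."""
    times = sorted(input_times)
    linear = weight * len(times)
    for i in range(len(times)):
        # Count inputs falling into the window starting at times[i].
        n_sync = sum(1 for t in times[i:] if t - times[i] <= window)
        if n_sync >= spike_threshold:
            return amplification * linear, True   # dendritic spike fired
    return linear, False                          # ordinary linear summation

# Three synchronous inputs (within 2 ms) trigger the amplified response...
print(dendritic_response([10.0, 10.5, 11.2]))   # -> (7.5, True)
# ...while the same number of dispersed inputs is summed linearly.
print(dendritic_response([10.0, 20.0, 30.0]))   # -> (3.0, False)
```

This all-or-none amplification is what makes such a unit a synchrony detector: only temporally coordinated input patterns are propagated with high gain.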
Selective connectivity - a universal encoder for distributed circuit function?
The antennal lobes of insects and the analogous olfactory bulbs of vertebrates constitute the first recurrent circuits for sensory odor processing in animals. As a whole, these first processing circuits exhibit a number of surprising collective features, but how these emerge remains an open question. For instance, the antennal lobe is not only capable of separating sensory input representations of two different odor signals, as has long been assumed. Our joint work with the Molecular Neurobiology group of Andre Fiala (University of Göttingen) now suggests that the same antennal lobe may also join representations of other pairs of odors (Niewalda et al., PLoS ONE, 2011). Further, studies on the olfactory bulb have shown that gradual mixtures of two odors generate representations among projection neuron activities that split a large set of mixing ratios into two distinct representational classes. Finally, first olfactory processing circuits are moreover capable of separating inputs into classes of input concentration even for the same input odor. Taken together, these studies suggest that the same circuit processes information in distinctly different, nonlinear ways depending only on the inputs.
How do these nonlinear processing modes emerge in recurrent circuits?
Our current theoretical studies actively use the fact that the connectivity of interneurons is highly heterogeneous (Chou et al., Nature Neurosci. 13:439, 2010). Instead of assuming "average" inhibition, we take into account the concrete and thus selective realization of inhibitory connectivity, which is not homogeneous across the first processing circuit. Our current hypothesis is that, together with a known nonlinear local filtering property, such high wiring specificity may explain all of the above unexplained collective functions at once (e.g., Chou et al., in preparation).
Inverse Problems on Network Dynamics — Reconstruction, Control and Design
Two inverse problems of Network Dynamics:
What can we tell about network interaction structure from observing its dynamics?
How can we design networked systems to achieve desired collective dynamics?
Solving an inverse problem means identifying the elements of a system (and its input) such that either (i) the system fits experimental observations and simultaneously satisfies known constraints, or (ii) the system generates specific desired behavior. For instance, (i) observations of how a solid scatters incoming radiation reveal information about its structure, and (ii) a robot's sensors, actuators, processors and mechanics are designed such that it can fulfill certain tasks.
Inverse problems have a long history in the natural sciences and mathematics and are implicit to many engineering problems. Yet, the vast majority of research on the collective nonlinear dynamics of networks still focuses on the "forward direction" of analysis or modeling and asks what types of collective dynamics emerge from a network of given units interacting via a given topology.
We go beyond this forward perspective and address two new directions of research, studying the relation between interaction topologies and dynamics of networked systems in complementary, inverse ways. One is network reconstruction, revealing the existence and possibly the type of network interactions from access to the collective dynamics of the system only (e.g., Phys. Rev. Lett. 2007; New J. Phys. 2011; Frontiers Comput. Neurosci. 2011; J. Phys. A 2014 (Topical Review); Science Advances 2017). A second is network design, setting up the interactions of a network such that it robustly exhibits a given collective dynamics and thus function (Phys. Rev. Lett. 2006; Physica D 2006; New J. Phys. 2011; IEEE Trans. Autom. Control 2017). Interestingly, the theoretical framework we derived for network inference – one inverse problem – seems to also provide a natural viewpoint on how to design networks – a second inverse problem. These two research areas are still in their infancy and at the core of fundamental theory research in the Network Dynamics team.
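To make the reconstruction idea concrete, here is a minimal sketch for the simplest possible setting, linear dynamics dx/dt = A x: the coupling matrix can be recovered from an observed trajectory alone by least squares on finite-difference derivative estimates. This is a textbook-style illustration under idealized, noise-free assumptions, not the method of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def rk4_step(f, x, dt):
    """Classical Runge-Kutta step for dx/dt = f(x)."""
    k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
    return x + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)

# Hidden ground truth: linear network dynamics dx/dt = A x with a randomly
# chosen, stabilized coupling matrix A (a stand-in for a real system).
n = 4
A = rng.normal(size=(n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 0.5) * np.eye(n)  # stabilize

dt = 1e-3
X = [rng.normal(size=n)]
for _ in range(2000):
    X.append(rk4_step(lambda x: A @ x, X[-1], dt))
X = np.array(X).T                       # shape (n, number of samples)

# Reconstruction from the trajectory alone: estimate derivatives by central
# differences and solve dX = A_hat X in the least-squares sense.
dX = (X[:, 2:] - X[:, :-2]) / (2 * dt)
A_hat = dX @ np.linalg.pinv(X[:, 1:-1])

print("max reconstruction error:", np.abs(A_hat - A).max())
```

Real systems are nonlinear, noisy and only partially observed, which is exactly what makes network reconstruction a hard and active research problem beyond this idealized case.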
Even basic questions about multi-dimensional dynamical systems are non-trivial to answer. We recently contributed by proposing a method to infer the network size (the number of its dynamical variables) from recorded time series (Haehne et al., Phys. Rev. Lett. 2019).
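A toy illustration of why such inference is possible at all (this is only a sketch of the general idea, not the method of Haehne et al.): for linear dynamics observed at a single node, the numerical rank of a Hankel matrix built from the scalar time series is bounded by, and generically equal to, the number of dynamical variables.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden ground truth: a stable 5-variable linear map, observed at node 0 only.
n_true = 5
M = rng.normal(size=(n_true, n_true))
M /= 1.2 * np.abs(np.linalg.eigvals(M)).max()      # make the map stable

x = rng.normal(size=n_true)
y = []
for _ in range(40):                                 # record only node 0
    y.append(x[0])
    x = M @ x

# Hankel matrix H[i, j] = y[i + j]; its rank cannot exceed the number of
# dynamical variables, so counting significant singular values estimates it.
H = np.array([[y[i + j] for j in range(20)] for i in range(20)])
sv = np.linalg.svd(H, compute_uv=False)
n_est = int((sv > 1e-8 * sv[0]).sum())
print("estimated network size:", n_est)            # generically n_true
```

The gap between the significant singular values and the numerical-noise floor is what makes the count robust in this idealized, noise-free setting; measurement noise and nonlinearity make the real inference problem much harder.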