Markov chain aggregation for agent-based models

Banisch S (2014)
Bielefeld: Universitätsbibliothek Bielefeld.

Bielefeld e-dissertation | English
 
Abstract / Remarks
This thesis introduces a Markov chain approach that allows a rigorous analysis of a class of agent-based models (ABMs). It provides a general framework of aggregation in agent-based and related computational models by making use of Markov chain aggregation and lumpability theory in order to link the micro and the macro level of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent model, which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. This is referred to as the micro chain, and an explicit formal representation including microscopic transition rates can be derived for a class of models by using the random mapping representation of a Markov process. The explicit micro formulation enables the application of the theory of Markov chain aggregation -- namely, lumpability -- in order to reduce the state space of the micro chain and relate microscopic descriptions to a macroscopic formulation of interest. Well-known conditions for lumpability make it possible to establish the cases where the macro model is still Markov, and in this case we obtain a complete picture of the dynamics including the transient stage, the most interesting phase in applications. For this purpose a crucial role is played by the probability distribution ω used to implement the stochastic part of the model, which defines the updating rule and governs the dynamics. Namely, if we decide to remain at a Markovian level, then the partition, or equivalently, the collective variables used to build the macro model must be compatible with the symmetries of ω. This underlines the theoretical importance of homogeneous or complete mixing in the analysis of »voter-like« models in use in population genetics, evolutionary game theory and social dynamics. On the other hand, if a favored level of observation is not compatible with the symmetries in ω, a certain amount of memory is introduced by the transition from the micro level to such a macro description, and this is the fingerprint of emergence in ABMs. The resulting divergence from Markovianity can be quantified using information-theoretic measures, and the thesis presents a scenario in which these measures can be explicitly computed.
Two simple models are used to illustrate these theoretical ideas: the voter model (VM) and an extension of it called the contrarian voter model (CVM). Using these examples, the thesis shows that Markov chain theory allows for a rather precise understanding of the model dynamics in the case of »simple« population structures where a tractable macro chain can be derived. Constraining the system by interaction networks with a strong local structure leads to the emergence of meta-stable states in the transient regime of the model. Constraints on the interaction behavior such as bounded confidence or assortative mating lead to the emergence of new absorbing states in the associated macro chain and are related to stable patterns of polarization (stable co-existence of different opinions or species).
Constraints and heterogeneities in the microscopic system and complex social interactions are the basic characteristics of ABMs, and the Markov chain approach to link the micro chain to a macro-level description (and likewise the failure of a Markovian link) highlights the crucial role played by these ingredients in the generation of complex macroscopic outcomes.
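
To make the micro-to-macro construction described in the abstract concrete, the following Python sketch (an illustration written for this summary, not code from the thesis; the agent number N, the ordered-pair copy update and all variable names are assumptions chosen for brevity) builds the full micro chain of the voter model under homogeneous mixing, checks the strong lumpability condition for the partition induced by the number of agents holding opinion 1, and projects the micro chain onto the corresponding macro chain, a birth-death chain with absorbing consensus states.

    # Minimal sketch of micro chain construction and lumping for the voter model (VM)
    # with homogeneous (complete) mixing. Not code from the thesis; N, the update
    # rule details and all names are illustrative assumptions.
    from itertools import product
    import numpy as np

    N = 4                                       # small, so the 2^N micro chain stays tractable
    configs = list(product([0, 1], repeat=N))   # micro state space: all agent configurations
    index = {c: i for i, c in enumerate(configs)}

    # Micro chain: one step picks an ordered pair (i, j) of distinct agents uniformly
    # at random and agent i adopts agent j's opinion.
    P_micro = np.zeros((2**N, 2**N))
    for c in configs:
        for i in range(N):
            for j in range(N):
                if i == j:
                    continue
                new = list(c)
                new[i] = c[j]                   # agent i copies agent j
                P_micro[index[c], index[tuple(new)]] += 1.0 / (N * (N - 1))

    # Macro observable: k = number of agents holding opinion 1.
    k_of = np.array([sum(c) for c in configs])

    # Strong lumpability check: for every macro pair (k, l), the probability of jumping
    # from a micro configuration with k ones into the set of configurations with l ones
    # must not depend on which of those configurations we start in.
    P_macro = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        rows = np.where(k_of == k)[0]
        for l in range(N + 1):
            cols = np.where(k_of == l)[0]
            block_sums = P_micro[np.ix_(rows, cols)].sum(axis=1)
            assert np.allclose(block_sums, block_sums[0]), "partition not lumpable"
            P_macro[k, l] = block_sums[0]

    print(np.round(P_macro, 3))  # birth-death chain on k; k = 0 and k = N are absorbing

For a partition that is not compatible with the symmetries of the update rule, the lumpability assertion fails, which corresponds to the situation described in the abstract where the macro-level process is no longer Markovian.
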
Keywords
Markov chains; Agent-based models; aggregation; lumpability; complexity; voter model; emergence
Year
2014
Page URI
https://pub.uni-bielefeld.de/record/2690117

Cite

Banisch S. Markov chain aggregation for agent-based models. Bielefeld: Universitätsbibliothek Bielefeld; 2014.
All files available under the following license(s):
Copyright Statement:
This object is protected by copyright and/or related rights. [...]
Full text(s)
Access Level
OA Open Access
Last Uploaded
2019-09-06T09:18:25Z
MD5 Checksum
8641eed905f01f28bbcf105438f0d9a3

