Encyclopedia of Complexity and Systems Science

Living Edition
| Editors: Robert A. Meyers

Agent-Based Modeling, Mathematical Formalism for

  • Reinhard Laubenbacher
  • Abdul S. Jarrah
  • Henning S. Mortveit
  • S. S. Ravi
Living reference work entry
DOI: https://doi.org/10.1007/978-3-642-27737-5_10-5


Definition of the Subject

Agent-based simulations are generative or computational approaches used for analyzing “complex systems.” What is a “system”? Examples of systems include a collection of molecules in a container, the population in an urban area, and the brokers in a stock market. The entities or agents in these three systems would be molecules, individuals, and stock brokers, respectively. The agents in such systems interact in the sense that molecules collide, individuals come into contact with other individuals, and brokers trade shares. Such systems, often called multiagent systems, are not necessarily complex. The label “complex” is typically attached to a system if the number of agents is large, if the agent interactions are involved, or if there is a large degree of heterogeneity in agent character or their interactions.

This is of course not an attempt to define a complex system. Currently, there is no generally agreed upon definition of complex systems. It is not the goal of this entry to provide such a definition – for our purposes, it will be sufficient to think of a complex system as a collection of agents interacting in some manner that involves one or more of the complexity components just mentioned, that is, with a large number of agents, heterogeneity in agent character and interactions, and possibly stochastic aspects to all these parts. The global properties of complex systems, such as their global dynamics, emerge from the totality of local interactions between individual agents over time. While these local interactions are well understood in many cases, little is known about the emerging global behavior arising through interaction. Thus, it is typically difficult to construct global mathematical models such as systems of ordinary or partial differential equations, whose properties one could then analyze. Agent-based simulations are one way to create computational models of complex systems that take their place.

An agent-based simulation, sometimes also called an individual-based or interaction-based simulation (which we prefer), of a complex system is in essence a computer program that realizes some (possibly approximate) model of the system to be studied, incorporating the agents and their rules of interaction. The simulation might be deterministic (i.e., the evolution of agent states is governed by deterministic rules) or stochastic. The typical way in which such simulations are used is to initialize the computer program with a particular assignment of agent states and to run it for some time. The output is a temporal sequence of states for all agents, which is then used to draw conclusions about the complex system one is trying to understand. In other words, the computer program is the model of the complex system, and by running the program repeatedly, one expects to obtain an understanding of the characteristics of the complex system.

There are two main drawbacks to this approach. First, it is difficult to validate the model. Simulations for most systems involve quite complex software constructs that pose challenges to code validation. Second, there are essentially no rigorous tools available for an analysis of model properties and dynamics. There is also no widely applicable formalism for the comparison of models. For instance, if one agent-based simulation is a simplification of another, then one would like to be able to relate their dynamics in a rigorous fashion. We are currently lacking a mathematically rich formal framework that models agent-based simulations. This framework should have at its core a class of mathematical objects to which one can map agent-based simulations. The objects should have a sufficiently general mathematical structure to capture key features of agent-based simulations and, at the same time, should be rich enough to allow the derivation of substantial mathematical results. This entry presents one such framework, namely, the class of time-discrete dynamical systems over finite state sets.

The building blocks of these systems consist of a collection of variables (mapping to agents), a graph that captures the dependence relations of agents on other agents, a local update function for each agent that encapsulates the rules by which the state of each agent evolves over time, and an update discipline for the variables (e.g., parallel or sequential). We will show that this class of mathematical objects is appropriate for the representation of agent-based simulations and, therefore, complex systems, and is rich enough to pose and answer relevant mathematical questions. This class is sufficiently rich to be of mathematical interest in its own right and much work remains to be done in studying it. We also remark that many other frameworks such as probabilistic Boolean networks (Shmulevich et al. 2002a) fit inside the framework described here.

Introduction

Computer simulations have become an integral part of today’s research and analysis methodologies. The ever-increasing demands arising from the complexity and sheer size of the phenomena studied constantly push computational boundaries, challenge existing computational methodologies, and motivate the development of new theories to improve our understanding of the potential and limitations of computer simulation. Interaction-based simulations are being used to simulate a variety of biological systems such as ecosystems and the immune system, social systems such as urban populations and markets, and infrastructure systems such as communication networks and power grids.

To model or describe a given system, one typically has several choices in the construction and design of agent-based models and representations. When agents are chosen to be simple, the simulation may not capture the behavior of the real system. On the other hand, the use of highly sophisticated agents can quickly lead to complex behavior and dynamics. Also, use of sophisticated agents may lead to a system that scales poorly. That is, a linear increase in the number of agents in the system may require a nonlinear (e.g., quadratic, cubic, or exponential) increase in the computational resources needed for the simulation.

Two common methods, namely, discrete-event simulation and time-stepped simulation, are often used to implement agent-based models (Bagrodia 1998; Jefferson 1985; Nance 1993). In the discrete-event simulation method, each event that occurs in the system is assigned a time of occurrence. The collection of events is kept in increasing order of their occurrence times. (Note that an event occurring at a certain time may give rise to events which occur later.) When all the events that occur at a particular time instant have been carried out, the simulation clock is advanced to the next time instant in the order. Thus, the differences between successive values of the simulation clock may not be uniform. Discrete-event simulation is typically used in contexts such as queuing systems (Misra 1986). In the time-stepped method of simulation, the simulation clock is always advanced by the same amount. For each value of the simulation clock, the states of the system components are computed using equations that model the system. This method of simulation is commonly used for studying, e.g., fluid flows or chemical reactions. The choice of model (discrete event vs. time stepped) is typically guided by an analysis of the computational speed they can offer, which in turn depends on the nature of the system being modeled (see, e.g., Guo et al. 2000).
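The contrast between the two disciplines is easy to see in code. The following Python sketch is purely illustrative (all names are ours, not those of any particular simulation package): the time-stepped loop advances a uniform clock and recomputes the system at every tick, while the discrete-event loop keeps pending events in a priority queue ordered by occurrence time and lets the clock jump from event to event.

import heapq

# Time-stepped simulation: the clock always advances by the same amount,
# and the system state is recomputed at every tick.
def time_stepped(state, update, num_steps, dt=1.0):
    t = 0.0
    for _ in range(num_steps):
        state = update(state, t)   # equations/rules modeling the system
        t += dt                    # uniform clock increments
    return state

# Discrete-event simulation: events are kept in increasing order of their
# occurrence times; handling an event may schedule new, later events, and
# the clock jumps directly to the next pending occurrence time.
def discrete_event(initial_events, handle, t_end):
    counter = 0                    # tie-breaker for simultaneous events
    queue = []
    for t, ev in initial_events:
        heapq.heappush(queue, (t, counter, ev))
        counter += 1
    while queue:
        t, _, ev = heapq.heappop(queue)
        if t > t_end:
            break
        for t_new, ev_new in handle(ev, t):   # events spawned at time t
            heapq.heappush(queue, (t_new, counter, ev_new))
            counter += 1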

Toolkits for general purpose agent-based simulations include Swarm (Ebeling and Schweitzer 2001; Minar et al. 1996) and Repast (North et al. 2006). Such toolkits allow one to specify more complex agents and interactions than would be possible using, e.g., ordinary differential equations models. In general, it is difficult to develop a software package that is capable of supporting the simulation of a wide variety of physical, biological, and social systems.

Standard or classical approaches to modeling are often based on continuous techniques and frameworks such as ordinary differential equations (ODEs) and partial differential equations (PDEs). For example, there are PDE-based models for studying traffic flow (Gupta and Katiyar 2005; Keyfitz 2004; Whitham 1999). These can accurately model the emergence of traffic jams for simple road/intersection configurations through, for example, the formation of shocks. However, these models fail to scale to the size and the specifications required to accurately represent large urban areas. Even if they hypothetically were to scale to the required size, the answers they provide (e.g., car density on a road as a function of position and time) cannot answer questions pertaining to specific travelers or cars. Questions of this type can be naturally described and answered through agent-based models. An example of such a system is TRANSIMS (see section “TRANSIMS (Transportation Analysis and Simulation System)”), where an agent-based simulation scheme is implemented through a cellular automaton model. Another well-known example of the change in modeling paradigms from continuous to discrete is given by lattice gas automata (Frish et al. 1986) in the context of fluid dynamics.

Stochastic elements are inherent in many systems, and this usually is reflected in the resulting models used to describe them. A stochastic framework is a natural approach in the modeling of, for example, noise over a channel in a simulation of telecommunication networks (Barrett et al. 2002). In an economic market or a game-theoretic setting with competing players, a player may sometimes decide to provide incorrect information. The state of such a player may therefore be viewed and modeled by a random variable. A player may make certain probabilistic assumptions about other players' environment. In biological systems, certain features and properties may only be known up to the level of probability distributions. It is only natural to incorporate this stochasticity into models of such systems.

Since applications of stochastic discrete models are common, it is desirable to obtain a better understanding of these simulations both from an application point of view (reliability, validation) and from a mathematical point of view. However, an important disadvantage of agent-based models is that there are few mathematical tools available at this time for the analysis of their dynamics.

Examples of Agent-Based Simulations

In order to provide the reader with some concrete examples that can also be used later on to illustrate theoretical concepts, we describe here three examples of agent-based descriptions of complex systems, ranging from traffic networks to the immune system and voting schemes.

TRANSIMS (Transportation Analysis and Simulation System)

TRANSIMS is a large-scale computer simulation of traffic on a road network (Nagel and Wagner 2006; Nagel et al. 1997; Rickert et al. 1996). The simulation works at the resolution level of individual travelers and has been used to study large US metropolitan areas such as Portland, OR, Washington, DC, and Dallas/Fort Worth. A TRANSIMS-based analysis of an urban area requires (i) a population, (ii) a location-based activity plan for each individual for the duration of the analysis period, and (iii) a network representation of all transportation pathways of the given area. The data required for (i) and (ii) are generated based on, e.g., extensive surveys and other information sources. The network representation is typically very close to a complete description of the real transportation network of the given urban area.

TRANSIMS consists of two main modules: the router and the cellular automaton-based micro-simulator. The router maps each activity plan for each individual (obtained typically from activity surveys) into a travel route. The micro-simulator executes the travel routes and sends each individual through the transportation network so that its activity plan is carried out. This is done in such a way that all constraints imposed on individuals from traffic driving rules, road signals, fellow travelers, and public transportation schedules are respected. The time scale is typically 1 s.

The micro-simulator is the part of TRANSIMS responsible for the detailed traffic dynamics. Its implementation is based on cellular automata which are described in more detail in section “Cellular Automata.” Here, for simplicity, we focus on the situation where each individual travels by car. The road network representation is in terms of links (e.g., road segments) and nodes (e.g., intersections). The network description is turned into a cell-network description by discretizing each lane of every link into cells. A cell corresponds to a 7.5 m lane segment and can have up to four neighbor cells (front, back, left, and right).

The vehicle dynamics is specified as follows. Vehicles travel with discrete velocities 0, 1, 2, 3, 4, or 5, which are constant between time steps. Each update time step brings the simulation one time unit forward. If the time unit is 1 s, then the maximum speed of $v_{\max} = 5$ cells per time unit corresponds to an actual speed of 5 × 7.5 m/s = 37.5 m/s, which is 135 km/h or approximately 83.9 mph.

Ignoring intersection dynamics, the micro-simulator executes three functions for each vehicle in every update: (a) lane-changing, (b) acceleration, and (c) movement. These functions can be implemented through four cellular automata, one each for lane change decision and execution, one for acceleration, and one for movement. For instance, the acceleration automaton works as follows. A vehicle in TRANSIMS can increase its speed by at most 1 cell per second, but if the road ahead is blocked, the vehicle can come to a complete stop in the same time. The function that is applied to each cell that has a car in it uses the gap ahead and the maximal speed to determine if the car will increase or decrease its velocity. Additionally, a car may have its velocity decreased one unit as determined by a certain deceleration probability. The random deceleration is an important element of producing realistic traffic flow. A major advantage of this representation is that it leads to very lightweight agents, a feature that is critical for achieving efficient scaling.
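As an illustration, the velocity update just described fits in a few lines. The sketch below is a hedged rendering of the rule outlined above (accelerate by at most one unit, never exceed the gap ahead, decelerate at random); the value of the deceleration probability is our own placeholder, not a TRANSIMS calibration constant.

import random

V_MAX = 5        # maximum speed in cells per time step (from the text)
P_DECEL = 0.2    # deceleration probability; illustrative value only

def velocity_update(gap, v):
    """One acceleration/deceleration step for a single car.

    gap -- number of empty cells ahead of the car
    v   -- current velocity in cells per time step
    """
    v = min(v + 1, V_MAX)      # accelerate by at most one unit
    v = min(v, gap)            # a blocked road forces slowing, even to 0
    if v > 0 and random.random() < P_DECEL:
        v -= 1                 # random deceleration
    return v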

CImmSim

Next, we discuss an interaction-based simulation that models certain aspects of the human immune system. Composed of a large number of interacting cells whose motion is constrained by the body's anatomy, the immune system lends itself very well to agent-based simulation. In particular, these models can take into account three-dimensional anatomical variations as well as small-scale variability in cell distributions. For instance, while the number of T cells in the human body is astronomical, the number of antigen-specific T cells, for a specific antigen, can be quite small, thereby creating many spatial inhomogeneities. Also, little is known about the global structure of the system to be modeled.

The first discrete model to incorporate a useful level of complexity was ImmSim (Celada and Seiden 1992a, b), developed by Seiden and Celada as a stochastic cellular automaton. The system includes B cells, T cells, antigen-presenting cells (APCs), antibodies, antigens, and antibody-antigen complexes. Receptors on cells are represented by bit strings, and antibodies use bit strings to represent their epitopes and peptides. Specificity and affinity are defined by using bit string similarity. The bit string approach was initially introduced in Farmer et al. (1986). The model is implemented on a regular two-dimensional grid, which can be thought of as a slice of a lymph node, for instance. It has been used to study various phenomena, including the optimal number of human leukocyte antigens in human beings (Celada and Seiden 1992a), the autoimmunity and T lymphocyte selection in the thymus (Morpurgo et al. 1995), antibody selection and hyper-mutation (Celada and Seiden 1996), and the dependence of the selection and maturation of the immune response on the antigen-to-receptor’s affinity (Bernaschi et al. 2000). The computational limitations of the Seiden-Celada model have been overcome by a modified model, CImmSim (Castiglione et al. 1997), implemented on a parallel architecture. Its complexity is several orders of magnitude larger than that of its predecessor. It has been used to model hypersensitivity to chemotherapy (Castiglione and Agur 2003) and the selection of escape mutants from immune recognition during HIV infection (Bernaschi and Castiglione 2002). In Castiglione et al. (2007), the CImmSim framework was applied to the study of mechanisms that contribute to the persistence of infection with the Epstein-Barr virus.

A Voting Game

The following example describes a hypothetical voting scheme. The voting system is constructed from a collection of voters. For simplicity, it is assumed that only two candidates, represented by 0 and 1, contest in the election. There are $N$ voters represented by the set $\{v_1, v_2, \dots, v_N\}$. Each voter has a candidate preference or a state. We denote the state of voter $v_i$ by $x_i$. Moreover, each voter knows the preferences or states of some of his or her friends (fellow voters). This friendship relation is captured by the dependency graph, which we describe later in section “Definitions, Background, and Examples.” Informally, the dependency graph has as vertices the voters, with an edge between each pair of voters that are friends.

Starting from an initial configuration of preferences, the voters cast their votes in some order. The candidate that receives the most votes is the winner. A number of rules can be formulated to decide how each voter chooses a candidate. We will provide examples of such rules later, and as will be seen, the outcome of the election is governed by the order in which the votes are cast as well as the structure of the dependency graph.

Existing Mathematical Frameworks

The field of agent-based simulation currently places heavy emphasis on implementation and computation rather than on the derivation of formal results. Computation is no doubt a very useful way to discover potentially interesting behavior and phenomena. However, unless the simulation has been set up very carefully, its outcome does not formally validate or guarantee the observed phenomenon. It could simply be caused by an artifact of the system model, an implementation error, or some other uncertainty.

A first step in a theory of agent-based simulation is the introduction of a formal framework that, on the one hand, is precise and computationally powerful and, on the other hand, is natural in the sense that it can be used to effectively describe large classes of both deterministic and stochastic systems. Apart from providing a common basis and a language for describing the model using a sound formalism, such a framework has many advantages. At a first level, it helps to clarify the key structure in a system. Domain-specific knowledge is crucial to deriving good models of complex systems, but domain specificity is often confounded by domain conventions and terminology that eventually obfuscate the real structure.

A formal, context independent framework also makes it easier to take advantage of existing general theory and algorithms. Having a model formulated in such a framework also makes it easier to establish results. Additionally, expressing the model using a general framework is more likely to produce results that are widely applicable. This type of framework also supports implementation and validation. Modeling languages like UML (Booch et al. 2005) serve a similar purpose but tend to focus solely on software implementation and validation issues and very little on mathematical or computational analysis.

Cellular Automata

In this section, we discuss several existing frameworks for describing agent-based simulations. Cellular automata (CA) were introduced by Ulam and von Neumann (von Neumann and Burks 1966) as biologically motivated models of computation. Early research addressed questions about the computational power of these devices. Since then, their properties have been studied in the context of dynamical systems (Hedlund 1969), language theory (Lindgren et al. 1998), and ergodic theory (Lind 1984), to mention just a few areas. Cellular automata were popularized by Conway (Gardner 1970) (Game of Life) and by Wolfram (Martin et al. 1984; Wolfram 1983, 2002). Cellular automata (both deterministic and stochastic) have been used as models for phenomena ranging from lattice gases (Frish et al. 1986) and flows in porous media (Rothman 1988) to traffic analysis (Fukś 2004; Nagel and Schreckenberg 1992; Nagel et al. 1995).

A cellular automaton is typically defined over a regular grid. An example is a two-dimensional grid such as $\mathbb{Z}^2$. Each grid point $(i, j)$ is referred to as a site or node. Each site has a state $x_{i,j}(t)$ which is often taken to be binary. Here, $t$ denotes the time step. Furthermore, there is a notion of a neighborhood for each site. The neighborhood $N$ of a site is the collection of sites that can influence the future state of the given site. Based on its current state $x_{i,j}(t)$ and the current states of the sites in its neighborhood $N$, a function $f_{i,j}$ is used to compute the next state $x_{i,j}(t+1)$ of the site $(i, j)$. Specifically, we have
$$ x_{i,j}(t+1) = f_{i,j}\left(\overline{x}_{i,j}(t)\right), $$
(1)
where $\overline{x}_{i,j}(t)$ denotes the tuple consisting of all the states $x_{i',j'}(t)$ with $(i', j') \in N$. The tuple consisting of the states of all the sites is the CA configuration and is denoted $x(t) = (x_{i,j}(t))_{i,j}$. Equation 1 is used to map the configuration $x(t)$ to $x(t+1)$. The cellular automaton map or dynamical system is the map $\Phi$ that sends $x(t)$ to $x(t+1)$.

A central part of CA research is to understand how configurations evolve under iteration of the map Φ and what types of dynamical behavior can be generated. A general introduction to CA can be found in Ilachinsky (2001).
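To make Eq. 1 concrete, here is a minimal sketch of one synchronous update of a finite two-dimensional CA. The choice of neighborhood, local rule, and boundary treatment (off-grid neighbors are simply ignored) are illustrative assumptions, not part of the general definition.

def ca_step(x, f, neighborhood):
    """One synchronous CA update; x maps sites (i, j) to states."""
    new = {}
    for (i, j), s in x.items():
        nbrs = [x[(i + di, j + dj)]
                for di, dj in neighborhood
                if (i + di, j + dj) in x]   # ignore off-grid sites
        new[(i, j)] = f(s, nbrs)            # Eq. 1: next state of (i, j)
    return new

# Example rule: majority vote over the von Neumann neighborhood.
von_neumann = [(-1, 0), (1, 0), (0, -1), (0, 1)]
majority = lambda s, nbrs: int(sum(nbrs) + s > (len(nbrs) + 1) / 2)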

Hopfield Networks

Hopfield networks were proposed as a simple model of associative memories (Hopfield 1982). A discrete Hopfield neural network consists of an undirected graph $Y(V, E)$. At any time $t$, each node $v_i \in V$ has a state $x_i(t) \in \{+1, -1\}$. Further, each node $v_i \in V$ has an associated threshold $\tau_i \in \mathbb{R}$. Each edge $\{v_i, v_j\} \in E$ has an associated weight $w_{i,j} \in \mathbb{R}$. For each node $v_i$, the neighborhood $N_i$ of $v_i$ includes $v_i$ and the set of nodes that are adjacent to $v_i$ in $Y$. Formally,
$$ N_i = \{v_i\} \cup \left\{v_j \in V : \{v_i, v_j\} \in E\right\}. $$
States of nodes are updated as follows. At time $t$, node $v_i$ computes the function $f_i$ defined by
$$ f_i(t) = \mathrm{sgn}\left(-\tau_i + \sum_{v_j \in N_i} w_{i,j}\, x_j(t)\right), $$
where sgn is the map from $\mathbb{R}$ to $\{+1, -1\}$ defined by
$$ \mathrm{sgn}(x) = \begin{cases} 1, & \text{if } x \ge 0, \\ -1, & \text{otherwise}. \end{cases} $$
Now, the state of $v_i$ at time $t + 1$ is
$$ x_i(t+1) = f_i(t). $$

Many references on Hopfield networks (see, e.g., Hopfield 1982; Russell and Norvig 2003) assume that the underlying undirected graph is complete; that is, there is an edge between every pair of nodes. In the definition presented above, the graph need not be complete. However, this does not cause any difficulties, since each missing edge can be assigned weight 0; as a consequence, such edges play no role in determining the dynamics of the system. Both synchronous and asynchronous update models of Hopfield neural networks have been considered in the literature. For theoretical results concerning Hopfield networks, see Orponen (1994, 1996) and the references cited therein. Russell and Norvig (2003) present a number of applications of neural networks. In Macy et al. (2003), a Hopfield model is used to study polarization in dynamic networks.
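A synchronous sweep of the update rule defined above can be sketched directly; missing edges (including absent self-loops) are treated as weight 0, matching the preceding remark.

def hopfield_step(x, w, tau):
    """One synchronous update of a discrete Hopfield network.

    x   -- dict: node -> state in {+1, -1}
    w   -- dict: (i, j) -> weight, symmetric; absent pairs count as 0
    tau -- dict: node -> threshold
    """
    def sgn(z):
        return 1 if z >= 0 else -1     # sgn as defined above: sgn(0) = +1
    return {i: sgn(-tau[i] + sum(w.get((i, j), 0.0) * x[j] for j in x))
            for i in x}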

Communicating Finite-State Machines

The model of communicating finite-state machines (CFSM) was proposed to analyze protocols used in computer networks. In some of the literature, this model is also referred to as “concurrent transition systems” (Gouda and Chang 1986).

In the CFSM model, each agent is a process executing on some node of a distributed computing system. Although there are minor differences among the various CFSM models proposed in the literature (Brand and Zafiropulo 1983; Gouda and Chang 1986), the basic setup models each process as a finite-state machine (FSM). Thus, each agent is in a certain state at any time instant t. For each pair of agents, there is a bidirectional channel through which they can communicate. The state of an agent at time t + 1 is a function of the current state and the input (if any) on one or more of the channels. When an agent (FSM) undergoes a transition from one state to another, it may also choose to send a message to another agent or receive a message from an agent. In general, such systems can be synchronous or asynchronous. As can be seen, CFSMs are a natural formalism for studying protocols used in computer networks. The CFSM model has been used extensively to prove properties (e.g., deadlock freedom, fairness) of a number of protocols used in practice (see Brand and Zafiropulo 1983; Gouda and Chang 1986 and the references cited therein).

Other frameworks include interacting particle systems (Liggett 2005) and Petri nets (Moncion et al. 2006). There is a vast literature on both, but space limitations prevent a discussion here.

Finite Dynamical Systems

Another, quite general, modeling framework that has been proposed is that of finite dynamical systems, both synchronous and asynchronous. Here, the proposed mathematical object representing an agent-based simulation is a time-discrete dynamical system on a finite state set. The description of the systems is modeled after the key components of an agent-based simulation, namely, agents, the dependency graph, local update functions, and an update order. This makes a mapping to agent-based simulations natural. In the remainder of this entry, we will show that finite dynamical systems satisfy our criteria for a good mathematical framework in that they are general enough to serve as a broad computing tool and mathematically rich enough to allow the derivation of formal results.

Definitions, Background, and Examples

Let $x_1, \dots, x_n$ be a collection of variables, which take values in a finite set $X$. (As will be seen, the variables represent the entities in the system being modeled and the elements of $X$ represent their states.) Each variable $x_i$ has associated to it a “local update function” $f_i : X^n \to X$, where “local” refers to the fact that $f_i$ takes inputs from the variables in the “neighborhood” of $x_i$, in a sense to be made precise below. By abuse of notation, we also let $f_i$ denote the function $X^n \to X^n$ which changes the $i$th coordinate and leaves the other coordinates unchanged. This allows for the sequential composition of the local update functions. These functions assemble to a dynamical system
$$ \Phi = (f_1, \dots, f_n) : X^n \to X^n, $$
with the dynamics generated by iteration of $\Phi$. As an example, if $X = \{0, 1\}$ with the standard Boolean operators AND and OR, then $\Phi$ is a Boolean network.
The assembly of $\Phi$ from the local functions $f_i$ can be done in one of several ways. One can update each of the variables simultaneously, that is,
$$ \Phi(x_1, \dots, x_n) = \left(f_1(x_1, \dots, x_n), \dots, f_n(x_1, \dots, x_n)\right). $$

In this case, one obtains a parallel dynamical system.

Alternatively, one can choose to update the states of the variables according to some fixed update order, for example, a permutation $(\pi_1, \pi_2, \dots, \pi_n)$ of the set $\{1, \dots, n\}$. More generally, one could use a word on the set $\{1, \dots, n\}$, that is, $\pi = (\pi_1, \dots, \pi_t)$ where $t$ is the length of the word. The function composition
$$ \Phi_\pi = f_{\pi_t} \circ f_{\pi_{t-1}} \circ \cdots \circ f_{\pi_1}, $$
(2)
is called a sequential dynamical system (SDS), and as before, the dynamics of $\Phi_\pi$ is generated by iteration. The case when $\pi$ is a permutation on $\{1, \dots, n\}$ has been studied extensively (Barrett and Reidys 1999; Barrett et al. 2000, 2001b, 2003c). It is clear that using a different permutation or word $\sigma$ may result in a different dynamical system $\Phi_\sigma$. Using a word rather than a permutation allows one to capture the case where some vertices have states that are updated more frequently than others.
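The difference between the parallel composition and the sequential composition of Eq. 2 is easily expressed in code. In this sketch (0-based indices; each local function takes the full state tuple and returns the new value of its own coordinate):

def parallel_step(fs, x):
    """Synchronous update: every f_i reads the same old state x."""
    return tuple(f(x) for f in fs)

def sequential_step(fs, x, pi):
    """Sequential update along the word pi (Eq. 2): each local function
    is applied to the state left behind by the previous one."""
    x = list(x)
    for i in pi:                  # pi is a word on {0, ..., n-1}
        x[i] = fs[i](tuple(x))    # f_i changes only coordinate i
    return tuple(x)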

Remark 1

Notice that for a fixed $\pi$, the function $\Phi_\pi$ is a parallel dynamical system: once the update order $\pi$ is chosen and the local update functions are composed according to $\pi$, that is, the function $\Phi_\pi$ has been computed, then $\Phi_\pi(x_1, \dots, x_n) = g(x_1, \dots, x_n)$, where $g = (g_1, \dots, g_n)$ is a parallel update dynamical system. However, the coordinate functions $g_i$ are in general not local functions.

The dynamics of $\Phi$ is usually represented as a directed graph on the vertex set $X^n$, called the phase space of $\Phi$. There is a directed edge from $v \in X^n$ to $w \in X^n$ if and only if $\Phi(v) = w$. A second graph that is usually associated with a finite dynamical system is its dependency graph $Y(V, E)$. In general, this is a directed graph, and its vertex set is $V = \{1, \dots, n\}$. There is a directed edge from $i$ to $j$ if and only if $x_i$ appears in the function $f_j$. In many situations, the interaction relationship between pairs of variables is symmetric; that is, variable $x_i$ appears in $f_j$ if and only if $x_j$ appears in $f_i$. In such cases, the dependency graph can be thought of as an undirected graph. We recall that the dependency graphs mentioned in the context of the voting game (see section “A Voting Game”) and Hopfield networks (see section “Hopfield Networks”) are undirected graphs. The dependency graph plays an important role in the study of finite dynamical systems and is sometimes listed explicitly as part of the specification of $\Phi$.

Example 2

Let $X = \{0, 1\}$ (the Boolean case). Suppose we have four variables and the local Boolean update functions are
$$ \begin{array}{l} f_1 = x_1 + x_2 + x_3 + x_4, \\ f_2 = x_1 + x_2, \\ f_3 = x_1 + x_3, \\ f_4 = x_1 + x_4, \end{array} $$
where “+” represents sum modulo 2. The dynamics of the function $\Phi = (f_1, \dots, f_4) : X^4 \to X^4$ is the directed graph in Fig. 1a, while the dependency graph is in Fig. 1b.
Fig. 1

The phase space of the parallel system Φ (a) and dependency graph of the Boolean functions from Example 2 (b)

Example 3

Consider the local functions in Example 2 and let $\pi = (2, 1, 3, 4)$. Then
$$ \Phi_\pi = f_4 \circ f_3 \circ f_1 \circ f_2 : X^4 \to X^4. $$
The phase space of $\Phi_\pi$ is the directed graph in Fig. 2a, while the phase space of $\Phi_\gamma$, where $\gamma = \mathrm{id}$, is in Fig. 2b.
Fig. 2

The phase spaces from Example 3: Φ π (a) and Φ id (b)

Notice that the phase space of any function is a directed graph in which every vertex has out-degree one; this is a characteristic property of deterministic functions.
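The phase spaces shown in Figs. 1 and 2 can be reproduced by brute-force enumeration of all $2^4 = 16$ states. In the sketch below, indices are 0-based, so the update order $\pi = (2, 1, 3, 4)$ of Example 3 becomes (1, 0, 2, 3):

from itertools import product

# Local functions of Example 2 (sum modulo 2), with 0-based indices.
fs = [
    lambda x: (x[0] + x[1] + x[2] + x[3]) % 2,
    lambda x: (x[0] + x[1]) % 2,
    lambda x: (x[0] + x[2]) % 2,
    lambda x: (x[0] + x[3]) % 2,
]

def parallel(x):
    return tuple(f(x) for f in fs)

def sequential(x, pi):
    x = list(x)
    for i in pi:
        x[i] = fs[i](tuple(x))
    return tuple(x)

# Out-edges of the two phase spaces; each state has exactly one successor.
phase_parallel = {x: parallel(x) for x in product((0, 1), repeat=4)}
phase_pi = {x: sequential(x, (1, 0, 2, 3))   # pi = (2, 1, 3, 4), 0-indexed
            for x in product((0, 1), repeat=4)}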

Boolean arithmetic is a powerful tool in the study of Boolean networks, but it is not available for general state sets $X$. In order to have an enhanced set of tools available, it is often natural to make an additional assumption regarding $X$, namely, that it is a finite number system, a finite field (Lidl and Niederreiter 1997). This amounts to the assumption that there are “addition” and “multiplication” operations defined on $X$ that satisfy the same rules as ordinary addition and multiplication of numbers. Examples include $\mathbb{Z}_p$, the integers modulo a prime $p$. This assumption can be thought of as the discrete analog of imposing a coordinate system on an affine space.

When $X$ is a finite field, it is easy to show that for any local function $g$, there exists a polynomial $h$ such that $g(x_1, \dots, x_n) = h(x_1, \dots, x_n)$ for all $(x_1, \dots, x_n) \in X^n$. To be precise, suppose $X$ is a finite field with $q$ elements. Then
$$ g(x_1, \dots, x_n) = \sum_{(c_1, \dots, c_n) \in X^n} g(c_1, \dots, c_n) \prod_{i=1}^n \left(1 - (x_i - c_i)^{q-1}\right). $$
(3)

This observation has many useful consequences, since polynomial functions have been studied extensively and many analytical tools are available.
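For example, take $q = 2$ and let $g$ be the Boolean AND function of two variables. The only nonzero summand in Eq. 3 is the one indexed by $(c_1, c_2) = (1, 1)$, and since $1 - (x_i - 1)^{q-1} = 2 - x_i = x_i$ over $\mathbb{Z}_2$, the formula yields
$$ g(x_1, x_2) = x_1 x_2, $$
in agreement with the first identity in Eq. 4 below.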

Notice that cellular automata and Boolean networks, both parallel and sequential, are special classes of polynomial dynamical systems. In fact, it is straightforward to see that
$$ x \wedge y = x \cdot y, \qquad x \vee y = x + y + xy, \qquad \neg x = x + 1. $$
(4)

Therefore, the modeling framework of finite dynamical systems includes that of cellular automata discussed earlier. Also, since a Hopfield network is a function $X^n \to X^n$, which can be represented through its local constituent functions, it follows that Hopfield networks also are special cases of finite dynamical systems.

Stochastic Finite Dynamical Systems

The deterministic framework of finite dynamical systems can be made stochastic in several different ways, making one or more of the system’s defining data stochastic. For example, one could use one or both of the following criteria.
  • Assume that each variable has a nonempty set of local functions assigned to it, together with a probability distribution on this set, and each time a variable is updated, one of these local functions is chosen at random to update its state. We call such systems probabilistic finite dynamical systems (PFDS), a generalization of probabilistic Boolean networks (Shmulevich et al. 2002b).

  • Fix a subset of permutations $T \subseteq S_n$ together with a probability distribution. When it is time for the system to update its state, a permutation $\pi \in T$ is chosen at random, and the agents are updated sequentially using $\pi$. We call such systems stochastic finite dynamical systems (SFDS).

Remark 4

By Remark 1, each system $\Phi_\pi$ is a parallel system. Hence an SFDS is nothing but a set of parallel dynamical systems $\{\Phi_\pi : \pi \in T\}$, together with a probability distribution. When it is time for the system to update its state, a system $\Phi_\pi$ is chosen at random and used for the next iteration.

To describe the phase space of a stochastic finite dynamical system, a general method is as follows. Let $\Omega$ be a finite collection of systems $\Phi_1, \dots, \Phi_t$, where $\Phi_i : X^n \to X^n$ for all $i$, and consider probabilities $p_1, \dots, p_t$ which sum to 1. We obtain the stochastic phase space
$$ \Gamma_\Omega = p_1 \Gamma_1 + p_2 \Gamma_2 + \cdots + p_t \Gamma_t, $$
(5)
where $\Gamma_i$ is the phase space of $\Phi_i$. The associated probability space is $\mathcal{F} = (\Omega, 2^\Omega, \mu)$, where the probability measure $\mu$ is induced by the probabilities $p_i$. It is clear that the stochastic phase space can be viewed as a Markov chain over the state space $X^n$: the adjacency matrix of $\Gamma_\Omega$ directly encodes the Markov transition matrix. This is of course not new and has been done in, e.g., Dawson (1974), Shmulevich et al. (2002b), and Vasershtein (1969). But we emphasize the point that even though SFDS give rise to Markov chains, our study of SFDS is greatly facilitated by the rich additional structure available in these systems. To understand the effect of structural components such as the topology of the dependency graph or the stochastic nature of the update, it is important to study them not as Markov chains but as SFDS.
Example 5

Consider $\Phi_\pi$ and $\Phi_\gamma$ from Example 3 and let $\Gamma_\pi$ and $\Gamma_\gamma$ be their phase spaces as shown in Fig. 2. Let $p_1 = p_2 = 1/2$. The phase space $(1/2)\Gamma_\pi + (1/2)\Gamma_\gamma$ of the stochastic sequential dynamical system obtained from $\Phi_\pi$ and $\Phi_\gamma$ (with equal probabilities) is presented in Fig. 3.
Fig. 3

The stochastic phase space for Example 5 induced by the two deterministic phase spaces of $\Phi_\pi$ and $\Phi_\gamma$ from Fig. 2. For simplicity, the weights of the edges have been omitted
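Since the stochastic phase space of Eq. 5 is an edge-weighted superposition of deterministic phase spaces, the corresponding Markov transition matrix can be assembled directly. A small sketch (function and variable names are ours):

def markov_matrix(maps, probs, states):
    """Transition matrix of the Markov chain on X^n obtained by applying
    the deterministic system maps[i] with probability probs[i] (Eq. 5)."""
    index = {s: k for k, s in enumerate(states)}
    P = [[0.0] * len(states) for _ in states]
    for phi, p in zip(maps, probs):
        for s in states:
            P[index[s]][index[phi(s)]] += p   # edge s -> phi(s), weight p
    return P

Applied to $\Phi_\pi$ and $\Phi_\gamma$ of Example 5 with $p_1 = p_2 = 1/2$, every row of the resulting matrix contains either two entries of weight 1/2 or, where the two systems agree, a single entry of weight 1; these are exactly the edge weights omitted from Fig. 3.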

Agent-Based Simulations as Finite Dynamical Systems

In the following, we describe the generic structure of the systems typically modeled and studied through agent-based simulations. The central notion is naturally that of an agent.

Each agent carries a state that may encode its preferences, internal configuration, perception of its environment, and so on. In the case of TRANSIMS, for instance, the agents are the cells making up the road network. The cell state contains information about whether or not the cell is occupied by a vehicle as well as the velocity of the vehicle. One may assume that each cell takes on states from the same set of possible states, which may be chosen to support the structure of a finite field.

The agents interact with each other, but typically an agent only interacts with a small subset of agents, its neighbors. Through such an interaction, an agent may decide to change its state based on the states (or other aspects) of the agents with which it interacts. We will refer to the process where an agent modifies its state through interaction as an agent update. The precise way in which an agent modifies its state is governed by the nature of the particular agent. In TRANSIMS, the neighbors of a cell are the adjacent road network cells. From this adjacency relation, one obtains a dependency graph of the agents. The local update function for a given agent can be obtained from the rules governing traffic flow between cells.

The updates of all the agents may be scheduled in different ways. Classical approaches include synchronous, asynchronous, and event-driven schemes. The choice will depend on system properties or particular considerations about the simulation implementation.

In the case of CImmSim, the situation is somewhat more complicated. Here, the agents are also the spatial units of the system, each representing a small volume of lymph tissue. The total volume is represented as a two-dimensional CA, in which every agent has four neighbors, so that the dependency graph is a regular two-dimensional grid. The state of each agent is a collection of counts for the various immune cells and pathogens that are present in this particular agent (volume). Movement between cells is implemented as diffusion. Immune cells can interact with each other and with pathogens while they reside in the same volume. Thus, the local update function for a given cell of the simulation is made up of the two components of movement between cells and interactions within a cell. For instance, a B cell could interact with the Epstein-Barr virus in a given volume and perform a transition from uninfected to infected by the next time step. Interactions as well as movement are stochastic, resulting in a stochastic finite dynamical system. The update order is parallel.

Example 6 The Voting Game (see section “A Voting Game”)

The following scenario is constructed to illustrate how implementation choices for the system components have a clear and direct bearing on the dynamics and simulation outcomes.

Let the voter dependency graph be the star graph on 5 vertices with center vertex a and surrounding vertices b, c, d, and e. Furthermore, assume that everybody votes opportunistically using the majority rule: the vote cast by an individual is equal to the preference of the majority of his/her friends with the person’s own preference included. For simplicity, assume candidate 1 is preferred in the case of a tie.

If the initial preference is $x_a = 1$ and $x_b = x_c = x_d = x_e = 0$, then if voter a goes first, he/she will vote for candidate 0, since that is the choice of the majority of the neighbor voters. However, if b and c go first, they only know a's preference. Voter b faces a tie, with one preference for candidate 1 (a's) and one for candidate 0 (his/her own); by the tie-breaking rule, voter b casts his/her vote for candidate 1, as does c. If voter a goes next, then the situation has changed: the preferences of b and c have already changed to 1. Consequently, voter a picks candidate 1. At the end of the day, candidate 1 is the election winner, and the choice of update order has tipped the election!
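Both scenarios can be checked mechanically. A sketch of the election with the majority rule and tie-breaking convention as stated above (the helper names are ours):

def majority_vote(states, friends, v):
    """Majority preference among v's friends and v itself; ties go to 1."""
    votes = [states[u] for u in friends[v]] + [states[v]]
    return 1 if 2 * sum(votes) >= len(votes) else 0

def run_election(states, friends, order):
    states = dict(states)
    for v in order:                    # sequential update along `order`
        states[v] = majority_vote(states, friends, v)
    return states

# Star graph with center a; initial preferences as in the example.
friends = {'a': ['b', 'c', 'd', 'e'], 'b': ['a'], 'c': ['a'],
           'd': ['a'], 'e': ['a']}
init = {'a': 1, 'b': 0, 'c': 0, 'd': 0, 'e': 0}

print(run_election(init, friends, ['a', 'b', 'c', 'd', 'e']))  # all prefer 0
print(run_election(init, friends, ['b', 'c', 'a', 'd', 'e']))  # all prefer 1

The first run ends with every voter preferring candidate 0, the second with every voter preferring candidate 1: the update order alone decides the outcome.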

This example is of course constructed to illustrate our point. However, in real systems, it can be much more difficult to detect the presence of such sensitivities and their implications. A solid mathematical framework can be very helpful in detecting such effects.

Finite Dynamical Systems as Theoretical and Computational Tools

If finite dynamical systems are to be useful as a modeling paradigm for agent-based simulations, it is necessary that they can serve as a fairly universal model of computation. We discuss here how such dynamical systems can mimic Turing machines (TMs), a standard universal model for computation. For a more thorough exposition, we refer the reader to the series of papers by Barrett et al. (2003a, b, 2006, 2007a, b). To make the discussion reasonably self-contained, we provide a brief and informal discussion of the TM model. Additional information on TMs can be found in any standard text on the theory of computation (e.g., Sipser 1997).

A Computational View of Finite Dynamical Systems: Definitions

In order to understand the relationship of finite dynamical systems to TMs, it is important to view such systems from a computational standpoint. Recall that a finite dynamical system $\Phi : X^n \to X^n$, where $X$ is a finite set, has an underlying dependency graph $Y(V, E)$. From a computational point of view, the nodes of the dependency graph (the agents in the simulation) are thought of as devices that compute appropriate functions. For simplicity, we will assume in this section that the dependency graph is undirected, that is, all dependency relations are symmetric. At any time, the state value of each node $v_i \in V$ is from the specified domain $X$. The inputs to $f_i$ are the current state of $v_i$ and the states of the neighbors of $v_i$ as specified by $Y$. The output of $f_i$, which is also a member of $X$, becomes the state of $v_i$ at the next time instant. The discussion in this section will focus on sequentially updated systems (SDS), but all results discussed apply to parallel systems as well.

Each step of the computation carried out by an SDS can be thought of as consisting of $n$ “mini steps”; in each mini step, the value of the local transition function at a node is computed and the state of the node is changed to the computed value. Given an SDS $\Phi$, a configuration $C$ of $\Phi$ is a vector $(c_1, c_2, \dots, c_n) \in X^n$. It can be seen that each computational step of an SDS causes a transition from one configuration to another.

Configuration Reachability Problem for SDSs

Based on the computational view, a number of different problems can be defined for SDSs (see, e.g., Barrett et al. 2001a, 2006, 2007b). To illustrate how SDSs can model TM computations, we will focus on one such problem, namely, the configuration reachability (CR) problem: given an SDS $\Phi$, an initial configuration $C$, and another configuration $C'$, will $\Phi$, starting from $C$, ever reach configuration $C'$? The problem can also be expressed in terms of the phase space of $\Phi$. Since configurations such as $C$ and $C'$ are represented by nodes in the phase space, the CR problem boils down to the question of whether there is a directed path in the phase space from $C$ to $C'$. This abstract problem can be mapped to several problems in the simulation of multiagent systems. Consider, for example, the TRANSIMS context. Here, the initial configuration $C$ may represent the state of the system early in the day (when the traffic is very light) and $C'$ may represent an “undesirable” state of the system (such as heavy traffic congestion). Similarly, in the context of modeling an infectious disease, $C$ may represent the initial onset of the disease (when only a small number of people are infected), and $C'$ may represent a situation where a large percentage of the population is infected. The purpose of studying computational problems such as CR is to determine whether one can efficiently predict the occurrence of certain events in the system from a description of the system. If computational analysis shows that the system can indeed reach undesirable configurations as it evolves, then one can try to identify steps needed to deal with such situations.
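For a deterministic system, every node of the phase space has out-degree one, so CR can in principle be decided by following the unique trajectory from C until a state repeats. The sketch below does exactly that; since the trajectory may pass through a number of configurations exponential in n, this brute-force method is only feasible for very small systems.

def reaches(phi, c, c_prime):
    """Decide the CR problem for a deterministic system phi by iterating:
    c' is reachable from c iff it occurs before the trajectory repeats."""
    seen = set()
    while c not in seen:
        if c == c_prime:
            return True
        seen.add(c)
        c = phi(c)        # unique successor in the phase space
    return False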

Turing Machines: A Brief Overview

A Turing machine (TM) is a simple and commonly used model for general purpose computational devices. Since our goal is to point out how SDSs can also serve as computational devices, we will present an informal overview of the TM model. Readers interested in a more formal description may consult (Sipser 1997).

A TM consists of a set $Q$ of states, a one-way infinite input tape, and a read/write head that can read and modify symbols on the input tape. The input tape is divided into cells, and each cell contains a symbol from a special finite alphabet. An input consisting of $n$ symbols is written on the leftmost $n$ cells of the input tape. (The other cells are assumed to contain a special symbol called blank.) One of the states in $Q$, denoted by $q_s$, is the designated start state. $Q$ also includes two other special states, denoted by $q_a$ (the accepting state) and $q_r$ (the rejecting state). At any time, the machine is in one of the states in $Q$. The transition function for the TM specifies, for each combination of the current state and the current symbol under the head, a new state, a new symbol for the current cell (which is under the head), and a movement (i.e., left or right by one cell) for the head. The machine starts in state $q_s$ with the head on the first cell of the input tape. Each step of the machine is carried out in accordance with the transition function. If the machine ever reaches either the accepting or the rejecting state, it halts with the corresponding decision; otherwise, the machine runs forever.

A configuration of a TM consists of its current state, the current tape contents, and the position of the head. Note that the transition function of a TM specifies how a new configuration is obtained from the current configuration.

The above description is for the basic TM model (also called single-tape TM model). For convenience in describing some computations, several variants of the above basic model have been proposed. For example, in a multi-tape TM, there are one or more work tapes in addition to the input tape. The work tapes can be used to store intermediate results. Each work tape has its own read/write head, and the definitions of configuration and transition function can be suitably modified to accommodate work tapes. While such an enhancement to the basic TM model makes it easier to carry out certain computations, it does not add to the machine’s computational power. In other words, any computation that can be carried out using the enhanced model can also be carried out using the basic model.

As in the case of dynamical systems, one can define a configuration reachability (CR) problem for TMs: given a TM $M$, an initial configuration $\mathcal{I}_M$ and another configuration $C_M$, will the TM starting from $\mathcal{I}_M$ ever reach $C_M$? We refer to the CR problem in the context of TMs as CR-TM. In fact, it is this problem for TMs that captures the essence of what can be effectively computed. In particular, by choosing the state component of $C_M$ to be one of the halting states ($q_a$ or $q_r$), the problem of determining whether a function is computable is transformed into an appropriate CR-TM problem. By imposing appropriate restrictions on the resources used by a TM (e.g., the number of steps, the number of cells on the work tapes), one obtains different versions of the CR-TM problem which characterize different computational complexity classes (Sipser 1997).

How SDSs Can Mimic Turing Machines

The above discussion points out an important similarity between SDSs and TMs. Under both of these models, each computational step causes a transition from one configuration to another. It is this similarity that allows one to construct a discrete dynamical system $\Phi$ that can simulate a TM. Typically, each step of a TM is simulated by a short sequence of successive iterations of $\Phi$. As part of the construction, one also identifies a suitable mapping between the configurations of the TM being simulated and those of the dynamical system. This is done in such a way that the answer to the CR-TM problem is “yes” if and only if the answer to the CR problem for the dynamical system is also “yes.”

To illustrate the basic ideas, we will informally sketch a construction from (Barrett et al. 2006). For simplicity, this construction produces an SDS Φ that simulates a restricted version of TMs; the restriction being that for any input containing n symbols, the number of work tape cells that the machine may use is bounded by a linear function of n. Such a TM is called a linear bounded automaton (LBA) (Sipser 1997). Let M denote the given LBA and let n denote the length of the input to M. The domain X for the SDS Φ is chosen to be a finite set based on the allowed symbols in the input to the TM. The dependency graph is chosen to be a simple path on n nodes, where each node serves as a representative for a cell on the input tape. The initial and final configurations C and C′ for Φ are constructed from the corresponding configurations of M. The local transition function for each node of the SDS is constructed from the given transition function for M in such a way that each step of M corresponds to exactly one step of Φ. Thus, there is a simple bijection between the sequence of configurations that M goes through during its computation and the sequence of states that Φ goes through as it evolves. Using this bijection, it is shown in Barrett et al. (2006) that the answer to the CR-TM problem is “yes” if and only if Φ reaches C′ starting from C. Reference (Barrett et al. 2006) also presents a number of sophisticated constructions where the resulting dynamical system is very simple; for example, it is shown that an LBA can be simulated by an SDS in which X is the Boolean field, the dependency graph is d-regular for some constant d, and the local transition functions at all the nodes are identical. Such results point out that one does not need complicated dynamical systems to model TM computations.

References (Barrett et al. 2003a, b, 2006) present constructions that show how more general models of TMs can also be simulated by appropriate SDSs. As one would expect, these constructions lead to SDSs with more complex local transition functions.

Barrett et al. (2007a) have also considered stochastic SDS (SSDS), where the local transition function at each node is stochastic. For each combination of inputs, a stochastic local transition function specifies a probability distribution over the domain of state values. It is shown in Barrett et al. (2007a) that SSDSs can effectively simulate computations carried out by probabilistic TMs (i.e., TMs whose transition functions are stochastic).

TRANSIMS-Related Questions

Section “TRANSIMS (Transportation Analysis and Simulation System)” gave an overview of some aspects of the TRANSIMS model. The micro-simulator module is specified as a functional composition of four cellular automata of the form $\Delta_4 \circ \Delta_3 \circ \Delta_2 \circ \Delta_1$. (We only described $\Delta_3$, which corresponds to velocity updates.) Such a formulation has several advantages. First, it creates an abstraction of the essence of the system in a precise mathematical way without burying it in contextual domain details. An immediate advantage of this specification is that it makes the whole implementation process more straightforward and transparent. While the local update functions for the cells are typically quite simple, for any realistic study of an urban area, the problem size would typically require a sequential implementation, raising a number of issues that are best addressed within a mathematical framework like the one considered here.

Mathematical Results on Finite Dynamical Systems

In this section, we outline a collection of mathematical results about finite dynamical systems that is representative of the available knowledge. The majority of these results are about deterministic systems, as the theory of stochastic systems of this type is still in its infancy. We will first consider synchronously updated systems.

Throughout this section, we make the assumption that the state set $X$ carries the algebraic structure of a finite field. Accordingly, we use the notation $k$ instead of $X$. It is a standard result that in this case the number $q$ of elements in $k$ has the form $q = p^t$ for some prime $p$ and $t \ge 1$. The reader may keep the simplest case $k = \{0, 1\}$ in mind, in which case we are effectively considering Boolean networks.

Recall Eq. 3. That is, any function $g : k^n \to k$ can be represented by a multivariate polynomial with coefficients in $k$. If we require that the exponent of each variable be less than $q$, then this representation is unique. In particular, every Boolean function can be represented uniquely as a polynomial function, with the Boolean operators expressed as in Eq. 4.

Parallel Update Systems

Certain classes of finite dynamical systems have been studied extensively, in particular cellular automata and Boolean networks where the state set is {0,1}. Many of these studies have been experimental in nature, however. Some more general mathematical results about cellular automata can be found in the papers of Wolfram and collaborators (Wolfram 1986). The results there focus primarily on one-dimensional Boolean cellular automata with a particular fixed initial state. Here, we collect a sampling of more general results.

We first consider linear and affine systems
$$ \Phi = (f_1, \dots, f_n) : k^n \to k^n. $$
That is, we consider systems for which the coordinate functions $f_i$ are linear, resp. affine, polynomials. (In the Boolean case, this includes functions constructed from XOR (sum modulo 2) and negation.) When each $f_i$ is a linear polynomial of the form $f_i(x_1, \dots, x_n) = a_{i1} x_1 + \cdots + a_{in} x_n$, the map $\Phi$ is nothing but a linear transformation on $k^n$ over $k$, and, by using the standard basis, $\Phi$ has the matrix representation
$$ \Phi\left(\left[\begin{array}{c} x_1 \\ \vdots \\ x_n \end{array}\right]\right) = \left[\begin{array}{ccc} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{array}\right] \left[\begin{array}{c} x_1 \\ \vdots \\ x_n \end{array}\right], $$
where $a_{ij} \in k$ for all $i, j$.

Linear finite dynamical systems were first studied by Elspas (1959). His motivation came from studying feedback shift register networks and their applications to radar and communication systems and automatic error correction circuits. For linear systems over finite fields of prime cardinality, that is, when $q$ is a prime number, Elspas showed that the exact number and length of each limit cycle can be determined from the elementary divisors of the matrix $A$. Recently, Hernández-Toledo (2005) rediscovered Elspas' results and generalized them to arbitrary finite fields. Furthermore, he showed that the structure of the tree of transients at each node of each limit cycle is the same and can be completely determined from the nilpotent elementary divisors of the form $x^a$. For affine Boolean networks (i.e., finite dynamical systems over the Boolean field with two elements, whose local functions are linear polynomials which might have constant terms), a method to analyze their cycle length has been developed in Milligan and Wilson (1993). After embedding the matrix of the transition function, which is of dimension $n \times (n+1)$, into a square matrix of dimension $n + 1$, the problem is then reduced to the linear case. A fast algorithm based on Hernández-Toledo (2005) has been implemented in Jarrah et al. (2007), using the symbolic computation package Macaulay2.

It is not surprising that the phase space structure of Φ should depend on invariants of the matrix A = (a_{ij}). The rational canonical form of A is a block-diagonal matrix, and one can recover the structure of the phase space of A from that of the blocks in the rational form of A. Each block represents either an invertible or a nilpotent linear transformation. Consider an invertible block B. If μ(x) is the minimal polynomial of B, then there exists s such that μ(x) divides x^s − 1. Hence B^s − I = 0, which implies that B^s v = v. That is, every state vector v in the phase space of B is in a cycle whose length is a divisor of s.

Definition 7

For any polynomial λ(x) in k[x] with λ(0) ≠ 0, the order of λ(x) is the least integer s such that λ(x) divides x^s − 1.

The cycle structure of the phase space of Φ can be completely determined from the orders of the irreducible factors of the minimal polynomial of Φ. The computation of these orders involves, in particular, the factorization of numbers of the form q^r − 1, which makes the computation of the order of a polynomial potentially quite costly. The nilpotent blocks in the decomposition of A determine the tree structure at the nodes of the limit cycles. It turns out that all trees at all periodic nodes are identical. This generalizes a result in Martin et al. (1984) for additive cellular automata over the field with two elements.
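
As an illustration of Definition 7, the following small sketch (the bitmask encoding is our own convention; it searches exhaustively rather than factoring q^r − 1, so it is practical only for small degrees) computes the order of a polynomial over GF(2).

```python
def poly_mod(a, m):
    """Remainder of a modulo m in GF(2)[x]; polynomials are bitmasks
    with bit i holding the coefficient of x^i."""
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def order(lam):
    """Least s >= 1 with lam(x) dividing x^s - 1 (assumes lam(0) != 0)."""
    s, r = 1, poly_mod(0b10, lam)   # r = x mod lam
    while r != 1:
        r = poly_mod(r << 1, lam)   # multiply by x and reduce mod lam
        s += 1
    return s

print(order(0b1011))   # x^3 + x + 1 is primitive, so its order is 2^3 - 1 = 7
```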

While the fact that the structure of the phase space of a linear system can be determined from the invariants associated with its matrix may not be unexpected, it is a beautiful example of how the right mathematical viewpoint provides powerful tools to completely solve the problem of relating the structure of the local functions to the resulting (or emerging) dynamics. Linear and affine systems have been studied extensively in several different contexts and from several different points of view, in particular in the case of cellular automata. For instance, additive cellular automata over more general rings as state sets are studied in Chaudhuri (1997), where further results on additive CAs are also collected. One important focus of Chaudhuri (1997) is the problem of finding CAs with limit cycles of maximal length, for the purpose of constructing pseudorandom number generators.

Unfortunately, the situation is more complicated for nonlinear systems. For the special class of Boolean synchronous systems whose local update functions are monomials, there is a polynomial-time algorithm that determines whether a given monomial system has only fixed points as periodic points (Colón-Reyes et al. 2004). This question was motivated by applications to the modeling of biochemical networks. The criterion is given in terms of invariants of the dependency graph Y. For a strongly connected directed graph Y (i.e., one in which there is a directed path between any pair of vertices), its loop number is the greatest common divisor of the lengths of all directed loops through a particular vertex. (This number is independent of the vertex chosen.)

Theorem 8 (Colón-Reyes et al. 2004)

A Boolean monomial system has only fixed points as periodic points if and only if the loop number of every strongly connected component of its dependency graph is equal to 1.

In Colón-Reyes et al. (2006), it is shown that the problem for general finite fields can be reduced to that of a Boolean system and a linear system over rings of the form Z/p^r Z, p prime. Boolean monomial systems have been studied before in the cellular automaton context (Bartlett and Garzon 1993).
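
The criterion of Theorem 8 is easy to check computationally. The sketch below (a standard period computation, not the polynomial-time algorithm of Colón-Reyes et al. 2004; the function name loop_number is ours) computes the loop number of a strongly connected digraph as the greatest common divisor of dist(u) + 1 - dist(v) over all edges (u, v), where dist is the BFS distance from an arbitrary root. The monomial system then has only fixed points as periodic points precisely when every strongly connected component of its dependency graph reports 1.

```python
from math import gcd
from collections import deque

def loop_number(nodes, edges):
    """Loop number (period) of a strongly connected digraph."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    root = next(iter(nodes))
    dist = {root: 0}
    queue = deque([root])
    while queue:                      # BFS distances from the root
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    g = 0
    for u, v in edges:                # gcd over all edge "defects"
        g = gcd(g, abs(dist[u] + 1 - dist[v]))
    return g

# dependency graph of the monomial system f = (x2, x3, x1): one directed
# 3-cycle, so the loop number is 3 and genuine periodic orbits exist
print(loop_number({1, 2, 3}, [(1, 2), (2, 3), (3, 1)]))   # 3
```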

Sequential Update Systems

The update order in a sequential dynamical system has been studied using combinatorial and algebraic techniques. A natural question to ask here is how the system depends on the update schedule. In Barrett et al. (2000, 2001b), Mortveit and Reidys (2001), and Reidys (1998), this question was answered on several levels for the special case where the update schedule is a permutation. We describe these results in some detail. Results about the more general case of update orders described by words over the indices of the local update functions can be found in Garcia et al. (2006).

Given local update functions f_i : k^n → k and permutation update orders σ and π, a natural question is when the two resulting SDS maps Φ_σ and Φ_π are identical and, more generally, how many different systems one obtains by varying the update order over all permutations. Both questions can be answered in great generality. The answer involves invariants of two graphs, namely, the acyclic orientations of the dependency graph Y of the local update functions and the update graph of Y. The update graph U(Y) of Y is the graph whose vertex set consists of all permutations of the vertex set of Y (Reidys 1998). There is an (undirected) edge between two permutations σ = (σ_1, …, σ_n) and τ = (τ_1, …, τ_n) if they differ by a transposition of two adjacent entries σ_i and σ_{i+1} such that there is no edge in Y between σ_i and σ_{i+1}.

The update graph encodes the fact that one can commute two local update functions f_i and f_j without affecting the end result Φ if i and j are not connected by an edge in Y. That is, ⋯ ∘ f_i ∘ f_j ∘ ⋯ = ⋯ ∘ f_j ∘ f_i ∘ ⋯ if and only if i and j are not connected by an edge in Y.

All permutations belonging to the same connected component of U(Y) give identical SDS maps. The number of connected components of U(Y) is therefore an upper bound for the number of functionally inequivalent SDS that can be generated by changing the update order alone. It is convenient to introduce an equivalence relation ∼_Y on S_Y by π ∼_Y σ if π and σ belong to the same connected component of the graph U(Y). It is then clear that if π ∼_Y σ, then the corresponding sequential dynamical systems are identical as maps. This can also be characterized in terms of acyclic orientations of the graph Y: each component of the update graph induces a unique acyclic orientation of Y. Moreover, we have the following result:

Proposition 9 (Reidys 1998)

There is a bijection
$$ F_Y : S_Y/{\sim_Y} \to \mathrm{Acyc}(Y), $$
where S_Y/∼_Y denotes the set of equivalence classes of ∼_Y and Acyc(Y) denotes the set of acyclic orientations of Y.

This upper bound on the number of functionally different systems was shown in Reidys (1998) to be sharp for Boolean systems, in the sense that for a given Y one can construct this number of different systems using appropriate combinations of NOR functions.

For two permutations σ and τ, it is easy to determine whether they give identical SDS maps: one simply compares their induced acyclic orientations. The number of acyclic orientations of the graph Y tells how many functionally different SDS maps one can obtain for a fixed graph and fixed vertex functions. The work of Cartier and Foata (1969) on partially commutative monoids studies a similar question, but their work is not concerned with finite dynamical systems.
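
Proposition 9 can be verified directly on small graphs. The brute-force sketch below (illustrative only; the graph Y and all identifiers are our own choices) builds the update graph U(Y) for Y a 4-cycle, counts its connected components with a union-find, and compares the result with the number of acyclic orientations of Y; both come out to 14.

```python
from itertools import permutations, product

Y_nodes = (0, 1, 2, 3)
Y_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # Y = a 4-cycle
edge_set = {tuple(sorted(e)) for e in Y_edges}

def adjacent(p, q):
    """Edge of U(Y): p and q differ by swapping consecutive entries
    that are NOT joined by an edge of Y."""
    diff = [i for i in range(len(p)) if p[i] != q[i]]
    return (len(diff) == 2 and diff[1] == diff[0] + 1
            and p[diff[0]] == q[diff[1]] and p[diff[1]] == q[diff[0]]
            and tuple(sorted((p[diff[0]], p[diff[1]]))) not in edge_set)

# connected components of U(Y) via a simple union-find
perms = list(permutations(Y_nodes))
parent = {p: p for p in perms}
def find(p):
    while parent[p] != p:
        parent[p] = parent[parent[p]]   # path halving
        p = parent[p]
    return p
for p in perms:
    for q in perms:
        if adjacent(p, q):
            parent[find(p)] = find(q)
components = len({find(p) for p in perms})

def is_acyclic(arcs):
    """Kahn's algorithm: an orientation is acyclic iff every node peels off."""
    indeg = {u: 0 for u in Y_nodes}
    for _, v in arcs:
        indeg[v] += 1
    ready = [u for u in Y_nodes if indeg[u] == 0]
    removed = 0
    while ready:
        u = ready.pop()
        removed += 1
        for a, v in arcs:
            if a == u:
                indeg[v] -= 1
                if indeg[v] == 0:
                    ready.append(v)
    return removed == len(Y_nodes)

acyclic = sum(is_acyclic([(u, v) if b else (v, u)
                          for b, (u, v) in zip(bits, Y_edges)])
              for bits in product((0, 1), repeat=len(Y_edges)))

print(components, acyclic)   # 14 14, matching the bijection of Proposition 9
```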

Note that permutation update orders have been studied sporadically in the context of cellular automata on circle graphs (Park et al. 1986), but not in a systematic way; typically the order (1, 2, …, n) or the even-odd/odd-even orders are used. As a side note, we remark that this work also confirms our observation that switching from a parallel update order to a sequential one turns the complex behavior found in Wolfram's "class III and IV" automata into much more regular or mundane dynamics (see, e.g., Schönfisch and de Roos 1999).

The work on functional equivalence was extended to dynamical equivalence (topological conjugation) in Barrett et al. (2001b) and Mortveit and Reidys (2001). The automorphism group of the graph Y can be made to act on the components of the update graph U(Y) and therefore also on the acyclic orientations of Y. All permutations contained in the components of an orbit under Aut(Y) give rise to dynamically equivalent sequential dynamical systems, that is, to isomorphic phase spaces. However, some additional technical assumptions are needed here: the local functions must be symmetric and induced (see Barrett et al. 2003c). This also leads to a bound on the number of dynamically inequivalent systems that can be generated by varying the update order alone. Again, this was first done for permutation update orders. The theory was extended to words over the vertex set of Y in Garcia et al. (2006) and Reidys (2006).

The structure of the graph Y influences the dynamics of the system. As an example, graph invariants such as the independent sets of Y turn out to be in bijective correspondence with the invariant set of sequential systems over the Boolean field k = {0, 1} that have nor_t : k^t → k, given by nor_t(x_1, …, x_t) = (1 + x_1) ⋯ (1 + x_t), as local functions (Reidys 2001). This can be extended to other classes, such as those with order-independent invariant sets, as in Hansson et al. (2005). We have already seen how the automorphisms of a graph give rise to equivalence (Mortveit and Reidys 2001). Also, if the graph Y has nontrivial covering maps, we can derive simplified or reduced (in an appropriate sense) versions of the original SDS over the image graphs of Y (see, e.g., Mortveit and Reidys 2004; Reidys 2005).
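
This correspondence is simple to observe computationally. In the sketch below (our own construction: Y is a path on three vertices, the local functions are the nor of the closed neighborhood, and the update order is the permutation (0, 1, 2)), the periodic states found by brute force are exactly the indicator vectors of the five independent sets of Y.

```python
from itertools import product

Y_nodes = (0, 1, 2)
Y_edges = [(0, 1), (1, 2)]        # Y = path on three vertices
order = (0, 1, 2)                 # permutation update schedule

nbrs = {u: {u} for u in Y_nodes}  # closed neighborhoods
for u, v in Y_edges:
    nbrs[u].add(v)
    nbrs[v].add(u)

def sds_map(x):
    """Apply the local nor functions sequentially in the given order."""
    x = list(x)
    for v in order:
        x[v] = int(not any(x[u] for u in nbrs[v]))
    return tuple(x)

def is_periodic(x):
    """x is periodic iff it lies on the limit cycle of its own orbit."""
    y = x
    for _ in range(2 ** len(Y_nodes)):   # long enough to reach the cycle
        y = sds_map(y)
    start = y
    while True:
        if y == x:
            return True
        y = sds_map(y)
        if y == start:
            return False

periodic = [x for x in product((0, 1), repeat=len(Y_nodes)) if is_periodic(x)]
ind_sets = [s for s in product((0, 1), repeat=len(Y_nodes))
            if not any(s[u] and s[v] for u, v in Y_edges)]
print(sorted(periodic) == sorted(ind_sets), len(periodic))   # True 5
```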

Parallel and sequential dynamical systems differ when it comes to invertibility. Whereas it is generally computationally intractable to determine whether a CA over Z^d is invertible for d ≥ 2 (Kari 2005), it is straightforward to determine this for a sequential dynamical system (Mortveit and Reidys 2001). For example, it turns out that the only invertible Boolean sequential dynamical systems with symmetric local functions are those whose local functions are either the parity function or the logical complement of the parity function (Barrett et al. 2001b).
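
The parity case is easy to confirm by brute force. In the following sketch (the example graph and update order are our own), the local function at v replaces x_v by the parity of the closed neighborhood of v; each such local step is an invertible affine map that changes only coordinate v, so the composed SDS map is a bijection, which the enumeration verifies.

```python
from itertools import product

Y_nodes = (0, 1, 2, 3)
Y_edges = [(0, 1), (1, 2), (2, 3)]
order = (2, 0, 3, 1)              # any permutation update order works

nbrs = {u: {u} for u in Y_nodes}  # closed neighborhoods
for u, v in Y_edges:
    nbrs[u].add(v)
    nbrs[v].add(u)

def parity_sds(x):
    """Sequential update with parity local functions."""
    x = list(x)
    for v in order:
        x[v] = sum(x[u] for u in nbrs[v]) % 2
    return tuple(x)

images = {parity_sds(x) for x in product((0, 1), repeat=len(Y_nodes))}
print(len(images) == 2 ** len(Y_nodes))   # True: the global map is a bijection
```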

Some classes of sequential dynamical systems, such as those induced by the nor function, have desirable stability properties (Hansson et al. 2005). These systems have minimal invariant sets (i.e., sets of periodic states) that do not depend on the update order. Additionally, these invariant sets are stable with respect to configuration perturbations.

If a state c is perturbed to a state c′ that is not periodic, this state will evolve to a periodic state c″ in one step; that is, the system will quickly return to the invariant set. However, the states c and c″ may not necessarily be in the same periodic orbit.

The Category of Sequential Dynamical Systems

As a general principle, in order to study a given class of mathematical objects, it is useful to study transformations between them. In order to provide a good basis for a mathematical analysis, the objects and transformations together should form a category, that is, the class of transformations between two objects should satisfy certain reasonable properties (see, e.g., Mac Lane 1998). Several proposed definitions of a transformation of SDS have been published, notably in Laubenbacher and Pareigis (2003) and Reidys (2005). One possible interpretation of a transformation of SDS from the point of view of agent-based simulation is that the transformation represents the approximation of one simulation by another or the embedding/projection of one simulation into/onto another. These concepts have obvious utility when considering different simulations of the same complex system.

One can take different points of view in defining a transformation of SDS. One approach is to require that a transformation is compatible with the defining structural elements of an SDS, that is, with the dependency graph, the local update functions, and the update schedule. If this is done properly, then one should expect to be able to prove that the resulting transformation induces a transformation at the level of phase spaces. That is, transformations between SDS should preserve the local and global dynamic behavior. This implies that transformations between SDS lead to transformations between the associated global update functions.

Since the point of view of SDS is that global dynamics emerges from system properties that are defined locally, the notion of SDS transformation should focus on the local structure. This is the point of view taken in Laubenbacher and Pareigis (2003). The definition given there is rather technical, and the details are beyond the scope of this entry. The basic idea is as follows. Let Φ_π = f_{π(n)} ∘ ⋯ ∘ f_{π(1)} and Φ_γ = g_{γ(m)} ∘ ⋯ ∘ g_{γ(1)} be two SDS with dependency graphs Y_π and Y_γ, respectively. A transformation F : Φ_π → Φ_γ is determined by the following:
  • A graph mapping φ : Y_π → Y_γ (reverse direction)

  • A family of maps k_{φ(v)} → k_v with v ∈ Y_π

  • An order-preserving map γ → π of update schedules

These maps are required to satisfy the property that they “locally” assemble to a coherent transformation. Using this definition of transformation, it is shown (Theorem 2.6 in Laubenbacher and Pareigis 2003) that the class of SDS forms a category. One of the requirements, for instance, is that the composition of two transformations is again a transformation. Furthermore, it is shown (Theorem 3.2 in Laubenbacher and Pareigis 2003) that a transformation of SDS induces a map of directed graphs on the phase spaces of the two systems. That is, a transformation of the local structural elements of SDS induces a transformation of global dynamics. One of the results proven in Laubenbacher and Pareigis (2003) is that every SDS can be decomposed uniquely into a direct product (in the categorical sense) of indecomposable SDS.

Another possible point of view is that a transformation
$$ F : \left(\Phi_{\pi} : k^n \to k^n\right) \to \left(\Phi_{\gamma} : k^m \to k^m\right) $$
is a function F : k^n → k^m such that F ∘ Φ_π = Φ_γ ∘ F, without requiring specific structural properties. This is the approach taken in Reidys (2005). This definition also results in a category and a collection of mathematical results. Whichever definition is chosen, much work remains to be done in studying these categories and their properties.
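
In this second formulation, checking that a given map is a transformation is a finite computation. The toy sketch below (systems and names of our own choosing) verifies the intertwining condition by enumerating the state space.

```python
from itertools import product

def is_morphism(F, phi_pi, phi_gamma, states):
    """Check the intertwining condition F(phi_pi(x)) == phi_gamma(F(x))."""
    return all(F[phi_pi[x]] == phi_gamma[F[x]] for x in states)

# toy example over k = {0, 1} with n = 2 and m = 1: the coordinate swap
# on k^2 projects onto the identity on k^1 via the mod-2 sum
states2 = list(product((0, 1), repeat=2))
swap = {(a, b): (b, a) for a, b in states2}        # Phi_pi on k^2
ident = {(c,): (c,) for c in (0, 1)}               # Phi_gamma on k^1
F = {(a, b): ((a + b) % 2,) for a, b in states2}   # candidate transformation
print(is_morphism(F, swap, ident, states2))        # True
```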

Future Directions

Agent-based computer simulation is an important method for modeling many complex systems, whose global dynamics emerges from the interaction of many local entities. Sometimes this is the only feasible approach, especially when available information is not enough to construct global dynamic models. The size of many realistic systems, however, leads to computer models that are themselves highly complex, even if they are constructed from simple software entities. As a result, it becomes challenging to carry out verification, validation, and analysis of the models, since these consist in essence of complex computer programs. This entry argues that the appropriate approach is to provide a formal mathematical foundation by introducing a class of mathematical objects to which one can map agent-based simulations. These objects should capture the key features of an agent-based simulation and should be mathematically rich enough to allow the derivation of general results and techniques. The mathematical setting of dynamical systems is a natural choice for this purpose.

The class of finite dynamical systems over a state set X which carries the structure of a finite field satisfies all these criteria. Parallel, sequential, and stochastic versions of these are rich enough to serve as the mathematical basis for models of a broad range of complex systems. While finite dynamical systems have been studied extensively from an experimental point of view, their mathematical theory should be considered to be in its infancy, providing a fruitful area of research at the interface of mathematics, computer science, and complex systems theory.

Bibliography

Primary Literature

  1. Bagrodia RL (1998) Parallel languages for discrete-event simulation models. IEEE Comput Sci Eng 5(2):27–38
  2. Barrett CL, Reidys CM (1999) Elements of a theory of simulation I: sequential CA over random graphs. Appl Math Comput 98:241–259
  3. Barrett CL, Mortveit HS, Reidys CM (2000) Elements of a theory of simulation II: sequential dynamical systems. Appl Math Comput 107(2–3):121–136
  4. Barrett CL, Hunt HB III, Marathe MV, Ravi SS, Rosenkrantz DJ, Stearns RE, Tosic P (2001) Garden of Eden and fixed point configurations in sequential dynamical systems. In: Proceedings of the international conference on combinatorics, computation and geometry DM-CCG. Paris, pp 95–110
  5. Barrett CL, Mortveit HS, Reidys CM (2001b) Elements of a theory of simulation III: equivalence of SDS. Appl Math Comput 122:325–340
  6. Barrett CL, Marathe MV, Smith JP, Ravi SS (2002) A mobility and traffic generation framework for modeling and simulating ad hoc communication networks. In: SAC'02. ACM, Madrid, pp 122–126
  7. Barrett CL, Hunt HB III, Marathe MV, Ravi SS, Rosenkrantz DJ, Stearns RE (2003a) On some special classes of sequential dynamical systems. Ann Comb 7(4):381–408
  8. Barrett CL, Hunt HB III, Marathe MV, Ravi SS, Rosenkrantz DJ, Stearns RE (2003b) Reachability problems for sequential dynamical systems with threshold functions. Theor Comput Sci 295(1–3):41–64
  9. Barrett CL, Mortveit HS, Reidys CM (2003c) Elements of a theory of computer simulation IV: sequential dynamical systems – fixed points, invertibility and equivalence. Appl Math Comput 134(1):153–171
  10. Barrett CL, Hunt HB III, Marathe MV, Ravi SS, Rosenkrantz DJ, Stearns RE (2006) Complexity of reachability problems for finite discrete sequential dynamical systems. J Comput Syst Sci 72:1317–1345
  11. Barrett CL, Hunt HB III, Marathe MV, Ravi SS, Rosenkrantz DJ, Stearns RE, Thakur M (2007) Computational aspects of analyzing social network dynamics. In: Proceedings of the international joint conference on artificial intelligence IJCAI. Paris, pp 2268–2273
  12. Barrett CL, Hunt HB III, Marathe MV, Ravi SS, Rosenkrantz DJ, Stearns RE, Thakur M (2007b) Predecessor existence problems for finite discrete dynamical systems. Theor Comput Sci 1–2:3–37
  13. Bartlett R, Garzon M (1993) Monomial cellular automata. Complex Syst 7(5):367–388
  14. Bernaschi M, Castiglione F (2002) Selection of escape mutants from immune recognition during HIV infection. Immunol Cell Biol 80:307–313
  15. Bernaschi M, Succi S, Castiglione F (2000) Large-scale cellular automata simulations of the immune system response. Phys Rev E 61:1851–1854
  16. Booch G, Rumbaugh J, Jacobson I (2005) Unified modeling language user guide, 2nd edn. Addison-Wesley, Reading
  17. Brand D, Zafiropulo P (1983) On communicating finite-state machines. J ACM 30:323–342
  18. Cartier P, Foata D (1969) Problèmes combinatoires de commutation et réarrangements. Lecture notes in mathematics, vol 85. Springer, Berlin
  19. Castiglione F, Agur Z (2003) Analyzing hypersensitivity to chemotherapy in a cellular automata model of the immune system. In: Preziosi L (ed) Cancer modeling and simulation. Chapman and Hall/CRC, London
  20. Castiglione F, Bernaschi M, Succi S (1997) Simulating the immune response on a distributed parallel computer. Int J Mod Phys C 8:527–545. doi:10.1142/S0129183197000424
  21. Castiglione F, Duca K, Jarrah A, Laubenbacher R, Hochberg D, Thorley-Lawson D (2007) Simulating Epstein-Barr virus infection with C-ImmSim. Bioinformatics 23(11):1371–1377
  22. Celada F, Seiden P (1992a) A computer model of cellular interactions in the immune system. Immunol Today 13(2):56–62
  23. Celada F, Seiden P (1992b) A model for simulating cognate recognition and response in the immune system. J Theor Biol 158:235–270
  24. Celada F, Seiden P (1996) Affinity maturation and hypermutation in a simulation of the humoral immune response. Eur J Immunol 26(6):1350–1358
  25. Chaudhuri PP (1997) Additive cellular automata: theory and applications, vol 1. IEEE Computer Society Press, Washington, DC
  26. Colón-Reyes O, Laubenbacher R, Pareigis B (2004) Boolean monomial dynamical systems. Ann Comb 8:425–439
  27. Colón-Reyes O, Jarrah A, Laubenbacher R, Sturmfels B (2006) Monomial dynamical systems over finite fields. Complex Syst 16(4):333–342
  28. Dawson D (1974) Synchronous and asynchronous reversible Markov systems. Canad Math Bull 17(5):633–649
  29. Ebeling W, Schweitzer F (2001) Swarms of particle agents with harmonic interactions. Theor Biosci 120(3–4):207–224
  30. Elspas B (1959) The theory of autonomous linear sequential networks. IRE Trans Circuit Theory 1:45–60
  31. Farmer J, Packard N, Perelson A (1986) The immune system, adaptation, and machine learning. Phys D 22(1–3):187–204
  32. Frisch U, Hasslacher B, Pomeau Y (1986) Lattice-gas automata for the Navier-Stokes equation. Phys Rev Lett 56:1505–1508
  33. Fukś H (2004) Probabilistic cellular automata with conserved quantities. Nonlinearity 17:159–173
  34. Garcia LD, Jarrah AS, Laubenbacher R (2006) Sequential dynamical systems over words. Appl Math Comput 174(1):500–510
  35. Gardner M (1970) The fantastic combinations of John Conway's new solitaire game "life". Sci Am 223:120–123
  36. Gouda M, Chang C (1986) Proving liveness for networks of communicating finite-state machines. ACM Trans Program Lang Syst 8:154–182
  37. Guo Y, Gong W, Towsley D (2000) Time-stepped hybrid simulation (TSHS) for large scale networks. In: Proceedings of INFOCOM 2000, nineteenth annual joint conference of the IEEE computer and communications societies, vol 2. IEEE, Washington, DC, pp 441–450
  38. Gupta A, Katiyar V (2005) Analyses of shock waves and jams in traffic flow. J Phys A 38:4069–4083
  39. Hansson AÅ, Mortveit HS, Reidys CM (2005) On asynchronous cellular automata. Adv Complex Syst 8(4):521–538
  40. Hedlund G (1969) Endomorphisms and automorphisms of the shift dynamical system. Math Syst Theory 3:320–375
  41. Hernández-Toledo A (2005) Linear finite dynamical systems. Commun Algebra 33(9):2977–2989
  42. Hopcroft JE, Motwani R, Ullman JD (2000) Automata theory, languages and computation. Addison-Wesley, Reading
  43. Hopfield J (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A 79:2554–2558
  44. Ilachinski A (2001) Cellular automata: a discrete universe. World Scientific, Singapore
  45. Jarrah A, Laubenbacher R, Stillman M, Vera-Licona P (2007) An efficient algorithm for the phase space structure of linear dynamical systems over finite fields (submitted)
  46. Jefferson DR (1985) Virtual time. ACM Trans Program Lang Syst 7(3):404–425
  47. Kari J (2005) Theory of cellular automata: a survey. Theor Comput Sci 334:3–33
  48. Keyfitz BL (2004) Hold that light! Modeling of traffic flow by differential equations. Stud Math Libr 26:127–153
  49. Kozen DC (1997) Automata and computability. Springer, New York
  50. Laubenbacher R, Pareigis B (2003) Decomposition and simulation of sequential dynamical systems. Adv Appl Math 30:655–678
  51. Lidl R, Niederreiter H (1997) Finite fields. Cambridge University Press, Cambridge
  52. Liggett TM (2005) Interacting particle systems. Classics in mathematics. Springer, Berlin (reprint of the 1985 original)
  53. Lind DA (1984) Applications of ergodic theory and sofic systems to cellular automata. Phys D 10:36–44
  54. Lindgren K, Moore C, Nordahl M (1998) Complexity of two-dimensional patterns. J Stat Phys 91(5–6):909–951
  55. Mac Lane S (1998) Categories for the working mathematician, 2nd edn. Graduate texts in mathematics, no 5. Springer, New York
  56. Macy MW, Kitts JA, Flache A (2003) Polarization in dynamic networks: a Hopfield model of emergent structure. In: Dynamic social network modeling and analysis. The National Academies Press, Washington, DC, pp 162–173
  57. Martin O, Odlyzko A, Wolfram S (1984) Algebraic properties of cellular automata. Commun Math Phys 93:219–258
  58. Milligan D, Wilson M (1993) The behavior of affine Boolean sequential networks. Connect Sci 5(2):153–167
  59. Minar N, Burkhart R, Langton C, Askenazi M (1996) The Swarm simulation system: a toolkit for building multi-agent simulations. Santa Fe Institute preprint series. http://www.santafe.edu/research/publications/wpabstract/199606042. Accessed 11 Aug 2008
  60. Misra J (1986) Distributed discrete-event simulation. ACM Comput Surv 18(1):39–65
  61. Moncion T, Hutzler G, Amar P (2006) Verification of biochemical agent-based models using Petri nets. In: Robert T (ed) International symposium on agent based modeling and simulation, ABModSim'06. Austrian Society for Cybernetics Studies, pp 695–700. http://www.ibisc.univ-evry.fr/pub/basilic/OUT/Publications/2006/MHA06
  62. Morpurgo D, Serentha R, Seiden P, Celada F (1995) Modelling thymic functions in a cellular automaton. Int Immunol 7:505–516
  63. Mortveit HS, Reidys CM (2001) Discrete, sequential dynamical systems. Discret Math 226:281–295
  64. Mortveit HS, Reidys CM (2004) Reduction of discrete dynamical systems over graphs. Adv Complex Syst 7(1):1–20
  65. Nagel K, Schreckenberg M (1992) A cellular automaton model for freeway traffic. J Phys I 2:2221–2229
  66. Nagel K, Wagner P (2006) Traffic flow: approaches to modelling and control. Wiley, Hoboken
  67. Nagel K, Schreckenberg M, Schadschneider A, Ito N (1995) Discrete stochastic models for traffic flow. Phys Rev E 51:2939–2949
  68. Nagel K, Rickert M, Barrett CL (1997) Large-scale traffic simulation. Lecture notes in computer science, vol 1215. Springer, Berlin, pp 380–402
  69. Nance RE (1993) A history of discrete event simulation programming languages. ACM SIGPLAN Not 28:149–175
  70. North MJ, Collier NT, Vos JR (2006) Experiences creating three implementations of the Repast agent modeling toolkit. ACM Trans Model Comput Simul 16:1–25
  71. Orponen P (1994) Computational complexity of neural networks: a survey. Nord J Comput 1:94–110
  72. Orponen P (1996) The computational power of discrete Hopfield networks with hidden units. Neural Comput 8:403–415
  73. Park JK, Steiglitz K, Thurston WP (1986) Soliton-like behavior in automata. Phys D 19:423–432
  74. Reidys C (1998) Acyclic orientations of random graphs. Adv Appl Math 21:181–192
  75. Reidys CM (2001) On acyclic orientations and sequential dynamical systems. Adv Appl Math 27:790–804
  76. Reidys CM (2005) On certain morphisms of sequential dynamical systems. Discret Math 296(2–3):245–257
  77. Reidys CM (2006) Sequential dynamical systems over words. Ann Comb 10(4):481–498
  78. Rickert M, Nagel K, Schreckenberg M, Latour A (1996) Two lane traffic simulations using cellular automata. Phys A 231:534–550
  79. Rothman DH (1988) Cellular-automaton fluids: a model for flow in porous media. Geophysics 53:509–518
  80. Russell S, Norvig P (2003) Artificial intelligence: a modern approach. Prentice-Hall, Upper Saddle River
  81. Schönfisch B, de Roos A (1999) Synchronous and asynchronous updating in cellular automata. BioSystems 51:123–143
  82. Shmulevich I, Dougherty ER, Kim S, Zhang W (2002a) Probabilistic Boolean networks: a rule-based uncertainty model for gene regulatory networks. Bioinformatics 18(2):261–274
  83. Shmulevich I, Dougherty ER, Zhang W (2002b) From Boolean to probabilistic Boolean networks as models of genetic regulatory networks. Proc IEEE 90(11):1778–1792
  84. Sipser M (1997) Introduction to the theory of computation. PWS Publishing, Boston
  85. Vasershtein L (1969) Markov processes over denumerable products of spaces describing large systems of automata. Probl Peredachi Inf 5(3):64–72
  86. von Neumann J, Burks AW (eds) (1966) Theory of self-reproducing automata. University of Illinois Press, Champaign
  87. Whitham G (1999) Linear and nonlinear waves, reprint edn. Pure and applied mathematics. Wiley-Interscience, New York
  88. Wolfram S (1983) Statistical mechanics of cellular automata. Rev Mod Phys 55:601–644
  89. Wolfram S (1986) Theory and applications of cellular automata. Advanced series on complex systems, vol 1. World Scientific, Singapore
  90. Wolfram S (2002) A new kind of science. Wolfram Media, Champaign
  91. Wooldridge M (2002) Introduction to multiagent systems. Wiley, Chichester

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Reinhard Laubenbacher (1), Email author
  • Abdul S. Jarrah (2)
  • Henning S. Mortveit (1)
  • S. S. Ravi (3)
  1. Virginia Bioinformatics Institute, Virginia Polytechnic Institute and State University, Virginia, USA
  2. Department of Mathematics and Statistics, American University of Sharjah, Sharjah, United Arab Emirates
  3. Department of Computer Science, University at Albany – State University of New York, New York, USA