Encyclopedia of Complexity and Systems Science

Living Edition
| Editors: Robert A. Meyers

Agent-Based Modeling and Artificial Life

  • Charles M. Macal
Living reference work entry
DOI: https://doi.org/10.1007/978-3-642-27737-5_7-5


Keywords: Cellular Automaton; Complex Adaptive System; Artificial Life; Pheromone Trail

Definition of the Subject

Agent-based modeling began as the computational arm of artificial life some 20 years ago. Artificial life is concerned with the emergence of order in nature. How do systems self-organize and spontaneously achieve a higher-ordered state? Agent-based modeling, then, is concerned with exploring and understanding the processes that lead to the emergence of order through computational means. The essential features of artificial life models are translated into computational algorithms through agent-based modeling. With its historical roots in artificial life, agent-based modeling has become a distinctive form of modeling and simulation. Agent-based modeling is a bottom-up approach to modeling complex systems by explicitly representing the behaviors of large numbers of agents and the processes by which they interact. These essential features are all that is needed to produce at least rudimentary forms of emergent behavior at the systems level. To understand the current state of agent-based modeling and where the field aspires to be in the future, it is necessary to understand the origins of agent-based modeling in artificial life.


The field of artificial life, or “ALife,” is intimately connected to agent-based modeling, or “ABM.” Although one can easily enumerate some of life’s distinctive properties, such as reproduction, respiration, adaptation, emergence, etc., a precise definition of life remains elusive.

Artificial life had its inception as a coherent and sustainable field of investigation at a workshop in the late 1980s (Langton 1989a). This workshop drew together specialists from diverse fields who had been working on related problems in different guises, using different vocabularies suited to their fields.

At about the same time, the introduction of the personal computer suddenly made computing accessible, convenient, inexpensive, and compelling as an experimental tool. The future seemed to have almost unlimited possibilities for the development of ALife computer programs to explore life and its possibilities. Thus, several ALife software programs emerged that sought to encapsulate the essential elements of life through incorporation of ALife-related algorithms into easily usable software packages that could be widely distributed. Computational programs for modeling populations of digital organisms, such as Tierra, Avida, and Echo, were developed along with more general purpose agent-based simulators such as Swarm.

Yet, the purpose of ALife was never restricted to understanding or recreating life as it exists today. According to Langton:

Artificial systems which exhibit lifelike behaviors are worthy of investigation in their own right, whether or not we think that the processes that they mimic have played a role in the development or mechanics of life as we know it to be. Such systems can help us expand our understanding of life as it could be. (p. xvi in Langton 1989a)

The field of ALife addresses lifelike properties of systems at an abstract level by focusing on the information content of such systems independent of the medium in which they exist, whether it be biological, chemical, physical, or in silico. This means that computation, modeling, and simulation play a central role in ALife investigations.

The relationship between ALife and ABM is complex. A case can be made that the emergence of ALife as a field was essential to the creation of agent-based modeling: the computational tools for developing sophisticated models of digital organisms and general purpose artificial life simulators were both required and became possible in the 1980s. Likewise, a case can be made that the possibility of creating agent-based models was essential to making ALife a promising and productive endeavor. ABM made it possible to understand the logical outcomes and implications of ALife models and lifelike processes. Traditional analytical means, although valuable in establishing baseline information, were limited in their ability to include essential features of ALife. Many threads of ALife are still intertwined with developments in ABM and vice versa. Agent-based models demonstrate the emergence of lifelike features using ALife frameworks; ALife algorithms are widely used in agent-based models to represent agent behaviors. These threads are explored in this entry. In ALife terminology, one could say that ALife and ABM have coevolved to their present states. In all likelihood, they will continue to do so.

This entry covers in a necessarily brief and perhaps superficial, but broad, way these relationships between ABM and ALife and extrapolates to future possibilities. This entry is organized as follows. Section “Artificial Life” introduces artificial life, its essential elements, and its relationship to computing and agent-based modeling. Section “ALife in Agent-Based Modeling” describes several examples of ABM applications spanning many scales. Section “Future Directions” concludes with future directions for ABM and ALife. A bibliography is included for further reading.

Artificial Life

Artificial life was initially motivated by the need to model biological systems and brought with it the need for computation. The field of ALife has always been multidisciplinary and continues to encompass a broad research agenda covering a variety of topics from a number of disciplines, including:
  • Essential elements of life and artificial life

  • Origins of life and self-organization

  • Evolutionary dynamics

  • Replication and development processes

  • Learning and evolution

  • Emergence

  • Computation of living systems

  • Simulation systems for studying ALife

  • Many others

Each of these topics has threads leading into agent-based modeling.

The Essence of ALife

The essence of artificial life is summed up by Langton (p. xxii in Langton (1989a)) with a list of essential characteristics:
  • Lifelike behavior on the part of man-made systems

  • Semiautonomous entities whose local interactions with one another are governed by a set of simple rules

  • Populations, rather than individuals

  • Simple rather than complex specifications

  • Local rather than global control

  • Bottom-up rather than top-down modeling

  • Emergent rather than prespecified behaviors

Langton observes that complex high-level dynamics and structures often emerge (in living and artificial systems), developing over time out of the local interactions among low-level primitives. Agent-based modeling has grown up around the need to model the essentials of ALife.

Self-Replication and Cellular Automata

Artificial life traces its beginnings to the work of John von Neumann in the 1940s and his investigations into the theoretical possibility of a self-replicating machine (Taub 1961). Such a machine would carry instructions not only for its operation but also for its replication. This raises a regress problem: would a machine that replicates itself need to contain, in addition to the instructions for its operation and replication, further instructions for replicating those instructions themselves? (see Fig. 1.) Von Neumann used the abstract mathematical construct of cellular automata, originally conceived in discussions with Stanislaw Ulam, to prove that such a machine could be designed, at least in theory. Von Neumann was never able to build such a machine, owing to the lack of sufficiently sophisticated computers at the time.
Fig. 1

Von Neumann’s self-replication problem

Cellular automata (CA) have been central to the development of computational artificial life models. Virtually all of the early agent-based models that required agents to be spatially located took the form of von Neumann’s original cellular automata. A cellular automaton is a finite-state machine in which time and space are treated as discrete rather than continuous, as would be the case, for example, in differential equation models. A typical CA is a two-dimensional grid or lattice consisting of cells. Each cell assumes one of a finite number of states at any time. A cell’s neighborhood is the set of cells surrounding it, typically a five-cell neighborhood (von Neumann neighborhood) or a nine-cell neighborhood (Moore neighborhood), as in Fig. 2.
Fig. 2

Cellular automata neighborhoods

A set of simple state transition rules determines the value of each cell based on the cell’s state and the states of neighboring cells. Every cell is updated at each time step according to the transition rules. Each cell is identical in terms of its update rules; cells differ only in their initial states. A CA is deterministic in the sense that the same state for a cell and its set of neighbors always results in the same updated state for the cell. Typically, CAs are set up with periodic boundary conditions, meaning that the cells on one edge of the grid are the neighbors of the cells on the opposite edge. The space of the CA grid thus forms the surface of a toroid, or donut shape, so there is no boundary per se. It is straightforward to extend the notion of cellular automata to three or more dimensions.

Von Neumann solved the self-replication problem by developing a cellular automaton in which each cell had 29 possible states and five neighbors (including the updated cell itself). In the von Neumann neighborhood, neighbor cells are in the north, south, east, and west directions from the updated cell.
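The generic CA update machinery described above can be sketched in a few lines of Python. This is an illustrative sketch only; the grid representation and the placeholder rule passed to `step` are assumptions, not any particular published model:

```python
# Offsets for the two neighborhoods described above (excluding the cell itself).
VON_NEUMANN = [(-1, 0), (1, 0), (0, -1), (0, 1)]
MOORE = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def count_on(grid, r, c, offsets):
    """Count On (1) neighbors, wrapping indices so the grid forms a torus."""
    rows, cols = len(grid), len(grid[0])
    return sum(grid[(r + dr) % rows][(c + dc) % cols] for dr, dc in offsets)

def step(grid, rule, offsets=MOORE):
    """Synchronous update: every new state is computed from the same
    current grid, then applied all at once."""
    return [[rule(grid[r][c], count_on(grid, r, c, offsets))
             for c in range(len(grid[0]))]
            for r in range(len(grid))]
```

Passing a different `rule` function and neighborhood turns the same machinery into the Game of Life, a majority-vote model, or any other two-state CA.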

The Game of Life
Conway’s Game of Life, or Life, developed in the 1970s, is an important example of a CA (Berlekamp et al. 2003; Gardner 1970; Poundstone 1985). The simplest way to illustrate some of the basic ideas of agent-based modeling is through a CA. The Game of Life is a two-state, nine-neighbor cellular automaton with three rules that determine the state (either On, i.e., shaded, or Off, i.e., white) of each cell:
  1. A cell will be On in the next generation if exactly three of its eight neighboring cells are currently On.

  2. A cell will retain its current state if exactly two of its neighbors are On.

  3. A cell will be Off otherwise.


Initially, a small set of On cells is randomly distributed over the grid. The three rules are then applied repeatedly to all cells in the grid.
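The three rules can be implemented directly. The sketch below assumes a toroidal grid of 0/1 cells, as described in the CA discussion above:

```python
def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid,
    applying the three rules given above."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count On cells among the eight surrounding neighbors, wrapping edges.
            on = sum(grid[(r + dr) % rows][(c + dc) % cols]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
            if on == 3:
                new[r][c] = 1            # Rule 1: On with exactly three On neighbors
            elif on == 2:
                new[r][c] = grid[r][c]   # Rule 2: retain state with exactly two
            else:
                new[r][c] = 0            # Rule 3: Off otherwise
    return new
```

Applying `life_step` repeatedly to a row of three On cells produces the well-known "blinker" oscillator, which alternates between horizontal and vertical orientations indefinitely.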

After several updates of all cells on the grid, distinctive patterns emerge, and in some cases these patterns can sustain themselves indefinitely throughout the simulation (Fig. 3). The state of each cell is based only on the current state of the cell and the cells touching it in its immediate neighborhood. The nine-neighbor per neighborhood assumption built into Life determines the scope of the locally available information for each cell to update its state.
Fig. 3

Game of life

Conway showed that, at least in theory, the structures and patterns that can arise during a Life computation are complex enough to serve as the basis for a fully functional computer, one capable of spontaneously generating self-replicating structures (see the section below on universal computation). Three observations are important about the Life rules:
  • As simple as the state transition rules are, by using only local information, structures of arbitrarily high complexity can emerge in a CA.

  • The specific patterns that emerge are extremely sensitive to the specific rules used. For example, changing Rule 1 above to “A cell will be On in the next generation if exactly four of its eight neighboring cells are currently On” results in the development of completely different patterns.

  • The Game of Life provides insights into the role of information in fundamental life processes.

Cellular Automata Classes
Wolfram investigated the possibilities for complexity in cellular automata across the full range of transition rules and initial states, using one-dimensional cellular automata (Wolfram 1984). He categorized four distinct classes for the resulting patterns produced by a CA as its rules are applied repeatedly over time. These are:
  • Class I: homogeneous state

  • Class II: simple stable or periodic structure

  • Class III: chaotic (non-repeating) pattern

  • Class IV: complex patterns of localized structures

The most interesting of these is Class IV cellular automata, in which very complex patterns of non-repeating localized structures emerge that are often long lived. Wolfram showed that these Class IV structures were also complex enough to support universal computation (Wolfram 2002). Langton (1992) coined the term “life at the edge of chaos” to describe the idea that Class IV systems are situated in a thin region between Class II and Class III systems. Agent-based models often yield Class I, Class II, and Class III behaviors.
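A one-dimensional CA of the kind Wolfram studied is compact enough to sketch in full. The rule-number encoding below is Wolfram's standard convention; the initial condition used in the example is an illustrative choice:

```python
def eca_step(cells, rule):
    """One update of a one-dimensional, two-state, three-neighbor cellular
    automaton with periodic boundaries. `rule` is the Wolfram rule number
    (0-255); bit k of the rule gives the new state for neighborhood
    pattern k = 4*left + 2*center + 1*right."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] +
                      2 * cells[i] +
                      cells[(i + 1) % n])) & 1
            for i in range(n)]
```

Iterating `eca_step` with rule 110, for example, produces the Class IV behavior that Wolfram later showed to be computationally universal.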

Other experiments with CAs investigated the simplest representations that could replicate themselves and produce emergent structures. Langton’s loop is a self-replicating two-dimensional cellular automaton, much simpler than von Neumann’s (Langton 1984). Although not complex enough to be a universal computer, Langton’s loop was the simplest known structure that could reproduce itself. Langton’s ant is a two-dimensional CA with a simple set of rules, but complicated emergent behavior. Following a simple set of rules for moving from cell to cell, a simulated ant displays unexpectedly complex behavior. After an initial period of chaotic movements in the vicinity of its initial location, the ant begins to build a recurrent pattern of regular structures that repeats itself indefinitely (Langton 1986). Langton’s ant has behaviors complex enough to be a universal computer.
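Langton's ant itself is simple enough to sketch in a few lines. In the sketch below, the grid size, starting position, and initial orientation are illustrative assumptions; the two turning rules are the standard formulation:

```python
def langtons_ant(steps, size=100):
    """Run Langton's ant on a size-by-size toroidal grid, starting at the
    center facing up. On a white cell: turn right, flip the cell, move
    forward; on a black cell: turn left, flip the cell, move forward.
    Returns the set of black cells."""
    black = set()
    r = c = size // 2
    dr, dc = -1, 0  # facing "up" (row index decreasing)
    for _ in range(steps):
        if (r, c) in black:
            dr, dc = -dc, dr      # turn left (counterclockwise)
            black.discard((r, c))
        else:
            dr, dc = dc, -dr      # turn right (clockwise)
            black.add((r, c))
        r, c = (r + dr) % size, (c + dc) % size
    return black
```

Run for roughly ten thousand steps, the ant's movements look chaotic; thereafter it settles into the recurrent "highway" pattern described above.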

Genotype/Phenotype Distinction

Biologists distinguish between the genotype and the phenotype as hallmarks of biological systems. The genotype is the template – the set of instructions, the specification, and the blueprint – for an organism. DNA is the genotype for living organisms, for example. A DNA strand contains the complete instructions for the replication and development of the organism. The phenotype is the organism – the machine, the product, and the result – that develops from the instructions in the genotype (Fig. 4).
Fig. 4

Genotype and phenotype relations

Morphogenesis is the developmental process by which the phenotype develops in accord with the genotype, through interactions with and resources obtained from its environment. In a famous paper, Turing (1952) modeled the dynamics of morphogenesis and, more generally, the problem of how patterns self-organize spontaneously in nature. Turing used differential equations to model a simple set of reaction-diffusion chemical reactions. Turing demonstrated that only a few assumptions were necessary to bring about the emergence of wave patterns and gradients of chemical concentration, suggestive of morphological patterns that commonly occur in nature. Reaction-diffusion systems are characterized by the simultaneous processes of attraction and repulsion and are the basis for the agent behavioral rules (attraction and repulsion) in many social agent-based models.
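The diffusion half of such a reaction-diffusion system can be sketched as a discrete relaxation on a ring of cells. This fragment is illustrative only, not Turing's exact equations; a full Turing model couples two such chemical concentrations with different diffusion rates and a local reaction term:

```python
def diffuse(conc, rate):
    """One explicit Euler step of discrete diffusion on a ring of cells:
    each cell relaxes toward the average of its two neighbors. `rate`
    plays the role of a diffusion coefficient times the time step."""
    n = len(conc)
    return [conc[i] + rate * (conc[(i - 1) % n] + conc[(i + 1) % n] - 2 * conc[i])
            for i in range(n)]
```

Because each step only redistributes concentration between neighbors, the total amount of chemical is conserved, which is one basic sanity check on any such discretization.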

More recently, Bonabeau extended Turing’s treatment of morphogenesis to a theory of pattern formation based on agent-based modeling. Bonabeau (1997) states the reason for relying on ABM: “because pattern-forming systems based on agents are (relatively) more easily amenable to experimental observations.”

Information Processes

One approach to building systems from a genotype specification is based on the methodology of recursively generated objects. Such recursive systems are compact in their specification, and their repeated application can result in complex structures, as demonstrated by cellular automata.

Recursive systems are logic systems in which strings of symbols are recursively rewritten based on a minimum set of instructions. Recursive systems, or term replacement systems, as they have been called, can result in complex structures. Examples of recursive systems include cellular automata, as described above, and Lindenmayer systems, called L-systems (Le Novere and Shimizu 2001). An L-system consists of a formal grammar, which is a set of rules for rewriting strings of symbols. L-systems have been used extensively for modeling living systems, for example, plant growth and development, producing highly realistic renderings of plants, with intricate morphologies and branching structures.
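An L-system's parallel rewriting is straightforward to sketch. The example grammar below is Lindenmayer's original model of algae growth; the function itself is generic:

```python
def lsystem(axiom, rules, generations):
    """Apply an L-system's rewriting rules in parallel to every symbol of
    the current string, once per generation. Symbols with no rule are
    copied unchanged."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae grammar: A -> AB, B -> A
ALGAE = {"A": "AB", "B": "A"}
```

Successive generations of the algae grammar give A, AB, ABA, ABAAB, ABAABABA, and so on; the string lengths follow the Fibonacci sequence, a simple example of complex structure arising from a compact recursive specification.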

Wolfram (1999) used symbolic recursion as a basis for developing Mathematica, the computational mathematics system based on symbolic processing and term replacement. Unlike numeric programming languages, a symbolic programming language allows a variable to be a basic object and does not require a variable to be assigned a value before it is used in a program.

Any agent-based model is essentially a recursive system. Time is simulated by the repeated application of the agent updating rules. The genotype is the set of rules for the agent behaviors. The phenotype is a set of the patterns and structures that emerge from the computation. As in cellular automata and recursive systems, extremely complex structures emerge in agent-based models that are often unpredictable from examination of the agent rules.


Emergence

One of the primary motivations for the field of ALife is to understand emergent processes, that is, the processes by which life emerges from its constituent elements. Langton writes: “The ‘key’ concept in ALife is emergent behavior” (p. 2 in Langton 1989b). Complex systems exhibit patterns of emergence that are not predictable from inspection of the individual elements. Emergence is described as unexpected, unpredictable, or otherwise surprising: the modeled system exhibits behaviors that are not explicitly built into the model. Unpredictability is due to the nonlinear effects that result from the interactions of entities having simple behaviors. Emergence, by these definitions, is something of a subjective notion.

In biological systems, emergence is a central issue whether it be the emergence of the phenotype from the genotype, the emergence of protein complexes from genomic information networks (Kauffman 1993), or the emergence of consciousness from networks of millions of brain cells.

One of the motivations for agent-based modeling is to explore the emergent behaviors exhibited by the simulated system. In general, agent-based models often exhibit patterns and relationships that emerge from agent interactions. An example is the observed formation of groups of agents that collectively act in coherent and coordinated patterns. Complex adaptive systems, widely investigated by Holland in his agent-based model Echo (Holland 1995), are often structured in hierarchies of emergent structures. Emergent structures can collectively form higher-order structures, using the lower-level structures as building blocks. An emergent structure itself can take on new emergent behaviors. These structures in turn affect the agents from which the structure has emerged in a process called downward causation (Gilbert 2002). For example, in the real world, people organize and identify with groups, institutions, nations, etc. They create norms, laws, and protocols that in turn act on the individuals comprising the group.

From the perspective of agent-based modeling, emergence has some interesting challenges for modeling:
  • How does one operationally define emergence with respect to agent-based modeling?

  • How does one automatically identify and measure the emergence of entities in a model?

  • How do the agents that comprise an emergent entity, as perceived by an observer, recognize that they are part of that entity?

Artificial Chemistry

Artificial chemistry is a subfield of ALife. One of the original goals of artificial chemistry was to understand how life could originate from prebiotic chemical processes. Artificial chemistry studies self-organization in chemical reaction networks by simulating chemical reactions between artificial molecules. Artificial chemistry specifies well-understood chemical reactions and other information such as reaction rates, relative molecular concentrations, probabilities of reaction, etc. These form a network of possibilities. The artificial substances and the networks of chemical reactions that emerge from the possibilities are studied through computation. Reactions are specified as recursive algebras and activated as term replacement systems (Fontana 1992).


The emergence of autocatalytic sets, or hypercycles, has been a prime focus of artificial chemistry (Eigen and Schuster 1979). A hypercycle is a self-contained system of molecules linked in a self-replicating, and thereby self-sustaining, cycle of chemical reactions. Hypercycles evolve through a process by which self-replicating entities compete for selection.

The hypercycle model illustrates how an ALife process can be adapted to the agent-based modeling domain. Inspired by the hypercycle model, Padgett et al. (2003) developed an agent-based model of the coevolution of economic production and economic firms, focusing on skills. Padgett used the model to establish three principles of social organization that provide foundations for the evolution of technological complexity:
  • Structured topology (how interaction networks form)

  • Altruistic learning (how cooperation and exchange emerge)

  • Stigmergy (how agent communication is facilitated by using the environment as a means of information exchange among agents)

Digital Organisms

The widespread availability of personal computers spurred the development of ALife programs used to study evolutionary processes in silico. Tierra was the first system devised in which computer programs were successfully able to evolve and adapt (Ray 1991). Avida extended Tierra to account for the spatial distribution of organisms and other features (Ofria and Wilke 2004; Wilke and Adami 2002). Echo is a simulation framework for implementing models to investigate mechanisms that regulate diversity and information processing in complex adaptive systems (CAS), systems comprised of many interacting adaptive agents (Holland 1975, 1995). In implementations of Echo, populations evolve interaction networks, resembling species communities in ecological systems, which regulate the flow of resources.

Systems such as Tierra, Avida, and Echo simulate populations of digital organisms, based on the genotype/phenotype schema. They employ computational algorithms to mutate and evolve populations of organisms living in a simulated computer environment. Organisms are represented as strings of symbols, or agent attributes, in computer memory. The environment provides them with resources (computation time) they need to survive, compete, and reproduce. Digital organisms interact in various ways and develop strategies to ensure survival in resource-limited environments.

The digital organism approach is extended to agent-based modeling in DOVE, a system implementing individual-based models of food webs (Wilke and Chow 2006). Agent-based models allow a more complete representation of agent behaviors and their evolutionary adaptability at both the individual and population levels.

ALife and Computing

Creating lifelike forms through computation is central to artificial life. Is it possible to create life through computation? The capabilities and limitations of computation constrain the types of artificial life that can be created. The history of ALife has close ties with important events in the history of computation.

Alan Turing (1938) investigated the limitations of computation by developing an abstract and idealized computer, called a universal Turing machine (UTM). A UTM has an infinite tape (memory) and is therefore an idealization of any actual computer that may be realized. A UTM is capable of computing anything that is computable, that is, anything that can be derived via a logical, deductive series of statements. Are the algorithms used in today’s computers, and in ALife calculations and agent-based models in particular, as powerful as universal computers?

Any system that can effectively simulate a small set of logical operations (such as AND and NOT) can effectively produce any possible computation. Simple rule systems in cellular automata were shown to be equivalent to universal computers (von Neumann 1966; Wolfram 2002) and in principle able to compute anything that is computable – perhaps, even life!

Some have argued that life, in particular human consciousness, is not the result of a logical-deductive or algorithmic process and therefore not computable by a universal Turing machine. This problem is more generally referred to as the mind-body problem (Lucas 1961). Dreyfus (1979) argues against the assumption often made in the field of artificial intelligence that human minds function like general purpose symbol manipulation machines. Penrose (1989) argues that the rational processes of the human mind transcend formal logic systems. In a somewhat different view, biological naturalism contends (Searle 1990) that human behavior might be able to be simulated, but human consciousness is outside the bounds of computation.

Such philosophical debates are as relevant to agent-based modeling as they are to artificial intelligence, for they are the basis of answering the question of what kind of systems and processes agent-based models will ultimately be able, or unable, to simulate.

Artificial Life Algorithms

ALife uses several biologically inspired computational algorithms (Olariu and Zomaya 2006). Bioinspired algorithms include those based on Darwinian evolution, such as evolutionary algorithms; those based on neural structures, such as neural networks; and those based on decentralized decision-making behaviors observed in nature. These algorithms are commonly used to model adaptation and learning in agent-based modeling or to optimize the behaviors of whole systems.

Evolutionary Computing

Evolutionary computing includes a family of related algorithms and programming solution techniques inspired by evolutionary processes, especially the genetic processes of DNA replication and cell division (Eiben and Smith 2007). These techniques are known as evolutionary algorithms and include the following (Back 1996):
  • Genetic algorithms (Goldberg 1989, 1994; Holland 1975; Holland et al. 2000; Mitchell and Forrest 1994)

  • Evolution strategies (Rechenberg 1973)

  • Learning classifier systems (Holland et al. 2000)

  • Genetic programming (Koza 1992)

  • Evolutionary programming (Fogel et al. 1966)

Genetic algorithms (GA) model the dynamic processes by which populations of individuals evolve to improved levels of fitness for their particular environment over repeated generations. GAs illustrate how evolutionary algorithms process a population and apply the genetic operations of mutation and crossover (see Fig. 5). Each behavior is represented as a chromosome consisting of a series of symbols, for example, as a series of 0s and 1s. The encoding process establishing correspondence between behaviors and their chromosomal representations is part of the modeling process.
Fig. 5

Genetic algorithm

The general steps in a genetic algorithm are as follows:
  1. Initialization: Generate an initial population of individuals. The individuals are unique and include specific encodings of attributes in chromosomes that represent the characteristics of the individuals.

  2. Evaluation: Calculate the fitness of all individuals according to a specified fitness function.

  3. Checking: If any of the individuals has achieved an acceptable level of fitness, stop; the problem is solved. Otherwise, continue with selection.

  4. Selection: Select the best pair of individuals in the population for reproduction according to their fitness levels.

  5. Crossover: Combine the chromosomes of the two best individuals through a crossover operation, and produce a pair of offspring.

  6. Mutation: Randomly mutate the chromosomes of the offspring.

  7. Replacement: Replace the least fit individuals in the population with the offspring.

  8. Continue at Step 2.


Steps 5 and 6 above, the operations of crossover and mutation, comprise the set of genetic operators inspired by nature. This series of steps for a GA comprises a basic framework rather than a specific implementation. Actual GA implementations include numerous variations and alternatives in several of the steps.
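The steps above can be sketched as a minimal steady-state GA. The population size, mutation rate, single-point crossover, and "replace the two least fit" scheme below are illustrative choices among the many variations mentioned:

```python
import random

def genetic_algorithm(fitness, length=10, pop_size=20, generations=200,
                      mutation_rate=0.01, target=None, seed=0):
    """A minimal sketch of the GA framework above, over bit-string chromosomes."""
    rng = random.Random(seed)
    # Step 1: initialization -- random bit-string chromosomes
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # Step 2: evaluation
        if target is not None and fitness(pop[0]) >= target:
            break                                    # Step 3: checking
        p1, p2 = pop[0], pop[1]                      # Step 4: selection (two fittest)
        cut = rng.randrange(1, length)               # Step 5: single-point crossover
        c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        for child in (c1, c2):                       # Step 6: mutation
            for i in range(length):
                if rng.random() < mutation_rate:
                    child[i] = 1 - child[i]
        pop[-2:] = [c1, c2]                          # Step 7: replacement
    return max(pop, key=fitness)                     # Step 8 is the loop itself
```

With `fitness=sum`, this solves the simple OneMax problem (maximize the number of 1 bits); because the fittest individual is never replaced, the best fitness in the population can only improve over generations.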

Evolution strategies (ES) are similar to genetic algorithms but rely on mutation as their primary genetic operator.

Learning classifier systems (LCS) build on genetic algorithms and adaptively assign relative weights to sensor-action sets that result in the most positive outcomes relative to a goal.

Genetic programming (GP) has similar features to genetic algorithms, but instead of using 0s and 1s or other symbols for comprising chromosomes, GPs combine logical operations and directives in a tree structure. In effect, chromosomes in GPs represent whole computer programs that perform a variety of functions with varying degrees of success and efficiencies. GP chromosomes are evaluated against fitness or performance measures and recombined. Better-performing chromosomes are maintained and expand their representation in the population. For example, an application of a GP is to evolve a better-performing rule set that represents an agent’s behavior.

Evolutionary programming (EP) is a similar technique to genetic programming, but relies on mutation as its primary genetic operator.

Biologically Inspired Computing

Artificial neural networks (ANN) are another type of commonly used biologically inspired algorithm (Mehrotra et al. 1996). An artificial neural network uses mathematical models based on the structures observed in neural systems. An artificial neuron contains a stimulus-response model of neuron activation based on thresholds of stimulation. In modeling terms, neural networks are equivalent to nonlinear, statistical data modeling techniques. Artificial neural networks can be used to model complex relationships between inputs and outputs and to find patterns in data that are dynamically changing. An ANN is adaptive in that changes in its structure are based on external or internal information that flows through the network. The adaptive capability makes ANN an important technique in agent-based models.
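The stimulus-response behavior of a single artificial neuron can be sketched directly. The sigmoid activation below is a smooth version of the threshold response described above; the specific weights used in the usage note are illustrative choices:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of input stimuli passed
    through a sigmoid activation, giving an output between 0 and 1."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))
```

With weights `[10.0, 10.0]` and bias `-5.0`, for example, the neuron fires (output near 1) when either input is active and stays quiet (output near 0) otherwise, behaving like a soft OR gate; a network is built by feeding such outputs forward as inputs to further neurons.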

Swarm intelligence refers to problem-solving techniques, usually applied to optimization problems, that are based on decentralized problem-solving strategies observed in nature. These include:
  • Ant colony optimization (Dorigo and Stützle 2004)

  • Particle swarm optimization (Clerc 2006)

Swarm intelligence algorithms simulate the movement and interactions of large numbers of ants or particles over a search space. In terms of agent-based modeling, the ants or particles are the agents, and the search space is the environment. Agents have position and state as attributes. In the case of particle swarm optimization, agents also have velocity.

Ant colony optimization (ACO) mimics techniques that ants use to forage and find food efficiently (Bonabeau et al. 1999; Engelbrecht 2006). The general idea of ant colony optimization algorithms is as follows:
  1. In a typical ant colony, ants search randomly until one of them finds food.

  2. Then they return to their colony and lay down a chemical pheromone trail along the way.

  3. When other ants find such a pheromone trail, they are more likely to follow the trail rather than to continue to search randomly.

  4. As other ants find the same food source, they return to the nest, reinforcing the original pheromone trail as they return.

  5. As more and more ants find the food source, the ants eventually lay down a strong pheromone trail to the point that virtually all the ants are directed to the food source.

  6. As the food source is depleted, fewer ants are able to find the food, and fewer ants lay down a reinforcing pheromone trail; the pheromone naturally evaporates, and eventually, no ants proceed to the food source, as the ants shift their attention to searching for new food sources.


In an ant colony optimization computational model, the optimization problem is represented as a graph, with nodes representing places and links representing possible paths. An ant colony algorithm mimics ant behavior with simulated ants moving from node to node in the graph, laying down pheromone trails, etc. The process by which ants communicate indirectly by using the environment as an intermediary is known as stigmergy (Bonabeau et al. 1999) and is commonly used in agent-based modeling.
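The evaporation and reinforcement dynamics described above can be sketched directly. This is a minimal illustration, not a complete ACO algorithm: the dictionary encoding of edges, the deposit rule, and the parameter values are assumptions of the sketch.

```python
import random

def update_pheromone(pheromone, ant_paths, evaporation=0.5, deposit=1.0):
    """One update step: every trail evaporates, then each ant deposits
    pheromone on the edges of its path, shorter paths depositing more."""
    for edge in pheromone:
        pheromone[edge] *= (1.0 - evaporation)            # evaporation
    for path in ant_paths:
        for edge in zip(path, path[1:]):
            pheromone[edge] += deposit / (len(path) - 1)  # reinforcement
    return pheromone

def choose_next(node, neighbors, pheromone):
    """An ant at `node` picks a neighbor with probability proportional to
    the pheromone on the connecting edge -- stigmergy in miniature."""
    weights = [pheromone[(node, n)] for n in neighbors]
    return random.choices(neighbors, weights=weights)[0]
```

Iterating these two functions over a colony of simulated ants concentrates pheromone on short paths, which is the mechanism the six steps above describe.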

Particle swarm optimization (PSO) is another decentralized problem-solving technique in which a swarm of particles is simulated as it moves over a search space in search of a global optimum. A particle stores its best position found so far in its memory and is aware of the best positions obtained by its neighboring particles. The velocity of each particle adapts over time based on the locations of the best global and local solutions obtained so far, incorporating a degree of stochastic variation in the updating of the particle positions at each iteration.
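A one-dimensional version of this velocity-update rule can be sketched as follows. The objective function and the coefficient values are illustrative assumptions, not prescribed by the text.

```python
import random

def pso_step(particles, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration on a 1-D search space.  Each particle is a dict
    with position x, velocity v, and personal best pbest; gbest is the
    best position found by the whole swarm (here minimizing f(x) = x**2)."""
    f = lambda x: x * x                                  # assumed objective
    gbest = min((p["pbest"] for p in particles), key=f)
    for p in particles:
        r1, r2 = random.random(), random.random()        # stochastic variation
        p["v"] = (w * p["v"]
                  + c1 * r1 * (p["pbest"] - p["x"])      # pull toward personal best
                  + c2 * r2 * (gbest - p["x"]))          # pull toward global best
        p["x"] += p["v"]
        if f(p["x"]) < f(p["pbest"]):
            p["pbest"] = p["x"]
    return gbest
```

Iterating `pso_step` draws the swarm’s global best toward the optimum of the assumed objective at x = 0.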

Artificial Life Algorithms and Agent-Based Modeling

Biologically inspired algorithms are often used with agent-based models. For example, an agent’s behavior and its capacity to learn from experience or to adapt to changing conditions can be modeled abstractly through the use of genetic algorithms or neural networks. In the case of a GA, a chromosome effectively represents a single agent action (output) given a specific condition or environmental stimulus (input). Behaviors that are acted on and enable the agent to respond better to environmental challenges are reinforced and acquire a greater share of the chromosome pool. Behaviors that fail to improve the organism’s fitness diminish in their representation in the population.
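As an illustration of this use of a GA (a generic sketch, not a model from the text), the loop below evolves bit-string chromosomes toward a target stimulus-response mapping using fitness-proportionate selection, one-point crossover, and mutation. The target mapping and parameter values are assumptions of the example.

```python
import random

def evolve(pop, fitness, generations=50, mut_rate=0.05):
    """Minimal genetic algorithm over bit-string chromosomes, each encoding
    an agent's condition-action rule.  Fitter rules gain representation in
    the population; crossover and mutation generate variants."""
    for _ in range(generations):
        weights = [fitness(c) + 1e-9 for c in pop]        # avoid zero total
        new_pop = []
        for _ in range(len(pop)):
            a, b = random.choices(pop, weights=weights, k=2)  # selection
            cut = random.randrange(1, len(a))                 # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mut_rate else bit
                     for bit in child]                        # mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Fitness: how well a rule reproduces a target stimulus->response mapping.
target = [1, 0, 1, 1, 0, 1, 0, 0]
fitness = lambda c: sum(int(g == t) for g, t in zip(c, target))
```

Rules that respond well to the (here fixed) environment come to dominate the chromosome pool, while poorly performing rules diminish, mirroring the dynamics described above.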

Evolutionary programming can be used to directly evolve programs that represent agent behaviors. For example, Manson (2006) develops a bounded rationality model using evolutionary programming to solve an agent multi-criteria optimization problem.

Artificial neural networks have also been applied to modeling adaptive agent behaviors, in which an agent derives a statistical relationship between the environmental conditions it faces, its history, and its actions, based on feedback on the successes or failures of its actions and the actions of others. For example, an agent may need to develop a strategy for bidding in a market, based on the success of its own and others’ previous bids and outcomes.

Finally, swarm intelligence approaches are agent based in their basic structure, as described above. They can also be used for system optimization through the selection of appropriate parameters for agent behaviors.

ALife Summary

Based on the previous discussion, the essential features of an ALife program can be summarized as follows:
  • Population: A population of organisms or individuals is considered. The population may be diversified, and individuals may vary their characteristics, behaviors, and accumulated resources, in both time and space.

  • Interaction: Interaction requires sensing of the immediate locale, or neighborhood, on the part of an individual. An organism can simply become “aware” of other organisms in its vicinity, or it may have a richer set of interactions with them. The individual also interacts with its (non-agent) environment in its immediate locale. This requirement introduces spatial aspects into the problem, as organisms must negotiate the search for resources through time and space.

  • Sustainment and renewal: Sustainment and renewal require the acquisition of resources. An organism needs to sense, find, ingest, and metabolize resources or nourishment as an energy source for processing into other forms of nutrients. Resources may be provided by the environment, i.e., outside of the agents themselves, or by other agents. The need for sustainment leads to competition for resources among organisms. Competition could also be a precursor to cooperation and more complex emergent social structures if this proves to be a more effective strategy for survival.

  • Self-reproduction and replacement: Organisms reproduce by following instructions at least partially embedded within themselves and interacting with the environment and other agents. Passing on traits to the next generation implies a requirement for trait transmission. Trait transmission requires encoding an organism’s traits in a reduced form, that is, a form that contains less than the total information representing the entire organism. It also requires a process for transforming the organism’s traits into a viable set of possible new traits for a new organism. Mutation and crossover operators enter into such a process. Organisms also leave the population and are replaced by other organisms, possibly with different traits. The organisms can be transformed through changes in their attributes and behaviors, as in, for example, learning or aging. The populations of organisms can be transformed through the introduction of new organisms and replacement, as in evolutionary adaptation.

As we will see in the section that follows, many of the essential aspects of ALife have been incorporated into the development of agent-based models.

ALife in Agent-Based Modeling

This section briefly touches on the ways in which ALife has motivated agent-based modeling. The form of agent-based models, in terms of their structure and appearance, is directly based on early models from the field of ALife. Several application disciplines in agent-based modeling have been spawned and infused by ALife concepts; two are covered here: the application of agent-based modeling to social systems and to biological systems.

Agent-Based Modeling Topologies

Agent-based modeling owes much to artificial life in both form and substance. Modeling a population of heterogeneous agents with a diverse set of characteristics is a hallmark of agent-based modeling. The agent perspective is unique among simulation approaches, which otherwise adopt a process perspective or a state-variable approach.

As we have seen, agents interact with a small set of neighbor agents in a local area. Agent neighborhoods are defined by how agents are connected, the agent interaction topology. Cellular automata represent agent neighborhoods by using a grid in which the agents exist in the cells, one agent per cell, or as the nodes of the lattice of the grid. The cells immediately surrounding an agent comprise the agent’s neighborhood, and the agents that reside in the neighborhood cells comprise the neighbors. Many agent-based models have been based on this cellular automata spatial representation. The transition from a cellular automaton, such as the Game of Life, to an agent-based model is accomplished by allowing agents to be distinct from the cells on which they reside and allowing the agents to move from cell to cell across the grid. Agents move according to the dictates of their behaviors, interacting with other agents that happen to be in their local neighborhoods along the way.
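The transition described above, agents distinct from cells and free to move, can be sketched as follows. The toroidal (wrap-around) grid and the random movement rule are assumptions of the illustration.

```python
import random

class GridAgent:
    """An agent distinct from its cell: it occupies a cell on a toroidal
    grid and can move, unlike a pure cellular automaton state."""
    def __init__(self, x, y):
        self.x, self.y = x, y

def moore_neighbors(agent, agents, size):
    """Other agents in the 8 surrounding cells (the Moore neighborhood),
    with wrap-around at the grid edges."""
    cells = {((agent.x + dx) % size, (agent.y + dy) % size)
             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)}
    return [a for a in agents if a is not agent and (a.x, a.y) in cells]

def random_move(agent, size, rng=random):
    """One movement step: the agent relocates to an adjacent cell."""
    dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
    agent.x, agent.y = (agent.x + dx) % size, (agent.y + dy) % size
```

An agent's behavior would be driven by whatever it finds in `moore_neighbors` after each move, which is exactly the local-interaction pattern described in the text.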

Agent interaction topologies have been extended beyond cellular automata to include networks, either predefined and static, as in the case of autocatalytic chemical networks, or endogenous and dynamic, according to the results of agent interactions that occur in the model. Networks allow an agent’s neighborhood to be defined more generally and flexibly and, in the case of social agents, more accurately describe social agents’ interaction patterns. In addition to cellular automata grids and networks, agent interaction topologies have also been extended across a variety of domains. In summary, agent interaction topologies include:
  • Cellular automata grids (agents are cells or are within cells) or lattices (agents are grid points)

  • Networks, in which agents are vertices and agent relationships are edges

  • Continuous space, in one, two, or three dimensions

  • Aspatial random interactions, in which pairs of agents are randomly selected

  • Geographical information systems (GIS), in which agents move over geographically defined patches, relaxing the one agent per cell restriction.

Social Agent-Based Modeling

Early social agent-based models were based on ALife’s cellular automata approach. In applications of agent-based modeling to social processes, agents represent people or groups of people, and agent relationships represent processes of social interaction (Gilbert and Troitzsch 1999).

Social Agents

Sakoda (1971) formulated one of the first social agent-based models, the checkerboard model, which had some of the key features of a cellular automaton. Following a similar approach, Schelling developed a model of housing segregation in which agents represent homeowners and neighbors, and agent interactions represent agents’ perceptions of their neighbors (Schelling 1971). Schelling studied housing segregation patterns and posed the question of whether it is possible to get highly segregated settlement patterns even if most individuals are, in fact, “color blind.” The Schelling model demonstrated that segregated housing areas can develop spontaneously in the sense that system-level patterns can emerge that are not necessarily implied by or consistent with the objectives of the individual agents (Fig. 6). In the model, agents operated according to a fixed set of rules and were not adaptive.
Fig. 6

Schelling housing segregation model
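A minimal sketch of the Schelling dynamic follows. The grid encoding, the 30 % satisfaction threshold, and the relocation rule are illustrative assumptions rather than Schelling's exact specification.

```python
import random

def schelling_step(grid, threshold=0.3, rng=random):
    """One sweep of a minimal Schelling dynamic.  `grid` maps (x, y) on a
    square toroidal lattice to a group label (0 or 1) or None for vacant.
    An agent is unhappy when less than `threshold` of its occupied Moore
    neighbors share its group; unhappy agents move to a random vacancy."""
    size = max(x for x, _ in grid) + 1          # assumes a square grid
    moved = 0
    for (x, y), group in list(grid.items()):    # snapshot of this sweep
        if group is None:
            continue
        nbrs = [grid.get(((x + dx) % size, (y + dy) % size))
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]
        occupied = [g for g in nbrs if g is not None]
        if occupied and sum(g == group for g in occupied) / len(occupied) < threshold:
            vacancies = [cell for cell, g in grid.items() if g is None]
            if vacancies:
                dest = rng.choice(vacancies)
                grid[dest], grid[(x, y)] = group, None
                moved += 1
    return moved
```

Repeated sweeps drive an initially mixed grid toward segregated clusters even under this mild preference, the emergent system-level pattern the model is known for.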

Identifying the social interaction mechanisms for how cooperative behavior emerges among individuals and groups has been addressed using agent-based modeling and evolutionary game theory. Evolutionary game theory accounts for how the repeated interactions of players in a game-theoretic framework affect the development and evolution of the players’ strategies. Axelrod showed, using a cellular automata approach in which agents on the grid employed a variety of different strategies, that a simple tit-for-tat strategy of reciprocal behavior toward individuals is enough to establish sustainable cooperative behavior (Axelrod 1984, 1997). In addition, Axelrod investigated strategies that were self-sustaining and robust in that they reduced the possibility of invasion by agents having other strategies.
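The iterated game underlying this result can be sketched with the standard prisoner's-dilemma payoffs; the round count and the always-defect opponent are assumptions of the illustration.

```python
# Prisoner's dilemma payoffs: (my move, their move) -> my score,
# with "C" = cooperate, "D" = defect (standard T=5, R=3, P=1, S=0).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opp_history):
    """Cooperate first, then echo the opponent's previous move."""
    return "C" if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Iterated game: each strategy sees the opponent's past moves."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Two tit-for-tat players sustain mutual cooperation (30 points each over ten rounds), while tit-for-tat limits an always-defector's exploitation to the single opening round, which is why reciprocity fares so well in repeated play.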

Epstein and Axtell introduced the notion of an external environment that agents interact with in addition to other agents. In their groundbreaking Sugarscape model of artificial societies, agents interacted with their environment depending on their location in the grid (Epstein and Axtell 1996). This allowed agents to access environmental variables, extract resources, etc., based on the location. In numerous computational experiments, Sugarscape agents emerged with a variety of characteristics and behaviors, highly suggestive of a realistic, although rudimentary and abstract, society (Fig. 7). They observed emergent processes that they interpreted as death, disease, trade, wealth, sex and reproduction, culture, conflict, and war, as well as externalities such as pollution. As agents interacted with their neighbors as they moved around the grid, the interactions resulted in a contact network, that is, a network consisting of nodes and links. The nodes are agents, and the links indicate the agents that have been neighbors at some point in the course of their movements over the grid. Contact networks were the basis for studying contagion and epidemics in the Sugarscape model. Understanding the agent rules that govern how networks are structured and grow, how quickly information is communicated through networks, and the kinds of relationships that networks embody is an important aspect of modeling agents.
Fig. 7

Sugarscape artificial society simulation in the Repast agent-based modeling toolkit

Culture and Generative Social Science

Dawkins, who has written extensively on aspects of Darwinian evolution, coined the term meme as the smallest element of culture that is transmissible between individuals, similar to the notion of the gene as being the primary unit of transmitting genetic information (Dawkins 1989). Several social agent-based models are based on a meme representation of culture as shared or collective agent attributes.

In the broadest terms, social agent-based simulation is concerned with social interaction and social processes. Emergence enters into social simulation through generative social science whose goal is to model social processes as emergent processes and their emergence as the result of social interactions. Epstein has argued that social processes are not fully understood unless one is able to theorize how they work at a deep level and have social processes emerge as part of a computational model (Epstein 2007). More recent work has treated culture as a fluid and dynamic process subject to interpretation of individual agents, more complex than the genotype/phenotype framework would suggest.

ALife and Biology

ALife research has motivated many agent-based computational models of biological systems at all scales, ranging from the cellular level, or even the subcellular molecular level, as the basic unit of agency, to complex organisms embedded in larger structures such as food webs or complex ecosystems.

From Cellular Automata to Cells

Cellular automata are a natural fit for modeling cellular systems (Alber et al. 2003; Ermentrout and Edelstein-Keshet 1993). One approach uses the cellular automata grid and cells to model structures of stationary cells comprising a tissue matrix. Each cell is a tissue agent. Mobile cells consisting of pathogens and antibodies are also modeled as agents. Mobile agents diffuse through tissue and interact with tissue and other colocated mobile cells. This approach is the basis for agent-based models of the immune system. Celada and Seiden (1992) used bit strings to model the cell receptors in a cellular automaton model of the immune system called IMMSIM. This approach was extended to a more general agent-based model and implemented to maximize the number of cells that could be modeled in the CIMMSIM and ParImm systems (Bernaschi and Castiglione 2001). The Basic Immune Simulator uses a general agent-based framework (the Repast agent-based modeling toolkit) to model the interactions between the cells of the innate and adaptive immune system (Folcik et al. 2007). These approaches for modeling the immune system have inspired several agent-based models of intrusion detection for computer networks (see, e.g., Azzedine et al. 2007) and have found use in modeling the development and spread of cancer (Preziosi 2003).

At the more macroscopic level, agent-based epidemic models have been developed using network topologies. These models include people and some representation of pathogens as individual agents for natural (Bobashev et al. 2007) and potentially man-made (Carley et al. 2006) epidemics.

Modeling bacteria and their associated behaviors in their natural environments is another direction of agent-based modeling. Expanding beyond the basic cellular automata structure into continuous space and network topologies, Emonet et al. (2005) developed AgentCell, a multi-scale agent-based model of E. coli bacteria motility (Fig. 8). In this multi-scale agent-based simulation, molecules within a cell are modeled as individual agents. The molecular reactions comprising the signal transduction network for chemotaxis are modeled using an embedded stochastic simulator, StochSim (Le Novere and Shimizu 2001). This multi-scale approach allows the motile (macroscopic) behavior of colonies of bacteria to be modeled as a direct result of the modeled microlevel processes of protein production within the cells, which are based on individual molecular interactions.
Fig. 8

AgentCell multi-scale agent-based model of bacterial chemotaxis

Artificial Ecologies

Early models of ecosystems used approaches adapted from physical modeling, especially models of idealized gases based on statistical mechanics. More recently, individual-based models have been developed to represent the full range of individual diversity by explicitly modeling individual attributes or behaviors and aggregating across individuals for an entire population (DeAngelis and Gross 1992). Agent-based approaches model a diverse set of agents and their interactions based on their relationships, incorporating adaptive behaviors as appropriate. For example, food webs represent the complex, hierarchical network of agent relationships in local ecosystems (Peacor et al. 2006). Agents are individuals or species representatives. Adaptation and learning for agents in such food webs can be modeled to explore diversity, relative population sizes, and resiliency to environmental insult.

Adaptation and Learning in Agent-Based Models

Biologists consider adaptation to be an essential part of the process of evolutionary change. Adaptation occurs at two levels: the individual level and the population level. In parallel with these notions, agents in an ABM adapt by changing their individual behaviors or by changing their proportional representation in the population. Agents adapt their behaviors at the individual level through learning from experience in their modeled environment.

With respect to agent-based modeling, theories of learning by individual agents or collectives of agents, as well as algorithms for how to model learning, become important. Machine learning is a field consisting of algorithms for recognizing patterns in data (such as data mining) through techniques such as supervised learning, unsupervised learning, and reinforcement learning (Alpaydın 2004; Bishop 2007). Genetic algorithms (Goldberg 1989) and related techniques such as learning classifier systems (Holland et al. 2000) are commonly used to represent agent learning in agent-based models. In ABM applications, agents learn through interactions with the simulated environment in which they are embedded as the simulation proceeds through time, and agents modify their behaviors accordingly.
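As one concrete example of such learning (a generic reinforcement-learning update assumed for illustration, not a specific system from the text), an agent can adapt a behavioral rule from experience with one-step ε-greedy Q-learning:

```python
import random

def q_learning_step(Q, state, actions, reward_fn, next_state_fn,
                    alpha=0.5, gamma=0.9, epsilon=0.1, rng=random):
    """One epsilon-greedy Q-learning update: the agent usually exploits its
    current value estimates but occasionally explores a random action, then
    nudges Q[(state, action)] toward reward + gamma * (best next value)."""
    if rng.random() < epsilon:
        action = rng.choice(actions)                                 # explore
    else:
        action = max(actions, key=lambda a: Q.get((state, a), 0.0))  # exploit
    reward = reward_fn(state, action)
    nxt = next_state_fn(state, action)
    best_next = max(Q.get((nxt, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return nxt
```

Run inside a simulation loop, the value table Q comes to favor actions that the (simulated) environment rewards, so the agent's behavior changes as the run proceeds.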

Agents may also adapt collectively at the population level. Those agents having behavioral rules better suited to their environments survive and thrive, and those agents not so well suited are gradually eliminated from the population.

Future Directions

Agent-based modeling continues to be inspired by ALife – in the fundamental questions it is trying to answer, in the algorithms that it employs to model agent behaviors and solve agent-based models, and in the computational architectures that are employed to implement agent-based models. The future of the fields of both ALife and ABM will continue to be intertwined in essential ways in the coming years.

Computational advances will continue at an ever-increasing pace, opening new vistas for computation and expanding the scale of models that are possible. These advances will take several forms, including advances in computer hardware such as new chip designs, multi-core processors, and advanced integrated hardware architectures. Software that takes advantage of these designs, in particular computational algorithms and modeling techniques, will continue to increase the scale of applications and allow more features to be included in agent-based models as well as ALife applications. These will be opportunities for advancing applications of ABM to ALife in both scientific research and policy analysis.

Real-world optimization problems routinely solved by business and industry will continue to be solved by ALife-inspired algorithms. The use of ALife-inspired agent-based algorithms for solving optimization problems will become more widespread because of their natural implementation and ability to handle ill-defined problems.

Emergence is a key theme of ALife. ABM offers the capability to model the emergence of order in a variety of complex systems and complex adaptive systems. Inspired by ALife, identifying the fundamental mechanisms responsible for higher-order emergence and exploring these with agent-based modeling will be an important and promising research area.

Advancing social sciences beyond the genotype/phenotype framework to address the generative nature of social systems in their full complexity is a requirement for advancing computational social models. Recent work has treated culture as a fluid and dynamic process subject to interpretation of individual agents, more complex in many ways than that provided by the genotype/phenotype framework.

Agent-based modeling will continue to be the avenue for exploring new constructs in ALife. If true artificial life is ever developed in silico, it will most likely be done using the methods and tools of agent-based modeling.


Primary Literature

  1. Adami C (1998) Introduction to artificial life. TELOS, Santa ClarazbMATHCrossRefGoogle Scholar
  2. Alber MS, Kiskowski MA, Glazier JA, Jiang Y (2003) On cellular automaton approaches to modeling biological cells. In: Rosenthal J, Gilliam DS (eds) Mathematical systems theory in biology, communication, and finance, IMA volume. Springer, New York, pp 1–39CrossRefGoogle Scholar
  3. Alpaydın E (2004) Introduction to machine learning. MIT Press, CambridgeGoogle Scholar
  4. Axelrod R (1984) The evolution of cooperation. Basic Books, New YorkGoogle Scholar
  5. Axelrod R (1997) The complexity of cooperation: agent-based models of competition and collaboration. Princeton University Press, PrincetonGoogle Scholar
  6. Azzedine B, Renato BM, Kathia RLJ, Joao Bosco MS, Mirela SMAN (2007) An agent based and biological inspired real-time intrusion detection and security model for computer network operations. Comput Commun 30(13):2649–2660CrossRefGoogle Scholar
  7. Back T (1996) Evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms. Oxford University Press, New YorkGoogle Scholar
  8. Berlekamp ER, Conway JH, Guy RK (2003) Winning ways for your mathematical plays, 2nd edn. AK Peters, NatickGoogle Scholar
  9. Bernaschi M, Castiglione F (2001) Design and implementation of an immune system simulator. Comput Biol Med 31(5):303–331CrossRefGoogle Scholar
  10. Bishop CM (2007) Pattern recognition and machine learning. Springer, New YorkGoogle Scholar
  11. Bobashev GV, Goedecke DM, Yu F, Epstein JM (2007) A hybrid epidemic model: combining the advantages of agent-based and equation-based approaches. In: Henderson SG, Biller B, Hsieh M-H, Shortle J, Tew JD, Barton RR (eds) Proceeding 2007 winter simulation conference, Washington, pp 1532–1537Google Scholar
  12. Bonabeau E (1997) From classical models of morphogenesis to agent-based models of pattern formation. Artif Life 3:191–211CrossRefGoogle Scholar
  13. Bonabeau E, Dorigo M, Theraulaz G (1999) Swarm intelligence: from natural to artificial systems. Oxford University Press, New YorkzbMATHGoogle Scholar
  14. Carley KM, Fridsma DB, Casman E, Yahja A, Altman N, Chen LC, Kaminsky B, Nave D (2006) Biowar: scalable agent-based model of bioattacks. IEEE Trans Syst Man Cybern Part A: Syst Hum 36(2):252–265CrossRefGoogle Scholar
  15. Celada F, Seiden PE (1992) A computer model of cellular interactions in the immune system. Immunol Today 13(2):56–62CrossRefGoogle Scholar
  16. Clerc M (2006) Particle swarm optimization. ISTE Publishing, LondonzbMATHCrossRefGoogle Scholar
  17. Dawkins R (1989) The selfish gene, 2nd edn. Oxford University Press, OxfordGoogle Scholar
  18. DeAngelis DL, Gross LJ (eds) (1992) Individual-based models and approaches in ecology: populations, communities and ecosystems. Proceedings of a symposium/workshop, Knoxville, 16–19 May 1990. Chapman & Hall, New York. ISBN 0-412-03171-XGoogle Scholar
  19. Dorigo M, Stützle T (2004) Ant colony optimization. MIT Press, CambridgezbMATHCrossRefGoogle Scholar
  20. Dreyfus HL (1979) What computers can’t do: the limits of artificial intelligence. Harper & Row, New YorkGoogle Scholar
  21. Eiben AE, Smith JE (2007) Introduction to evolutionary computing, 2nd edn. Springer, New YorkGoogle Scholar
  22. Eigen M, Schuster P (1979) The hypercycle: a principle of natural self-organization. Springer, BerlinCrossRefGoogle Scholar
  23. Emonet T, Macal CM, North MJ, Wickersham CE, Cluzel P (2005) AgentCell: a digital single-cell assay for bacterial chemotaxis. Bioinformatics 21(11):2714–2721CrossRefGoogle Scholar
  24. Engelbrecht AP (2006) Fundamentals of computational swarm intelligence. Wiley, HobokenGoogle Scholar
  25. Epstein JM (2007) Generative social science: studies in agent-based computational modeling. Princeton University Press, PrincetonGoogle Scholar
  26. Epstein JM, Axtell R (1996) Growing artificial societies: social science from the bottom up. MIT Press, CambridgeGoogle Scholar
  27. Ermentrout GB, Edelstein-Keshet L (1993) Cellular automata approaches to biological modeling. J Theor Biol 160(1):97–133CrossRefGoogle Scholar
  28. Fogel LJ, Owens AJ, Walsh MJ (1966) Artificial intelligence through simulated evolution. Wiley, HobokenzbMATHGoogle Scholar
  29. Folcik VA, An GC, Orosz CG (2007) The basic immune simulator: an agent-based model to study the interactions between innate and adaptive immunity. Theor Biol Med Model 4(39):1–18. http://www.tbiomed.com/content/4/1/39
  30. Fontana W (1992) Algorithmic chemistry. In: Langton CG, Taylor C, Farmer JD, Rasmussen S (eds) Artificial life II: proceedings of the workshop on artificial life, Santa Fe, Feb 1990, Santa Fe Institute studies in the sciences of the complexity, vol X. Addison-Wesley, Reading, pp 159–209Google Scholar
  31. Gardner M (1970) The fantastic combinations of John Conway’s new solitaire game life. Sci Am 223:120–123CrossRefGoogle Scholar
  32. Gilbert N (2002) Varieties of emergence. In: Macal C, Sallach D (eds) Proceedings of the agent 2002 conference on social agents: ecology, exchange and evolution, Chicago, 11–12 Oct 2002, pp 1–11. Available on CD and at www.agent2007.anl.gov
  33. Gilbert N, Troitzsch KG (1999) Simulation for the social scientist. Open University Press, BuckinghamGoogle Scholar
  34. Goldberg DE (1989) Genetic algorithms in search, optimization, and machine learning. Addison-Wesley, ReadingzbMATHGoogle Scholar
  35. Goldberg DE (1994) Genetic and evolutionary algorithms come of age. Commun ACM 37(3):113–119CrossRefGoogle Scholar
  36. Holland JH (1975) Adaptation in natural and artificial systems. University of Michigan, Ann ArborGoogle Scholar
  37. Holland J (1995) Hidden order: how adaptation builds complexity. Addison-Wesley, ReadingGoogle Scholar
  38. Holland JH, Booker LB, Colombetti M, Dorigo M, Goldberg DE, Forrest S, Riolo RL, Smith RE, Lanzi PL, Stolzmann W, Wilson SW (2000) What is a learning classifier system? In: Lanzi PL, Stolzmann W, Wilson SW (eds) Learning classifier systems, from foundations to applications. Springer, London, pp 3–32CrossRefGoogle Scholar
  39. Kauffman SA (1993) The origins of order: self-organization and selection in evolution. Oxford University Press, OxfordGoogle Scholar
  40. Koza JR (1992) Genetic programming: on the programming of computers by means of natural selection. MIT Press, Cambridge, 840 ppzbMATHGoogle Scholar
  41. Langton CG (1984) Self-reproduction in cellular automata. Physica D 10:135–144CrossRefADSGoogle Scholar
  42. Langton CG (1986) Studying artificial life with cellular automata. Physica D 22:120–149MathSciNetCrossRefADSGoogle Scholar
  43. Langton CG (1989a) Preface. In: Langton CG (ed) Artificial life: proceedings of an interdisciplinary workshop on the synthesis and simulation of living systems, Los Alamos, Sept 1987, Addison-Wesley, Reading, pp xv–xxviGoogle Scholar
  44. Langton CG (1989b) Artificial life. In: Langton CG (ed) Artificial life: the proceedings of an interdisciplinary workshop on the synthesis and simulation of living systems, Los Alamos, Sept 1987, Santa Fe Institute studies in the sciences of complexity, vol VI. Addison-Wesley, Reading, pp 1–47Google Scholar
  45. Langton CG (1992) Life at the edge of chaos. In: Langton CG, Taylor C, Farmer JD, Rasmussen S (eds) Artificial life II: proceedings of the workshop on artificial life, Santa Fe, Feb 1990, Santa Fe Institute studies in the sciences of the complexity, vol X. Addison-Wesley, Reading, pp 41–91Google Scholar
  46. Le Novere N, Shimizu TS (2001) Stochsim: modelling of stochastic biomolecular processes. Bioinformatics 17(6):575–576CrossRefGoogle Scholar
  47. Lindenmeyer A (1968) Mathematical models for cellular interaction in development. J Theor Biol 18:280–315CrossRefGoogle Scholar
  48. Lucas JR (1961) Minds, machines and godel. Philosophy 36(137):112–127CrossRefGoogle Scholar
  49. Manson SM (2006) Bounded rationality in agent-based models: experiments with evolutionary programs. Int J Geogr Inf Sci 20(9):991–1012CrossRefGoogle Scholar
  50. Mehrotra K, Mohan CK, Ranka S (1996) Elements of artificial neural networks. MIT Press, CambridgeGoogle Scholar
  51. Mitchell M, Forrest S (1994) Genetic algorithms and artificial life. Artif Life 1(3):267–289CrossRefGoogle Scholar
  52. Ofria C, Wilke CO (2004) Avida: a software platform for research in computational evolutionary biology. Artif Life 10(2):191–229CrossRefGoogle Scholar
  53. Olariu S, Zomaya AY (eds) (2006) Handbook of bioinspired algorithms and applications. Chapman, Boca Raton, p 679zbMATHGoogle Scholar
  54. Padgett JF, Lee D, Collier N (2003) Economic production as chemistry. Ind Corp Chang 12(4):843–877CrossRefGoogle Scholar
  55. Peacor SD, Riolo RL, Pascual M (2006) Plasticity and species coexistence: modeling food webs as complex adaptive systems. In: Pascual M, Dunne JA (eds) Ecological networks: linking structure to dynamics in food webs. Oxford University Press, New York, pp 245–270Google Scholar
  56. Penrose R (1989) The emperor’s new mind: concerning computers, minds, and the laws of physics. Oxford University Press, OxfordGoogle Scholar
  57. Poundstone W (1985) The recursive universe. Contemporary Books, Chicago, 252 ppGoogle Scholar
  58. Preziosi L (ed) (2003) Cancer modelling and simulation. Chapman, Boca RatonzbMATHGoogle Scholar
  59. Ray TS (1991) An approach to the synthesis of life (tierra simulator). In: Langton CG, Taylor C, Farmer JD, Rasmussen S (eds) Artificial life Ii: proceedings of the workshop on artificial life. Wesley, Redwood City, pp 371–408Google Scholar
  60. Rechenberg I (1973) Evolutionsstrategie: optimierung Technischer Systeme Nach Prinzipien Der Biologischen evolution. Frommann-Holzboog, StuttgartGoogle Scholar
  61. Sakoda JM (1971) The checkerboard model of social interaction. J Math Soc 1:119–132CrossRefGoogle Scholar
  62. Schelling TC (1971) Dynamic models of segregation. J Math Soc 1:143–186CrossRefGoogle Scholar
  63. Searle JR (1990) Is the brain a digital computer? Presidential Address to the American Philosophical AssociationGoogle Scholar
  64. Taub AH (ed) (1961) John Von Neumann: collected works. vol V: Design of computers, theory of automata and numerical analysis (Delivered at the Hixon Symposium, Pasadena, Sept 1948). Pergamon Press, OxfordGoogle Scholar
  65. Turing AM (1936) On computable numbers, with an application to the Entscheidungsproblem. Proc Lond Math Soc 2(42):230–265
  66. Turing AM (1952) The chemical basis of morphogenesis. Philos Trans R Soc Lond B 237:37–72
  67. von Neumann J (1966) Theory of self-reproducing automata (Burks AW, ed). University of Illinois Press, Champaign
  68. Wilke CO, Adami C (2002) The biology of digital organisms. Trends Ecol Evol 17(11):528–532
  69. Wilke CO, Chow SS (2006) Exploring the evolution of ecosystems with digital organisms. In: Pascual M, Dunne JA (eds) Ecological networks: linking structure to dynamics in food webs. Oxford University Press, New York, pp 271–286
  70. Wolfram S (1984) Universality and complexity in cellular automata. Physica D 10:1–35
  71. Wolfram S (1999) The Mathematica book, 4th edn. Wolfram Media/Cambridge University Press, Champaign
  72. Wolfram S (2002) A new kind of science. Wolfram Media, Champaign

Books and Reviews

  1. Artificial Life (journal) web page (2008) http://www.mitpressjournals.org/loi/artl. Accessed 8 Mar 2008
  2. Banks ER (1971) Information processing and transmission in cellular automata. PhD dissertation, Massachusetts Institute of Technology
  3. Batty M (2007) Cities and complexity: understanding cities with cellular automata, agent-based models, and fractals. MIT Press, Cambridge
  4. Bedau MA (2002) The scientific and philosophical scope of artificial life. Leonardo 35:395–400
  5. Bedau MA (2003) Artificial life: organization, adaptation and complexity from the bottom up. Trends Cogn Sci 7(11):505–512
  6. Copeland BJ (2004) The essential Turing. Oxford University Press, Oxford, 613 pp
  7. Ganguly N, Sikdar BK, Deutsch A, Canright G, Chaudhuri PP (2008) A survey on cellular automata. www.cs.unibo.it/bison/publications/CAsurvey.pdf
  8. Griffeath D, Moore C (eds) (2003) New constructions in cellular automata, Santa Fe Institute studies in the sciences of complexity proceedings. Oxford University Press, New York, 360 pp
  9. Gutowitz H (ed) (1991) Cellular automata: theory and experiment. Special issue of Physica D, 499 pp
  10. Hraber PT, Jones T, Forrest S (1997) The ecology of Echo. Artif Life 3:165–190
  11. International Society for Artificial Life web page (2008) www.alife.org. Accessed 8 Mar 2008
  12. Jacob C (2001) Illustrating evolutionary computation with mathematica. Academic, San Diego, 578 ppGoogle Scholar
  13. Fu MC, Glover FW, April J (2005) Simulation optimization: a review, new developments, and applications. In: Proceedings of the 37th conference on Winter simulation, Orlando
  14. Miller JH, Page SE (2007) Complex adaptive systems: an introduction to computational models of social life. Princeton University Press, Princeton
  15. North MJ, Macal CM (2007) Managing business complexity: discovering strategic solutions with agent-based modeling and simulation. Oxford University Press, New York
  16. Pascual M, Dunne JA (eds) (2006) Ecological networks: linking structure to dynamics in food webs, Santa Fe Institute studies on the sciences of complexity. Oxford University Press, New York
  17. Simon H (2001) The sciences of the artificial. MIT Press, Cambridge
  18. Sims K (1991) Artificial evolution for computer graphics. ACM SIGGRAPH ’91, Las Vegas, July 1991, pp 319–328
  19. Sims K (1994) Evolving 3D morphology and behavior by competition. Artif Life IV:28–39
  20. Terzopoulos D (1999) Artificial life for computer graphics. Commun ACM 42(8):33–42
  21. Toffoli T, Margolus N (1987) Cellular automata machines: a new environment for modeling. MIT Press, Cambridge, 200 pp
  22. Tu X, Terzopoulos D (1994) Artificial fishes: physics, locomotion, perception, behavior. In: Proceedings of SIGGRAPH ’94, 24–29 July 1994, Orlando, pp 43–50
  23. Weisbuch G (1991) Complex systems dynamics: an introduction to automata networks, translated from French by Ryckebusch S. Addison-Wesley, Redwood City
  24. Wiener N (1948) Cybernetics, or control and communication in the animal and the machine. Wiley, New York
  25. Wooldridge M (2000) Reasoning about rational agents. MIT Press, Cambridge

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. Center for Complex Adaptive Agent Systems Simulation (CAS2), Decision and Information Sciences Division, Argonne National Laboratory, Argonne, USA