Agent-Based Modeling and Artificial Life
Keywords: Cellular Automaton; Complex Adaptive System; Artificial Life; Pheromone Trail
Definition of the Subject
Agent-based modeling began as the computational arm of artificial life some 20 years ago. Artificial life is concerned with the emergence of order in nature. How do systems self-organize and spontaneously achieve a higher-ordered state? Agent-based modeling, then, is concerned with exploring and understanding the processes that lead to the emergence of order through computational means. The essential features of artificial life models are translated into computational algorithms through agent-based modeling. With its historical roots in artificial life, agent-based modeling has become a distinctive form of modeling and simulation. Agent-based modeling is a bottom-up approach to modeling complex systems by explicitly representing the behaviors of large numbers of agents and the processes by which they interact. These essential features are all that is needed to produce at least rudimentary forms of emergent behavior at the systems level. To understand the current state of agent-based modeling and where the field aspires to be in the future, it is necessary to understand the origins of agent-based modeling in artificial life.
The field of artificial life, or “ALife,” is intimately connected to agent-based modeling, or “ABM.” Although one can easily enumerate some of life’s distinctive properties, such as reproduction, respiration, adaptation, emergence, etc., a precise definition of life remains elusive.
Artificial life had its inception as a coherent and sustainable field of investigation at a workshop in the late 1980s (Langton 1989a). This workshop drew together specialists from diverse fields who had been working on related problems in different guises, using different vocabularies suited to their fields.
At about the same time, the introduction of the personal computer suddenly made computing accessible, convenient, inexpensive, and compelling as an experimental tool. The future seemed to have almost unlimited possibilities for the development of ALife computer programs to explore life and its possibilities. Thus, several ALife software programs emerged that sought to encapsulate the essential elements of life through incorporation of ALife-related algorithms into easily usable software packages that could be widely distributed. Computational programs for modeling populations of digital organisms, such as Tierra, Avida, and Echo, were developed along with more general purpose agent-based simulators such as Swarm.
Artificial systems which exhibit lifelike behaviors are worthy of investigation on their own right, whether or not we think that the processes that they mimic have played a role in the development or mechanics of life as we know it to be. Such systems can help us expand our understanding of life as it could be. (p. xvi in Langton 1989a)
The field of ALife addresses lifelike properties of systems at an abstract level by focusing on the information content of such systems independent of the medium in which they exist, whether it be biological, chemical, physical, or in silico. This means that computation, modeling, and simulation play a central role in ALife investigations.
The relationship between ALife and ABM is complex. A case can be made that the emergence of ALife as a field was essential to the creation of agent-based modeling. Computational tools for developing sophisticated models of digital organisms and general-purpose artificial life simulators were both required by ALife and became feasible in the 1980s. Likewise, a case can be made that the possibility of creating agent-based models was essential to making ALife a promising and productive endeavor. ABM made it possible to understand the logical outcomes and implications of ALife models and lifelike processes. Traditional analytical means, although valuable in establishing baseline information, were limited in their ability to include essential features of ALife. Many threads of ALife are still intertwined with developments in ABM and vice versa. Agent-based models demonstrate the emergence of lifelike features using ALife frameworks; ALife algorithms are widely used in agent-based models to represent agent behaviors. These threads are explored in this entry. In ALife terminology, one could say that ALife and ABM have coevolved to their present states. In all likelihood, they will continue to do so.
This entry covers in a necessarily brief and perhaps superficial, but broad, way these relationships between ABM and ALife and extrapolates to future possibilities. This entry is organized as follows. Section “Artificial Life” introduces artificial life, its essential elements, and its relationship to computing and agent-based modeling. Section “ALife in Agent-Based Modeling” describes several examples of ABM applications spanning many scales. Section “Future Directions” concludes with future directions for ABM and ALife. A bibliography is included for further reading.
Essential elements of life and artificial life
Origins of life and self-organization
Replication and development processes
Learning and evolution
Computation of living systems
Simulation systems for studying ALife
Each of these topics has threads leading into agent-based modeling.
The Essence of ALife
Lifelike behavior on the part of man-made systems
Semiautonomous entities whose local interactions with one another are governed by a set of simple rules
Populations, rather than individuals
Simple rather than complex specifications
Local rather than global control
Bottom-up rather than top-down modeling
Emergent rather than prespecified behaviors
Langton observes that complex high-level dynamics and structures often emerge (in living and artificial systems), developing over time out of the local interactions among low-level primitives. Agent-based modeling has grown up around the need to model the essentials of ALife.
Self-Replication and Cellular Automata
A set of simple state transition rules determines the value of each cell based on the cell's state and the states of neighboring cells. Every cell is updated at each time step according to the transition rules. Each cell is identical in terms of its update rules. Cells differ only in their initial states. A CA is deterministic in the sense that the same state for a cell and its set of neighbors always results in the same updated state for the cell. Typically, CAs are set up with periodic boundary conditions, meaning that the cells on one edge of the grid boundary are the neighbors of the cells on the opposite edge. The space of the CA grid thus forms the surface of a toroid, or donut shape, so there is no boundary per se. It is straightforward to extend the notion of cellular automata to two, three, or more dimensions.
Von Neumann solved the self-replication problem by developing a cellular automaton in which each cell had 29 possible states and five neighbors (including the updated cell itself). In the von Neumann neighborhood, neighbor cells are in the north, south, east, and west directions from the updated cell.
The Game of Life
A cell will be On in the next generation if exactly three of its eight neighboring cells are currently On.
A cell will retain its current state if exactly two of its neighbors are On.
A cell will be Off otherwise.
Initially, a small set of On cells is randomly distributed over the grid. The three rules are then applied repeatedly to all cells in the grid.
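The three rules translate almost directly into code. The sketch below is a minimal Python implementation of the Game of Life on a toroidal grid (periodic boundaries, as described earlier); the grid size and the "blinker" test pattern are illustrative choices, not part of Conway's definition.

```python
def step(grid):
    """Apply the three Game of Life rules to every cell simultaneously.
    Periodic (toroidal) boundaries: edges wrap via modular indexing."""
    n = len(grid)
    new = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Count the On cells among the eight surrounding neighbors.
            on = sum(grid[(i + di) % n][(j + dj) % n]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0))
            if on == 3:
                new[i][j] = True          # Rule 1: exactly three neighbors On
            elif on == 2:
                new[i][j] = grid[i][j]    # Rule 2: retain current state
            # Rule 3: Off otherwise (cells default to False)
    return new

# A "blinker": three On cells in a row oscillate with period 2.
grid = [[False] * 5 for _ in range(5)]
for j in (1, 2, 3):
    grid[2][j] = True
after_two_steps = step(step(grid))
print(after_two_steps == grid)  # prints True: the blinker returns to its start
```

Because all cells are updated from the same snapshot of the grid, the update is synchronous, matching the CA formalism described above.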
As simple as the state transition rules are, by using only local information, structures of arbitrarily high complexity can emerge in a CA.
The specific patterns that emerge are extremely sensitive to the specific rules used. For example, changing Rule 1 above to “A cell will be On in the next generation if exactly four of its eight neighboring cells are currently On” results in the development of completely different patterns.
The Game of Life provides insights into the role of information in fundamental life processes.
Cellular Automata Classes
Class I: homogeneous state
Class II: simple stable or periodic structure
Class III: chaotic (non-repeating) pattern
Class IV: complex patterns of localized structures
The most interesting of these is Class IV cellular automata, in which very complex patterns of non-repeating localized structures emerge that are often long lived. Wolfram showed that these Class IV structures were also complex enough to support universal computation (Wolfram 2002). Langton (1992) coined the term “life at the edge of chaos” to describe the idea that Class IV systems are situated in a thin region between Class II and Class III systems. Agent-based models often yield Class I, Class II, and Class III behaviors.
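Wolfram's classification was developed by surveying one-dimensional "elementary" cellular automata, which are compact enough to sketch in a few lines. The rule-number encoding below follows Wolfram's convention (the rule number's bits give the next state for each of the eight neighborhood patterns); rule 110 is the canonical Class IV example.

```python
def eca_step(cells, rule):
    """One synchronous update of a 1-D elementary cellular automaton.
    `rule` is Wolfram's 0-255 rule number; boundaries are periodic."""
    n = len(cells)
    # Each cell's next state is the bit of `rule` indexed by the
    # 3-bit pattern (left, center, right).
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Rule 110 (Class IV, shown by Wolfram to support universal computation),
# started from a single On cell:
cells = [0] * 31
cells[15] = 1
for _ in range(5):
    cells = eca_step(cells, 110)
    print("".join(".#"[c] for c in cells))
```

Trying rule numbers such as 250 (Class II) or 30 (Class III) in place of 110 reproduces the qualitative differences between the classes.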
Other experiments with CAs investigated the simplest representations that could replicate themselves and produce emergent structures. Langton’s loop is a self-replicating two-dimensional cellular automaton, much simpler than von Neumann’s (Langton 1984). Although not complex enough to be a universal computer, Langton’s loop was the simplest known structure that could reproduce itself. Langton’s ant is a two-dimensional CA with a simple set of rules, but complicated emergent behavior. Following a simple set of rules for moving from cell to cell, a simulated ant displays unexpectedly complex behavior. After an initial period of chaotic movements in the vicinity of its initial location, the ant begins to build a recurrent pattern of regular structures that repeats itself indefinitely (Langton 1986). Langton’s ant has behaviors complex enough to be a universal computer.
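A minimal sketch of Langton's ant, assuming the standard rule set (turn right on a white cell, left on a black cell, flip the cell's color, then move forward one cell); the step count shown is illustrative.

```python
def langtons_ant(steps):
    """Run Langton's ant on an unbounded plane and return the black cells."""
    black = set()                 # coordinates of black cells; all start white
    x, y, dx, dy = 0, 0, 0, -1    # ant position and heading
    for _ in range(steps):
        if (x, y) in black:
            black.remove((x, y))
            dx, dy = -dy, dx      # black cell: turn one way, flip to white
        else:
            black.add((x, y))
            dx, dy = dy, -dx      # white cell: turn the other way, flip to black
        x, y = x + dx, y + dy     # move forward
    return black

# After roughly 10,000 chaotic steps the ant settles into its "highway,"
# a 104-step pattern that repeats indefinitely.
print(len(langtons_ant(11000)))
```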
Morphogenesis is the developmental process by which the phenotype develops in accord with the genotype, through interactions with and resources obtained from its environment. In a famous paper, Turing (1952) modeled the dynamics of morphogenesis and, more generally, the problem of how patterns self-organize spontaneously in nature. Turing used differential equations to model a simple set of reaction-diffusion chemical reactions. Turing demonstrated that only a few assumptions were necessary to bring about the emergence of wave patterns and gradients of chemical concentration, suggestive of morphological patterns that commonly occur in nature. Reaction-diffusion systems are characterized by the simultaneous processes of attraction and repulsion and are the basis for the agent behavioral rules (attraction and repulsion) in many social agent-based models.
More recently, Bonabeau extended Turing’s treatment of morphogenesis to a theory of pattern formation based on agent-based modeling. Bonabeau (1997) states the reason for relying on ABM: “because pattern-forming systems based on agents are (relatively) more easily amenable to experimental observations.”
One approach to building systems from a genotype specification is based on the methodology of recursively generated objects. Such recursive systems are compact in their specification, and their repeated application can result in complex structures, as demonstrated by cellular automata.
Recursive systems are logic systems in which strings of symbols are recursively rewritten according to a minimal set of instructions. Recursive systems, or term replacement systems as they are also called, can produce complex structures. Examples of recursive systems include cellular automata, as described above, and Lindenmayer systems, called L-systems (Le Novere and Shimizu 2001). An L-system consists of a formal grammar, which is a set of rules for rewriting strings of symbols. L-systems have been used extensively for modeling living systems, for example, plant growth and development, producing highly realistic renderings of plants with intricate morphologies and branching structures.
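As an illustration, the sketch below implements the parallel rewriting at the heart of an L-system, using Lindenmayer's original two-symbol model of algae growth (A → AB, B → A); the helper name `rewrite` is our own.

```python
def rewrite(s, rules, iterations):
    """Apply an L-system's production rules to every symbol in parallel,
    once per iteration. Symbols without a rule are copied unchanged."""
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's model of algae growth: A -> AB, B -> A.
algae = {"A": "AB", "B": "A"}
for n in range(5):
    print(rewrite("A", algae, n))
# Prints A, AB, ABA, ABAAB, ABAABABA; the string lengths follow
# the Fibonacci sequence: 1, 2, 3, 5, 8, ...
```

Adding drawing commands (e.g., turtle-graphics symbols) to the grammar is what turns such rewriting into the realistic plant renderings mentioned above.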
Wolfram (1999) used symbolic recursion as a basis for developing Mathematica, the computational mathematics system based on symbolic processing and term replacement. Unlike numeric programming languages, a symbolic programming language allows a variable to be a basic object and does not require a variable to be assigned a value before it is used in a program.
Any agent-based model is essentially a recursive system. Time is simulated by the repeated application of the agent updating rules. The genotype is the set of rules for the agent behaviors. The phenotype is a set of the patterns and structures that emerge from the computation. As in cellular automata and recursive systems, extremely complex structures emerge in agent-based models that are often unpredictable from examination of the agent rules.
One of the primary motivations for the field of ALife is to understand emergent processes, that is, the processes by which life emerges from its constituent elements. Langton writes: “The ‘key’ concept in ALife, is emergent behavior” (p. 2 in Langton 1989b). Complex systems exhibit patterns of emergence that are not predictable from inspection of the individual elements. Emergence is described as unexpected, unpredictable, or otherwise surprising. That is, the modeled system exhibits behaviors that are not explicitly built into the model. Unpredictability is due to the nonlinear effects that result from the interactions of entities having simple behaviors. Emergence by these definitions is something of a subjective process.
In biological systems, emergence is a central issue whether it be the emergence of the phenotype from the genotype, the emergence of protein complexes from genomic information networks (Kauffman 1993), or the emergence of consciousness from networks of millions of brain cells.
One of the motivations for agent-based modeling is to explore the emergent behaviors exhibited by the simulated system. In general, agent-based models often exhibit patterns and relationships that emerge from agent interactions. An example is the observed formation of groups of agents that collectively act in coherent and coordinated patterns. Complex adaptive systems, widely investigated by Holland in his agent-based model Echo (Holland 1995), are often structured in hierarchies of emergent structures. Emergent structures can collectively form higher-order structures, using the lower-level structures as building blocks. An emergent structure itself can take on new emergent behaviors. These structures in turn affect the agents from which the structure has emerged in a process called downward causation (Gilbert 2002). For example, in the real world, people organize and identify with groups, institutions, nations, etc. They create norms, laws, and protocols that in turn act on the individuals comprising the group.
How does one operationally define emergence with respect to agent-based modeling?
How does one automatically identify and measure the emergence of entities in a model?
How do agents that comprise an emergent entity perceived by an observer recognize that they are part of that entity?
Artificial chemistry is a subfield of ALife. One of the original goals of artificial chemistry was to understand how life could originate from prebiotic chemical processes. Artificial chemistry studies self-organization in chemical reaction networks by simulating chemical reactions between artificial molecules. Artificial chemistry specifies well-understood chemical reactions and other information such as reaction rates, relative molecular concentrations, probabilities of reaction, etc. These form a network of possibilities. The artificial substances and the networks of chemical reactions that emerge from the possibilities are studied through computation. Reactions are specified as recursive algebras and activated as term replacement systems (Fontana 1992).
The emergence of autocatalytic sets, or hypercycles, has been a prime focus of artificial chemistry (Eigen and Schuster 1979). A hypercycle is a self-contained system of molecules and a self-replicating, and thereby self-sustaining, cyclic linkage of chemical reactions. Hypercycles evolve through a process by which self-replicating entities compete for selection.
Structured topology (how interaction networks form)
Altruistic learning (how cooperation and exchange emerge)
Stigmergy (how agent communication is facilitated by using the environment as a means of information exchange among agents)
Digital Organisms
The widespread availability of personal computers spurred the development of ALife programs used to study evolutionary processes in silico. Tierra was the first system devised in which computer programs were successfully able to evolve and adapt (Ray 1991). Avida extended Tierra to account for the spatial distribution of organisms and other features (Ofria and Wilke 2004; Wilke and Adami 2002). Echo is a simulation framework for implementing models to investigate mechanisms that regulate diversity and information processing in complex adaptive systems (CAS), systems comprised of many interacting adaptive agents (Holland 1975, 1995). In implementations of Echo, populations evolve interaction networks, resembling species communities in ecological systems, which regulate the flow of resources.
Systems such as Tierra, Avida, and Echo simulate populations of digital organisms, based on the genotype/phenotype schema. They employ computational algorithms to mutate and evolve populations of organisms living in a simulated computer environment. Organisms are represented as strings of symbols, or agent attributes, in computer memory. The environment provides them with resources (computation time) they need to survive, compete, and reproduce. Digital organisms interact in various ways and develop strategies to ensure survival in resource-limited environments.
Digital organisms are extended to agent-based modeling by implementing individual-based models of food webs in a system called DOVE (Wilke and Chow 2006). Agent-based models allow a more complete representation of agent behaviors and their evolutionary adaptability at both the individual and population levels.
ALife and Computing
Creating lifelike forms through computation is central to artificial life. Is it possible to create life through computation? The capabilities and limitations of computation constrain the types of artificial life that can be created. The history of ALife has close ties with important events in the history of computation.
Alan Turing (1938) investigated the limitations of computation by developing an abstract and idealized computer, called a universal Turing machine (UTM). A UTM has an infinite tape (memory) and is therefore an idealization of any actual computer that may be realized. A UTM is capable of computing anything that is computable, that is, anything that can be derived via a logical, deductive series of statements. Are the algorithms used in today’s computers, and in ALife calculations and agent-based models in particular, as powerful as universal computers?
Any system that can effectively simulate a small set of logical operations (such as AND and NOT) can effectively produce any possible computation. Simple rule systems in cellular automata were shown to be equivalent to universal computers (von Neumann 1966; Wolfram 2002) and in principle able to compute anything that is computable – perhaps, even life!
Some have argued that life, in particular human consciousness, is not the result of a logical-deductive or algorithmic process and therefore not computable by a universal Turing machine. This problem is more generally referred to as the mind-body problem (Lucas 1961). Dreyfus (1979) argues against the assumption often made in the field of artificial intelligence that human minds function like general purpose symbol manipulation machines. Penrose (1989) argues that the rational processes of the human mind transcend formal logic systems. In a somewhat different view, biological naturalism contends (Searle 1990) that human behavior might be able to be simulated, but human consciousness is outside the bounds of computation.
Such philosophical debates are as relevant to agent-based modeling as they are to artificial intelligence, for they are the basis of answering the question of what kind of systems and processes agent-based models will ultimately be able, or unable, to simulate.
Artificial Life Algorithms
ALife draws on several biologically inspired computational algorithms (Olariu and Zomaya 2006). Bioinspired algorithms include those based on Darwinian evolution, such as evolutionary algorithms; those based on neural structures, such as neural networks; and those based on decentralized decision-making behaviors observed in nature. These algorithms are commonly used to model adaptation and learning in agent-based modeling or to optimize the behaviors of whole systems.
A basic genetic algorithm (GA) proceeds through the following steps:
1. Initialization: Generate an initial population of individuals. The individuals are unique and include specific encodings of attributes in chromosomes that represent the characteristics of the individuals.
2. Evaluation: Calculate the fitness of all individuals according to a specified fitness function.
3. Checking: If any of the individuals has achieved an acceptable level of fitness, stop; the problem is solved. Otherwise, continue with selection.
4. Selection: Select a pair of high-fitness individuals from the population for reproduction.
5. Crossover: Combine the chromosomes of the two selected individuals through a crossover operation, producing a pair of offspring.
6. Mutation: Randomly mutate the chromosomes of the offspring.
7. Replacement: Replace the least fit individuals in the population with the offspring.
8. Continue at Step 2.
Steps 5 and 6 above, the crossover and mutation operations, comprise the set of genetic operators inspired by nature. This series of steps constitutes a basic GA framework rather than a specific implementation. Actual GA implementations include numerous variations and alternatives in several of the steps.
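As one illustrative variation, the steps above can be sketched as follows. The bit-string encoding, population size, and elitist replacement scheme here are arbitrary choices for demonstration, not a canonical implementation.

```python
import random

def genetic_algorithm(fitness, length=16, pop_size=20, generations=200,
                      mutation_rate=0.05, target_fitness=None):
    """Minimal GA following the eight steps above: one crossover pair per
    generation, offspring replacing the two least-fit individuals."""
    # Step 1: Initialization - random bit-string chromosomes.
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Step 2: Evaluation - sort by fitness, best first.
        pop.sort(key=fitness, reverse=True)
        # Step 3: Checking - stop if an acceptable individual exists.
        if target_fitness is not None and fitness(pop[0]) >= target_fitness:
            break
        # Step 4: Selection - take the two fittest individuals.
        a, b = pop[0], pop[1]
        # Step 5: Crossover - single-point crossover, two offspring.
        point = random.randrange(1, length)
        kids = [a[:point] + b[point:], b[:point] + a[point:]]
        # Step 6: Mutation - flip each offspring bit with small probability.
        for kid in kids:
            for i in range(length):
                if random.random() < mutation_rate:
                    kid[i] = 1 - kid[i]
        # Step 7: Replacement - offspring replace the two least-fit members.
        pop[-2:] = kids
        # Step 8: loop back to evaluation.
    return max(pop, key=fitness)

# "One-max" toy problem: fitness is simply the number of 1 bits.
best = genetic_algorithm(fitness=sum, target_fitness=16)
print(sum(best))
```

Because the best individual is never replaced, the top fitness is monotone non-decreasing across generations, a simple elitism choice made here for clarity.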
Evolution strategies (ES) are similar to genetic algorithms but rely on mutation as their primary genetic operator.
Learning classifier systems (LCS) build on genetic algorithms and adaptively assign relative weights to sensor-action sets that result in the most positive outcomes relative to a goal.
Genetic programming (GP) has similar features to genetic algorithms, but instead of using 0s and 1s or other symbols for comprising chromosomes, GPs combine logical operations and directives in a tree structure. In effect, chromosomes in GPs represent whole computer programs that perform a variety of functions with varying degrees of success and efficiencies. GP chromosomes are evaluated against fitness or performance measures and recombined. Better-performing chromosomes are maintained and expand their representation in the population. For example, an application of a GP is to evolve a better-performing rule set that represents an agent’s behavior.
Evolutionary programming (EP) is a similar technique to genetic programming, but relies on mutation as its primary genetic operator.
Biologically Inspired Computing
Artificial neural networks (ANN) are another type of commonly used biologically inspired algorithm (Mehrotra et al. 1996). An artificial neural network uses mathematical models based on the structures observed in neural systems. An artificial neuron contains a stimulus-response model of neuron activation based on thresholds of stimulation. In modeling terms, neural networks are equivalent to nonlinear, statistical data modeling techniques. Artificial neural networks can be used to model complex relationships between inputs and outputs and to find patterns in data that are dynamically changing. An ANN is adaptive in that changes in its structure are based on external or internal information that flows through the network. The adaptive capability makes ANN an important technique in agent-based models.
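A single artificial neuron can be sketched in a few lines. The logistic activation and the particular weights chosen below (which make the neuron approximate a logical AND) are illustrative assumptions, not properties of ANNs in general.

```python
import math

def neuron(inputs, weights, bias):
    """Stimulus-response model of one artificial neuron: a weighted sum
    of inputs passed through a sigmoid (soft threshold) activation."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # logistic squashing to (0, 1)

# A neuron approximating logical AND: it fires (output near 1)
# only when both inputs are on.
w, b = [10.0, 10.0], -15.0
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, round(neuron(x, w, b), 3))
```

In a full network, many such neurons are layered, and a training procedure adjusts the weights from data; that adaptive weight adjustment is what gives ANNs the learning capability described above.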
Swarm intelligence algorithms simulate the movement and interactions of large numbers of ants or particles over a search space. In terms of agent-based modeling, the ants or particles are the agents, and the search space is the environment. Agents have position and state as attributes. In the case of particle swarm optimization, agents also have velocity.
In a typical ant colony, ants search randomly until one of them finds food.
Then they return to their colony and lay down a chemical pheromone trail along the way.
When other ants find such a pheromone trail, they are more likely to follow the trail rather than to continue to search randomly.
As other ants find the same food source, they return to the nest, reinforcing the original pheromone trail as they return.
As more and more ants find the food source, the ants eventually lay down a strong pheromone trail to the point that virtually all the ants are directed to the food source.
As the food source is depleted, fewer ants are able to find the food, and fewer ants lay down a reinforcing pheromone trail; the pheromone naturally evaporates, and eventually, no ants proceed to the food source, as the ants shift their attention to searching for new food sources.
In an ant colony optimization computational model, the optimization problem is represented as a graph, with nodes representing places and links representing possible paths. An ant colony algorithm mimics ant behavior with simulated ants moving from node to node in the graph, laying down pheromone trails, etc. The process by which ants communicate indirectly by using the environment as an intermediary is known as stigmergy (Bonabeau et al. 1999) and is commonly used in agent-based modeling.
Particle swarm optimization (PSO) is another decentralized problem-solving technique in which a swarm of particles is simulated as it moves over a search space in search of a global optimum. A particle stores its best position found so far in its memory and is aware of the best positions obtained by its neighboring particles. The velocity of each particle adapts over time based on the locations of the best global and local solutions obtained so far, incorporating a degree of stochastic variation in the updating of the particle positions at each iteration.
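A minimal PSO sketch following that description; the search bounds, inertia weight, and attraction coefficients below are common textbook defaults chosen for illustration, not prescribed values.

```python
import random

def pso(f, dim=2, swarm_size=30, iterations=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over [-5, 5]^dim with a particle swarm. Each particle
    remembers its own best position; the swarm shares the global best."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]          # each particle's personal best
    gbest = min(pbest, key=f)            # best position found by the swarm
    for _ in range(iterations):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia plus stochastic pulls toward
                # the personal best and the global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Minimize the sphere function; the optimum is at the origin.
best = pso(lambda p: sum(x * x for x in p))
print(best)
```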
Artificial Life Algorithms and Agent-Based Modeling
Biologically inspired algorithms are often used with agent-based models. For example, an agent’s behavior and its capacity to learn from experience or to adapt to changing conditions can be modeled abstractly through the use of genetic algorithms or neural networks. In the case of a GA, a chromosome effectively represents a single agent action (output) given a specific condition or environmental stimulus (input). Behaviors that are acted on and enable the agent to respond better to environmental challenges are reinforced and acquire a greater share of the chromosome pool. Behaviors that fail to improve the organism’s fitness diminish in their representation in the population.
Evolutionary programming can be used to directly evolve programs that represent agent behaviors. For example, Manson (2006) develops a bounded rationality model using evolutionary programming to solve an agent multi-criteria optimization problem.
Artificial neural networks have also been applied to modeling adaptive agent behaviors, in which an agent derives a statistical relationship between the environmental conditions it faces, its history, and its actions, based on feedback on the success or failures of its actions and the actions of others. For example, an agent may need to develop a strategy for bidding in a market, based on the success of its own and other’s previous bids and outcomes.
Finally, swarm intelligence approaches are agent based in their basic structure, as described above. They can also be used for system optimization through the selection of appropriate parameters for agent behaviors.
Population: A population of organisms or individuals is considered. The population may be diversified, and individuals may vary their characteristics, behaviors, and accumulated resources, in both time and space.
Interaction: Interaction requires sensing of the immediate locale, or neighborhood, on the part of an individual. An organism can simply become “aware” of other organisms in its vicinity, or it may have a richer set of interactions with them. The individual also interacts with its (non-agent) environment in its immediate locale. This requirement introduces spatial aspects into the problem, as organisms must negotiate the search for resources through time and space.
Sustainment and renewal: Sustainment and renewal require the acquisition of resources. An organism needs to sense, find, ingest, and metabolize resources or nourishment as an energy source for processing into other forms of nutrients. Resources may be provided by the environment, i.e., outside of the agents themselves, or by other agents. The need for sustainment leads to competition for resources among organisms. Competition could also be a precursor to cooperation and more complex emergent social structures if this proves to be a more effective strategy for survival.
Self-reproduction and replacement: Organisms reproduce by following instructions at least partially embedded within themselves and interacting with the environment and other agents. Passing on traits to the next generation implies a requirement for trait transmission. Trait transmission requires encoding an organism’s traits in a reduced form, that is, a form that contains less than the total information representing the entire organism. It also requires a process for transforming the organism’s traits into a viable set of possible new traits for a new organism. Mutation and crossover operators enter into such a process. Organisms also leave the population and are replaced by other organisms, possibly with different traits. The organisms can be transformed through changes in their attributes and behaviors, as in, for example, learning or aging. The populations of organisms can be transformed through the introduction of new organisms and replacement, as in evolutionary adaptation.
As we will see in the section that follows, many of the essential aspects of ALife have been incorporated into the development of agent-based models.
ALife in Agent-Based Modeling
This section briefly touches on the ways in which ALife has motivated agent-based modeling. The form of agent-based models, in terms of their structure and appearance, is directly based on early models from the field of ALife. Several application disciplines in agent-based modeling have been spawned and infused by ALife concepts. Two are covered here: the application of agent-based modeling to social systems and to biological systems.
Agent-Based Modeling Topologies
Agent-based modeling owes much to artificial life in both form and substance. Modeling a population of heterogeneous agents with a diverse set of characteristics is a hallmark of agent-based modeling. The agent perspective is unique among simulation approaches, distinct from the process-oriented and state-variable perspectives taken by other simulation methods.
As we have seen, agents interact with a small set of neighbor agents in a local area. Agent neighborhoods are defined by how agents are connected, the agent interaction topology. Cellular automata represent agent neighborhoods by using a grid in which the agents exist in the cells, one agent per cell, or as the nodes of the lattice of the grid. The cells immediately surrounding an agent comprise the agent’s neighborhood, and the agents that reside in the neighborhood cells comprise the neighbors. Many agent-based models have been based on this cellular automata spatial representation. The transition from a cellular automaton, such as the Game of Life, to an agent-based model is accomplished by allowing agents to be distinct from the cells on which they reside and allowing the agents to move from cell to cell across the grid. Agents move according to the dictates of their behaviors, interacting with other agents that happen to be in their local neighborhoods along the way.
Cellular automata grids (agents are cells or are within cells) or lattices (agents are grid points)
Networks, in which agents are vertices and agent relationships are edges
Continuous space, in one, two, or three dimensions
Aspatial random interactions, in which pairs of agents are randomly selected
Geographical information systems (GIS), in which agents move over geographically defined patches, relaxing the one agent per cell restriction.
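The first of these topologies, and the transition from cellular automaton to agent-based model described above, can be sketched in a few lines. This is an illustrative example only; the grid size, agent count, and random-movement rule are assumptions made for the sketch, not taken from any particular model:

```python
import random

GRID = 10  # hypothetical grid size for the sketch

def moore_neighborhood(x, y, size=GRID):
    """The 8 cells surrounding (x, y) on a toroidal grid (edges wrap)."""
    return [((x + dx) % size, (y + dy) % size)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

class Agent:
    """An agent distinct from the cell it occupies, free to move."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, occupied):
        # Toy behavior: move to a random empty neighboring cell, if any.
        empty = [c for c in moore_neighborhood(self.x, self.y)
                 if c not in occupied]
        if empty:
            self.x, self.y = random.choice(empty)

agents = [Agent(random.randrange(GRID), random.randrange(GRID))
          for _ in range(20)]
for _ in range(5):  # a few simulation ticks
    occupied = {(a.x, a.y) for a in agents}
    for a in agents:
        occupied.discard((a.x, a.y))
        a.step(occupied)
        occupied.add((a.x, a.y))
```

Replacing `moore_neighborhood` with an adjacency list or a distance query would yield the network and continuous-space topologies in the same framework.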
Social Agent-Based Modeling
Early social agent-based models were based on ALife’s cellular automata approach. In applications of agent-based modeling to social processes, agents represent people or groups of people, and agent relationships represent processes of social interaction (Gilbert and Troitzsch 1999).
Identifying the social interaction mechanisms by which cooperative behavior emerges among individuals and groups has been addressed using agent-based modeling and evolutionary game theory. Evolutionary game theory accounts for how repeated interactions among players in a game-theoretic framework shape the development and evolution of the players' strategies. Using a cellular automata approach in which agents on the grid employed a variety of different strategies, Axelrod showed that a simple tit-for-tat strategy of reciprocal behavior toward individuals is enough to establish sustainable cooperative behavior (Axelrod 1984, 1997). In addition, Axelrod investigated strategies that were self-sustaining and robust in the sense that they resisted invasion by agents having other strategies.
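The iterated prisoner's dilemma at the heart of Axelrod's tournaments can be sketched as follows. The payoff values are the standard ones for the game; the round count is an arbitrary choice for the example:

```python
# Payoffs (row player, column player): C = cooperate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then echo the opponent's previous move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Play an iterated game; each strategy sees the other's past moves."""
    seen_by_a, seen_by_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        seen_by_a.append(move_b)  # A remembers B's move, and vice versa
        seen_by_b.append(move_a)
    return score_a, score_b
```

Two tit-for-tat players cooperate throughout and each earn 30 over ten rounds; against an always-defect player, tit-for-tat concedes only the first round, illustrating why reciprocity is both cooperative and hard to exploit.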
Culture and Generative Social Science
Dawkins, who has written extensively on aspects of Darwinian evolution, coined the term meme as the smallest element of culture that is transmissible between individuals, similar to the notion of the gene as being the primary unit of transmitting genetic information (Dawkins 1989). Several social agent-based models are based on a meme representation of culture as shared or collective agent attributes.
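Representing culture as shared agent attributes can be sketched in the spirit of Axelrod's model of cultural dissemination (from the 1997 collection cited above); the agent count, feature count, and step count below are assumptions for the example:

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

# Each agent's "culture" is a list of feature values (meme-like traits).
N_AGENTS, N_FEATURES, N_TRAITS = 20, 5, 3

agents = [[random.randrange(N_TRAITS) for _ in range(N_FEATURES)]
          for _ in range(N_AGENTS)]

def interact(a, b):
    """Similar agents interact and become more similar: one adopts a trait."""
    shared = sum(x == y for x, y in zip(a, b))
    # Interaction probability proportional to cultural similarity;
    # identical or fully dissimilar pairs do not change.
    if 0 < shared < N_FEATURES and random.random() < shared / N_FEATURES:
        i = random.choice([k for k in range(N_FEATURES) if a[k] != b[k]])
        a[i] = b[i]

for _ in range(5000):
    a, b = random.sample(agents, 2)
    interact(a, b)
```

Over many interactions, clusters of agents converge on common trait lists, a simple emergent analogue of shared culture.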
In the broadest terms, social agent-based simulation is concerned with social interaction and social processes. Emergence enters into social simulation through generative social science, whose goal is to model social processes as emergent from social interactions. Epstein has argued that social processes are not fully understood unless one can theorize how they work at a deep level and have them emerge as part of a computational model (Epstein 2007). More recent work has treated culture as a fluid and dynamic process subject to interpretation by individual agents, one more complex than the genotype/phenotype framework would suggest.
ALife and Biology
ALife research has motivated many agent-based computational models of biological systems at all scales, from the cellular, or even subcellular molecular, level as the basic unit of agency up to complex organisms embedded in larger structures such as food webs and entire ecosystems.
From Cellular Automata to Cells
Cellular automata are a natural fit for modeling cellular systems (Alber et al. 2003; Ermentrout and Edelstein-Keshet 1993). One approach uses the cellular automata grid to model structures of stationary cells comprising a tissue matrix; each cell is a tissue agent. Mobile cells, such as pathogens and antibodies, are also modeled as agents; they diffuse through the tissue and interact with tissue cells and other colocated mobile cells. This approach is the basis for agent-based models of the immune system. Celada and Seiden (1992) used bit strings to model the cell receptors in a cellular automaton model of the immune system called IMMSIM. This approach was extended to a more general agent-based model and implemented to maximize the number of cells that could be modeled in the CIMMSIM and ParImm systems (Bernaschi and Castiglione 2001). The Basic Immune Simulator uses a general agent-based framework (the Repast agent-based modeling toolkit) to model the interactions between the cells of the innate and adaptive immune systems (Folcik et al. 2007). These approaches for modeling the immune system have inspired several agent-based models of intrusion detection for computer networks (see, e.g., Azzedine et al. 2007) and have found use in modeling the development and spread of cancer (Preziosi 2003).
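The bit-string receptor idea can be sketched as follows. The encoding here is a simplified stand-in rather than the actual IMMSIM representation: a receptor binds an antigen when their bit strings are sufficiently complementary, with affinity counted as the number of complementary bit positions (the string length and binding threshold are illustrative):

```python
BITS = 8  # receptor/antigen string length (illustrative)

def affinity(receptor, antigen):
    """Number of positions where receptor and antigen bits are complementary.

    XOR is 1 exactly where the two bits differ, i.e., where they
    complement each other, so the affinity is the popcount of the XOR.
    """
    return bin((receptor ^ antigen) & ((1 << BITS) - 1)).count('1')

def binds(receptor, antigen, threshold=6):
    # Binding occurs above an affinity threshold (threshold is an assumption).
    return affinity(receptor, antigen) >= threshold
```

A perfectly complementary pair has affinity `BITS`; identical strings have affinity 0. Varying the threshold tunes how specific recognition is, which is the lever such immune models use to study cross-reactivity.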
At the more macroscopic level, agent-based epidemic models have been developed using network topologies. These models include people and some representation of pathogens as individual agents for natural (Bobashev et al. 2007) and potentially man-made (Carley et al. 2006) epidemics.
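An agent-based epidemic on a network topology can be sketched as a susceptible-infected-recovered (SIR) process over a random contact graph. The network density and the infection and recovery probabilities below are illustrative assumptions, not values from the cited models:

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

N, P_EDGE, P_INFECT, P_RECOVER = 100, 0.05, 0.3, 0.1

# Build a random contact network (Erdos-Renyi style adjacency sets).
neighbors = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < P_EDGE:
            neighbors[i].add(j)
            neighbors[j].add(i)

state = {i: 'S' for i in range(N)}  # S, I, or R per person-agent
state[0] = 'I'                      # index case

for _ in range(50):  # simulation ticks
    newly_infected, newly_recovered = [], []
    for i, s in state.items():
        if s == 'I':
            for j in neighbors[i]:
                if state[j] == 'S' and random.random() < P_INFECT:
                    newly_infected.append(j)
            if random.random() < P_RECOVER:
                newly_recovered.append(i)
    # Apply updates after the sweep so all agents act on the same tick.
    for j in newly_infected:
        state[j] = 'I'
    for i in newly_recovered:
        state[i] = 'R'
```

Because infection travels only along edges, the network structure, rather than a mixing rate in an aggregate equation, determines how far and fast the epidemic spreads.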
Early models of ecosystems used approaches adapted from physical modeling, especially models of idealized gases based on statistical mechanics. More recently, individual-based models have been developed to represent the full range of individual diversity by explicitly modeling individual attributes or behaviors and aggregating across individuals for an entire population (DeAngelis and Gross 1992). Agent-based approaches model a diverse set of agents and their interactions based on their relationships, incorporating adaptive behaviors as appropriate. For example, food webs represent the complex, hierarchical network of agent relationships in local ecosystems (Peacor et al. 2006). Agents are individuals or species representatives. Adaptation and learning for agents in such food webs can be modeled to explore diversity, relative population sizes, and resiliency to environmental insult.
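The individual-based idea can be sketched minimally: each organism carries its own attributes, and population-level quantities are obtained by aggregating over individuals rather than from a single aggregate equation. The energy dynamics and thresholds here are invented for the illustration:

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# Each individual has its own energy level (heterogeneous initial state).
population = [{'energy': random.uniform(1.0, 5.0)} for _ in range(200)]

def step(pop):
    """One tick: individuals forage with variable success; some starve."""
    survivors = []
    for ind in pop:
        ind['energy'] += random.uniform(-1.0, 1.0)  # foraging gain or loss
        if ind['energy'] > 0:                       # starvation threshold
            survivors.append(ind)
    return survivors

for _ in range(10):
    population = step(population)

# Population statistics emerge by aggregating across individuals.
mean_energy = sum(ind['energy'] for ind in population) / len(population)
```

Because survival depends on each individual's own trajectory, the surviving population's mean energy is biased upward relative to a model that tracked only the average, which is precisely the kind of effect individual-based models are built to capture.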
Adaptation and Learning in Agent-Based Models
Biologists consider adaptation to be an essential part of the process of evolutionary change. Adaptation occurs at two levels: the individual level and the population level. In parallel with these notions, agents in an ABM adapt by changing their individual behaviors or by changing their proportional representation in the population. Agents adapt their behaviors at the individual level through learning from experience in their modeled environment.
With respect to agent-based modeling, theories of learning by individual agents or collectives of agents, as well as algorithms for modeling learning, become important. Machine learning comprises algorithms for recognizing patterns in data (as in data mining) through techniques such as supervised learning, unsupervised learning, and reinforcement learning (Alpaydın 2004; Bishop 2007). Genetic algorithms (Goldberg 1989) and related techniques such as learning classifier systems (Holland et al. 2000) are commonly used to represent agent learning in agent-based models. In ABM applications, agents learn through interactions with the simulated environment in which they are embedded as the simulation proceeds through time, and they modify their behaviors accordingly.
Agents may also adapt collectively at the population level. Those agents having behavioral rules better suited to their environments survive and thrive, and those agents not so well suited are gradually eliminated from the population.
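Population-level adaptation can be sketched with a simple genetic algorithm: behavior rules are encoded as bit strings, fitter rules are selected and recombined, and the population's composition shifts over generations. The fitness function, encoding, and parameters below are toy assumptions chosen to keep the example self-contained:

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

GENES, POP, GENERATIONS = 16, 30, 40

def fitness(genome):
    # Toy fitness: count of 1-bits stands in for "better suited" rules.
    return sum(genome)

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENES)]
              for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]  # selection: top half survives intact
    offspring = [mutate(crossover(random.choice(parents),
                                  random.choice(parents)))
                 for _ in range(POP - len(parents))]
    population = parents + offspring

best = max(fitness(g) for g in population)
```

No individual agent learns anything here; the population adapts because less fit rule sets are gradually eliminated, which is exactly the second mode of adaptation described above.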
Future Directions
Agent-based modeling continues to be inspired by ALife – in the fundamental questions it is trying to answer, in the algorithms it employs to model agent behaviors and solve agent-based models, and in the computational architectures used to implement agent-based models. The futures of ALife and ABM will continue to be intertwined in essential ways in the coming years.
Computational advances will continue at an ever-increasing pace, opening new vistas for expanding the scale of models that are possible. These advances will take several forms, including advances in computer hardware such as new chip designs, multicore processors, and advanced integrated hardware architectures. Software that takes advantage of these designs, in particular computational algorithms and modeling techniques, will continue to provide opportunities for expanding the scale of applications and for including more features in agent-based models and ALife applications alike. These advances will create opportunities for applying ABM to ALife in both scientific research and policy analysis.
Real-world optimization problems routinely faced by business and industry will increasingly be solved by ALife-inspired algorithms. The use of ALife-inspired agent-based algorithms for optimization will become more widespread because of their natural implementation and their ability to handle ill-defined problems.
Emergence is a key theme of ALife. ABM offers the capability to model the emergence of order in a variety of complex and complex adaptive systems. Inspired by ALife, identifying the fundamental mechanisms responsible for higher-order emergence and exploring these with agent-based modeling will be an important and promising research area.
Advancing the social sciences beyond the genotype/phenotype framework to address the generative nature of social systems in their full complexity is a requirement for advancing computational social models; as noted above, recent work has begun to treat culture as a fluid and dynamic process subject to interpretation by individual agents.
Agent-based modeling will continue to be the avenue for exploring new constructs in ALife. If true artificial life is ever developed in silico, it will most likely be done using the methods and tools of agent-based modeling.
Bibliography
- Alpaydın E (2004) Introduction to machine learning. MIT Press, Cambridge
- Axelrod R (1984) The evolution of cooperation. Basic Books, New York
- Axelrod R (1997) The complexity of cooperation: agent-based models of competition and collaboration. Princeton University Press, Princeton
- Back T (1996) Evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms. Oxford University Press, New York
- Berlekamp ER, Conway JH, Guy RK (2003) Winning ways for your mathematical plays, 2nd edn. AK Peters, Natick
- Bishop CM (2007) Pattern recognition and machine learning. Springer, New York
- Bobashev GV, Goedecke DM, Yu F, Epstein JM (2007) A hybrid epidemic model: combining the advantages of agent-based and equation-based approaches. In: Henderson SG, Biller B, Hsieh M-H, Shortle J, Tew JD, Barton RR (eds) Proceedings of the 2007 winter simulation conference, Washington, pp 1532–1537
- Dawkins R (1989) The selfish gene, 2nd edn. Oxford University Press, Oxford
- DeAngelis DL, Gross LJ (eds) (1992) Individual-based models and approaches in ecology: populations, communities and ecosystems. Proceedings of a symposium/workshop, Knoxville, 16–19 May 1990. Chapman & Hall, New York
- Dreyfus HL (1979) What computers can’t do: the limits of artificial intelligence. Harper & Row, New York
- Eiben AE, Smith JE (2007) Introduction to evolutionary computing, 2nd edn. Springer, New York
- Engelbrecht AP (2006) Fundamentals of computational swarm intelligence. Wiley, Hoboken
- Epstein JM (2007) Generative social science: studies in agent-based computational modeling. Princeton University Press, Princeton
- Epstein JM, Axtell R (1996) Growing artificial societies: social science from the bottom up. MIT Press, Cambridge
- Folcik VA, An GC, Orosz CG (2007) The basic immune simulator: an agent-based model to study the interactions between innate and adaptive immunity. Theor Biol Med Model 4(39):1–18. http://www.tbiomed.com/content/4/1/39
- Fontana W (1992) Algorithmic chemistry. In: Langton CG, Taylor C, Farmer JD, Rasmussen S (eds) Artificial life II: proceedings of the workshop on artificial life, Santa Fe, Feb 1990, Santa Fe Institute studies in the sciences of complexity, vol X. Addison-Wesley, Reading, pp 159–209
- Gilbert N (2002) Varieties of emergence. In: Macal C, Sallach D (eds) Proceedings of the agent 2002 conference on social agents: ecology, exchange and evolution, Chicago, 11–12 Oct 2002, pp 1–11. Available on CD and at www.agent2007.anl.gov
- Gilbert N, Troitzsch KG (1999) Simulation for the social scientist. Open University Press, Buckingham
- Holland JH (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor
- Holland JH (1995) Hidden order: how adaptation builds complexity. Addison-Wesley, Reading
- Holland JH, Booker LB, Colombetti M, Dorigo M, Goldberg DE, Forrest S, Riolo RL, Smith RE, Lanzi PL, Stolzmann W, Wilson SW (2000) What is a learning classifier system? In: Lanzi PL, Stolzmann W, Wilson SW (eds) Learning classifier systems, from foundations to applications. Springer, London, pp 3–32
- Kauffman SA (1993) The origins of order: self-organization and selection in evolution. Oxford University Press, Oxford
- Langton CG (1989a) Preface. In: Langton CG (ed) Artificial life: proceedings of an interdisciplinary workshop on the synthesis and simulation of living systems, Los Alamos, Sept 1987. Addison-Wesley, Reading, pp xv–xxvi
- Langton CG (1989b) Artificial life. In: Langton CG (ed) Artificial life: proceedings of an interdisciplinary workshop on the synthesis and simulation of living systems, Los Alamos, Sept 1987, Santa Fe Institute studies in the sciences of complexity, vol VI. Addison-Wesley, Reading, pp 1–47
- Langton CG (1992) Life at the edge of chaos. In: Langton CG, Taylor C, Farmer JD, Rasmussen S (eds) Artificial life II: proceedings of the workshop on artificial life, Santa Fe, Feb 1990, Santa Fe Institute studies in the sciences of complexity, vol X. Addison-Wesley, Reading, pp 41–91
- Mehrotra K, Mohan CK, Ranka S (1996) Elements of artificial neural networks. MIT Press, Cambridge
- Peacor SD, Riolo RL, Pascual M (2006) Plasticity and species coexistence: modeling food webs as complex adaptive systems. In: Pascual M, Dunne JA (eds) Ecological networks: linking structure to dynamics in food webs. Oxford University Press, New York, pp 245–270
- Penrose R (1989) The emperor’s new mind: concerning computers, minds, and the laws of physics. Oxford University Press, Oxford
- Poundstone W (1985) The recursive universe. Contemporary Books, Chicago, 252 pp
- Ray TS (1991) An approach to the synthesis of life (Tierra simulator). In: Langton CG, Taylor C, Farmer JD, Rasmussen S (eds) Artificial life II: proceedings of the workshop on artificial life. Addison-Wesley, Redwood City, pp 371–408
- Rechenberg I (1973) Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog, Stuttgart
- Searle JR (1990) Is the brain a digital computer? Presidential address to the American Philosophical Association
- Taub AH (ed) (1961) John von Neumann: collected works. vol V: Design of computers, theory of automata and numerical analysis (delivered at the Hixon Symposium, Pasadena, Sept 1948). Pergamon Press, Oxford
- von Neumann J (1966) In: Burks AW (ed) Theory of self-reproducing automata. University of Illinois Press, Champaign
- Wilke CO, Chow SS (2006) Exploring the evolution of ecosystems with digital organisms. In: Pascual M, Dunne JA (eds) Ecological networks: linking structure to dynamics in food webs. Oxford University Press, New York, pp 271–286
- Wolfram S (1984) Universality and complexity in cellular automata. Physica D 10:1–35
Books and Reviews
- Artificial Life (journal) web page (2008) http://www.mitpressjournals.org/loi/artl. Accessed 8 Mar 2008
- Banks ER (1971) Information processing and transmission in cellular automata. PhD dissertation, Massachusetts Institute of Technology
- Batty M (2007) Cities and complexity: understanding cities with cellular automata, agent-based models, and fractals. MIT Press, Cambridge
- Ganguly N, Sikdar BK, Deutsch A, Canright G, Chaudhuri PP (2008) A survey on cellular automata. www.cs.unibo.it/bison/publications/CAsurvey.pdf
- Gutowitz H (ed) (1991) Cellular automata: theory and experiment. Special issue of Physica D, 499 pp
- International Society for Artificial Life web page (2008) www.alife.org. Accessed 8 Mar 2008
- Jacob C (2001) Illustrating evolutionary computation with Mathematica. Academic Press, San Diego, 578 pp
- Fu MC, Glover FW, April J (2005) Simulation optimization: a review, new developments, and applications. In: Proceedings of the 37th conference on winter simulation, Orlando
- Miller JH, Page SE (2007) Complex adaptive systems: an introduction to computational models of social life. Princeton University Press, Princeton
- Pascual M, Dunne JA (eds) (2006) Ecological networks: linking structure to dynamics in food webs, Santa Fe Institute studies on the sciences of complexity. Oxford University Press, New York
- Simon H (2001) The sciences of the artificial. MIT Press, Cambridge
- Sims K (1991) Artificial evolution for computer graphics. ACM SIGGRAPH ’91, Las Vegas, July 1991, pp 319–328
- Sims K (1994) Evolving 3D morphology and behavior by competition. Artif Life IV:28–39
- Toffoli T, Margolus N (1987) Cellular automata machines: a new environment for modeling. MIT Press, Cambridge, 200 pp
- Tu X, Terzopoulos D (1994) Artificial fishes: physics, locomotion, perception, behavior. In: Proceedings of SIGGRAPH ’94, 24–29 July 1994, Orlando, pp 43–50
- Weisbuch G (1991) Complex systems dynamics: an introduction to automata networks, translated from French by Ryckebusch S. Addison-Wesley, Redwood City
- Wiener N (1948) Cybernetics, or control and communication in the animal and the machine. Wiley, New York