Ludotronics

A Comprehensive Game Design Methodology
From First Ideas to Spectacular Pitches and Proposals

The content of this website is licensed under a Creative Commons Attribution–NonCommercial–ShareAlike 4.0 International (CC BY-NC-SA 4.0) License. You can freely use and share everything non-commercially and under the same license as long as you attribute it to and link to:

J. Martin | betweendrafts.com | ludotronics.net


Level Two: Interactivity

Process Phase Level Two

Beat 2. Reactions

From Interactions to Emergence

This beat, as a warning, is probably the most arduous you will encounter in this territory. We will race through a dizzying array of abstract concepts: determinism and unpredictability; randomness, contingency, and complexity; and emergence and self-improving subsystems. To be able to do that at all within the confines of these pages, we have to trim these concepts down, sometimes brutishly in Procrustean fashion, as we did with other concepts before. Similar to scene openings in screenwriting, we will jump into these concepts as close to their contributions to our purpose as possible, which is the purpose of designing great games. So, if you have prior knowledge in any of these fields, prepare to catch yourself muttering “wait, it’s more complicated than that!” all the time. Or, if you have no prior knowledge whatsoever in these fields, keep your jar of aspirins handy.

From every game state to the next, every output of every interaction between your game’s agents—a simple process, a complex in-game AI, a player—is determined by the rules.

But that doesn’t mean that games are deterministic by nature. Applying the concept of determinism from math or physics, a game could be called deterministic when there is a causal chain from start to finish, and no randomness is involved in the development from its initial game state to its final game state at all. In other words, if the initial conditions are the same, the output will be the same. With most games, that’s obviously not the case. But what is the case?

We can differentiate games along two major conditions, predetermined and undetermined. But being one or the other isn’t about endings! Final game states as such aren’t of great interest in that regard. What makes match-based games interesting is not that one player or team wins, but how that player or team wins. What makes arcade-type games interesting is not that the player fails, but the measure of player success at the point of failure. What makes puzzle-type games interesting is not that the player succeeds in the end, but how, and maybe how fast, that success is achieved. One or the other also always applies to management or simulation games. In dramatically complete games that have a story, endings or multiple endings are certainly of interest, but wouldn’t be so at all without their preceding plot structures and perhaps player decisions that lead up to these endings.

Where being predetermined or undetermined is of importance instead is how the player proceeds from the initial game state to the final game state in terms of possible paths. This can be weakly or strongly predetermined or weakly or strongly undetermined.

On one side, there are puzzle-type games, think crossword puzzles or memory or match 3 games; role-playing games; management and simulation games; or match-based games in general. For such game types, the path from the initial to the final game state is largely undetermined. On the other side, there are point-and-click and direct-control adventure games; action-adventure games; and story-focused games in general. Here, the path from the initial to the final game state is largely predetermined. Both lists aren’t exhaustive, and there will always be special cases. Arcade-style games, e.g., can be largely determined, like a Galaxian level, or seem to leave room for indeterminacy, like a Pac-Man level (we’ll come back to that). It’s the general principle that counts.

In games that are largely undetermined in this manner, the number of possible paths between the initial and the final game state can differ considerably, and with it the perceived quality of these paths with regard to the playing experience. For match-based games, as an example, it’s among the criteria that differentiate board games like chess from board games similar to Pachisi. (Other important differentiators like randomness and complexity will be discussed later in this beat.) But the number of possible paths can shrink over time, even to the point that a game switches its condition from being undetermined to predetermined. That’s because there’s always the possibility, for any game, that a dominant strategy exists that always leads to success, and that the game becomes “solved” so that a player can always win through “perfect play.” Examples of solved games are checkers and Connect Four. Are there games that are completely immune to this? Possibly, but it’s hard to tell. So far, whenever computing capacities caught up with a game’s level of complexity, it didn’t take long until solutions were found—or at least proof that such solutions exist, as is the case with Hex (11×11 standard board). The bets are open.

We will pick up on this later to discuss design patterns attached to creating an unlimited or nearly unlimited number of possible paths. But before we can do that, we need to clear something up. That we enjoy playing undetermined games is no mystery. But why is it that we can also enjoy predetermined games?

There’s another aspect involved—“predetermined” doesn’t equal “predictable.” Even “deterministic” doesn’t! Chaotic systems like weather patterns, for example, are deterministic but become so unpredictable over time that they appear to be non-deterministic. Schrödinger’s equation for the evolution of the wave function in quantum mechanics is fully deterministic but not predictable outside of “determined probabilities.” The interesting thing is that even strongly predetermined games need not be predictable, let alone fully predictable. The same applies to solved games with solutions that are too complex to commit to memory.

The answer to our question is, then, that players can enjoy games that are strongly predetermined as long as they offer an element of unpredictability. In other words, as a game designer, you always have to design a certain amount of unpredictability to make up for whatever determinacy your game contains!

To design unpredictability in the Interactivity territory, we will look at three major strategies that you can apply in different circumstances: controlled randomness, constrained contingency, and input complexity. (Unpredictability attached to dramatic structure belongs in the Architectonics territory and is discussed in Level Five: Architectonics.) Controlled randomness isn’t too challenging and constrained contingency isn’t that complicated either—neither will keep us long. Input complexity, though, will be vast and sprawling, taking us deep into the territory of subsystems, self-organization, emergence, and the issue of self-improving agents.

But first, controlled randomness. Can a game have truly random elements? Leaving aside flavors of randomness that would engage us with various subfields of physics, the answer appears to be yes. You can see it at work, e.g., in the distribution of cards after a Skat deck has been properly shuffled. But how do you shuffle anything in a video game? The problem is that in the digital realm, players would need cryptographic hardware on their devices that creates truly random values from statistically random physical input like thermal noise. Algorithms can’t produce true randomness! But, with respect to video games’ prevalent rinse-&-repeat dynamics, that’s not necessarily a bad thing. With truly randomized values, it’s very hard to create the same input/output sequence twice, let alone limitlessly often, so the popular learning-by-dying recipe would be severely curtailed.
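The upside of algorithmic pseudo-randomness for learning-by-dying can be sketched in a few lines of Python: a generator initialized with a fixed seed reproduces exactly the same “random” sequence on every run. (The function name and values below are invented for illustration, not taken from the text.)

```python
import random

def enemy_spawns(seed, count=5):
    """Pseudo-random enemy spawn points on a 32x32 grid.

    Same seed, same sequence, every single run -- which is exactly
    what lets players replay an identical challenge and learn by dying.
    """
    rng = random.Random(seed)  # a private generator with a fixed seed
    return [(rng.randint(0, 31), rng.randint(0, 31)) for _ in range(count)]

# Two runs with the same seed reproduce the exact same "random" level:
assert enemy_spawns(42) == enemy_spawns(42)
```

A hardware source of true randomness would make that guarantee impossible; with a pseudo-random generator, reproducibility is a single stored integer away.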

Then, could the whole game be truly random and still enjoyable? Not for long. But true randomness alone isn’t the whole picture: players first have to realize that the game is truly random. As long as they don’t know that, they can enjoy it. Take War, for example. In this card game, every move and every game state up to and including its final game state is truly random. But, as Jesse Schell astutely remarks in The Art of Game Design, children enjoy playing War as long as they believe that they can affect the value of the next card through “magic” rituals. (If you find that amusing, just watch how grown-up pen & paper roleplayers occasionally treat their dice. Or, in more serious fashion, gambling table rituals.)

This is an important insight. As in a game that is fully predictable, there is no player agency in a game that is truly random, and neither of these are enjoyable. But the illusion of player agency makes the game enjoyable by bringing about the illusion of non-randomness.

All that brings us to controlled randomness as a design choice in video games. It’s the application of algorithmic randomness in a controlled manner. Controlled randomness can create the exact intended amounts of predictability and unpredictability that make your game interesting. With controlled randomness, players can have agency, not just the illusion of agency. With controlled randomness, players can become better at doing whatever it takes to overcome a difficult challenge. With true randomness, in contrast, the player would not be able to learn in systematic ways. They’d just have to try again and again until they get lucky.
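One well-known pattern for controlled randomness, offered here as an illustration of the principle rather than an example from the text, is the “shuffled bag”: instead of drawing each item independently, the full item set is shuffled and dealt out, so the order is unpredictable while droughts and floods are strictly bounded. A minimal Python sketch:

```python
import random

def bag_randomizer(items, rng=None):
    """Endlessly yield items in 'shuffled bags': each bag is a random
    permutation of the full item set, so every item appears exactly
    once per cycle -- unpredictable order, controlled distribution."""
    rng = rng or random.Random()
    while True:
        bag = list(items)
        rng.shuffle(bag)
        yield from bag

# Within each bag of 7 draws, every piece occurs exactly once:
pieces = "IJLOSTZ"
draws = bag_randomizer(pieces, random.Random(1))
first_bag = [next(draws) for _ in range(7)]
assert sorted(first_bag) == sorted(pieces)
```

Modern Tetris’s “7-bag” piece generator is a famous real-world instance: players can’t predict the next piece, but they can learn that no piece will ever be absent for more than twelve draws, and build skill on that guarantee.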

There’s another important aspect with respect to controlled randomness. As Morgan McGuire and Odest Chadwicke Jenkins point out in Creating Games, randomness can relieve the player of the burden of planning ahead as far as possible and playing as smart as possible. As a rough guideline, if you design your strategy game or your management simulation game as a pure intellectual challenge on the order of chess or the game of Go, with the barest minimum of randomness involved, its entertainment value for less ambitious, non-professionalized players will be drastically reduced. Moving in the other direction, in contrast, more randomness makes your game increasingly entertaining for less ambitious players and increasingly suited for relaxation and socializing. Rare is the drinking game that gets under way over a game of chess.

What’s more, if you introduce controlled randomness, the way you do it can and will affect player perceptions of plausibility and realism, which will be copiously demonstrated through pen & paper role-playing rule design in Beat 4. Representations.

The second major design choice to create unpredictability is constrained contingency. Now, what’s contingency? Reality, as it appears to us, is contingent. Things that happen could have happened differently, but didn’t. Certainly, nothing happens without a cause. But that doesn’t lead to visible or experienced determinism because many if not most of these causes cannot be known with certainty, or known at all. One principal reason is that over time, ever smaller differences in initial conditions bring about ever greater differences in outcomes. So shit happens, as it were, but deterministically. “True” first causes either fade into perfect obscurity or cannot be known at all because they reside in a domain of physics that does not allow precise measurements under any circumstances.

We find examples of constrained contingency in memory games, Sudoku games, or crossword puzzles. Everything is set, deterministically, which is the “constrained” part. How the game or match evolves, though, is entirely contingent on the actual succession of moves. This is also discussed by Greg Costikyan in Uncertainty in Games—all the letters in a crossword puzzle are fixed, but solutions are contingent on letters from crossing solutions that have already been found. Thus, for these and similar game types, everything is contingent on the first move, which wasn’t even of great importance at the time. Players just have to begin somewhere. And while this first move—the first memory tile a player turns over, e.g.—is certainly “caused” by something along a chain of physio-psychological events, where a certain set of neurons and no other fired in the brain which caused the hand to pick up that tile and no other, these causes are not retrievable and cannot be known.

Think of the first communication between strangers, as an example loosely based on Niklas Luhmann’s “double contingency.” Anything you say to initiate a conversation with a stranger, let’s say at a party hosted by Jack and Jill, is again “caused” by a chain of physio-psychological events. Yet, whatever you say—from “So, have you also been invited to this party?” to “Do you know Jack and Jill from work?” to “What’s your opinion on Everett’s many-worlds interpretation?”—doesn’t necessarily follow from anything and you could have said something else entirely. But all the communication with your new acquaintance that follows will flow from your initial utterance—just like your memory game will flow from the first tile you turn over and your crossword puzzle will evolve from the first letters you drop into the grid (but toward fully predetermined final game states).

To recapitulate, the unpredictability of your game is based on constrained contingency when all its game states except the initial and the final game state are determined by the player’s very first input. In memory games, Sudoku games, crossword puzzles, and a whole range of casual and online games, everything is always already fixed, every element has its place. There is only one correct final game state, and that game state is perfectly predictable. What isn’t predictable at all is how the game will proceed from its initial to its final game state because there is a huge number of possible paths and you cannot foresee the individual path a player will take. Constrained contingency is a form of unpredictability design that countless players enjoy immensely, certainly no less than any of the other forms of unpredictability design in games.
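The structure of constrained contingency can be made concrete with a toy model (hypothetical, for illustration only): a puzzle whose elements are all fixed in advance, where only the order of moves is open. Many paths, one perfectly predictable final state:

```python
from itertools import permutations

# Toy model of constrained contingency: four fixed puzzle slots.
# The final game state is fully determined; only the order of moves
# -- the path -- is open, starting from an arbitrary first move.
slots = ("A", "B", "C", "D")

paths = set(permutations(slots))              # every possible solving order
final_states = {frozenset(p) for p in paths}  # the state once all slots are filled

assert len(paths) == 24        # 4! = 24 distinct paths ...
assert len(final_states) == 1  # ... all converging on one final game state
```

A real crossword grid or memory board has vastly more slots, and thus a path count far too large for any player to foresee, which is precisely where the perceived unpredictability comes from.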

Let’s proceed to the third and final design choice, input complexity. As has been mentioned, this will lead us deep into the rabbit hole of games as systems, with subsystems, emergence, self-organization, and human agents. Naturally, this design choice is about the right amount of predictability, but it is also about the right amount of determinacy and indeterminacy. Thus, as promised, we’ll pick up our discussion of undetermined games again and the design questions associated with it.

Being undetermined, as a reminder, has nothing to do with determined or undetermined endings, but with the number of possible paths between initial and final game states. Undetermined game types include all kinds of match-based games, from board games like chess or the game of Go to arena games and strategy games in general, as well as simulation and management games.

These games’ flavor of unpredictability is very different from constrained contingency. Their initial move or moves are not contingent in that sense at all, on the contrary. Early moves in such games or matches are indeed highly predictable, even standardized. It’s over time that moves begin to deviate from the “norm” and become new, unrecognizable, and exciting. Later game states are not independent of earlier conditions, certainly. But they’re not contingent upon them either. Instead, each move, from the very first move to the last, is contemplated and deliberate. Every move is made for a reason.

All this is brought about, first and foremost, by input complexity.

Input can come from many sources, from simple processes to complex AI to human players. On the system level of a game, all these count as subsystems of a game. Thus, we need to look at the critical role that subsystems play in games to be able to proceed.

Crucially, the more subsystems directly or indirectly interact with each other in non-trivial ways, the more the overall complexity of the system as a whole will rise, and with it its unpredictability. For reasons discussed in Beat 1. Rules, rules are categorized into rule sets and rule sets into game mechanics, which in turn support the elements of the game loop. Rule sets and game mechanics embody important subsystems of a game. They interact with each other, directly or indirectly, and with the player or players, who are also subsystems of the game in ways to be discussed shortly. The greater the number of involved subsystems and their interactions in a game, the more unpredictable its game states will become over time. With rising unpredictability, again, it will become more and more likely that the output from these interactions will be unexpected, interesting, and entertaining. But it will also become more and more likely that they create unexpected output that isn’t desirable at all, or healthy. When that happens, the game might behave erratically and crash or become unplayable, or create impressive opportunities for hacks and exploits.

To illustrate this process, let’s revisit our first-person shooter Shroom! and cook up a simple example that will get us into trouble. Items for our ammunition subsystem—clips with a certain number of bullets—are dropped by killed AI enemies. Items for our health subsystem—health packs—are distributed across the map. The player can pick up both ammo and health packs; the former replenishes their ammunition supply, the latter replenishes their health for a health pack’s standard replenishing value, but not beyond the player character’s regular maximum health. All this should be familiar. But now you want these subsystems to interact with each other, to keep the player within the flow channel as discussed earlier in this level. Your solution is to have a predetermined, fixed amount of health packs on any given map and a variable amount of ammunition, and the latter is determined by a ratio of remaining enemies to remaining health packs. When the level starts, with 100 % enemies and 100 % health packs, the ammunition dropped by killed enemies will be a default value of 100 % which stands for a predetermined number of bullets. When the percentage of remaining health packs becomes less than the percentage of remaining enemies, killed enemies will drop more ammunition conforming to that predetermined ratio. And vice versa: if the percentage of remaining health packs is greater than the percentage of remaining enemies, killed enemies will drop less ammunition.

You can certainly see where this is going. To start with, when the player picks up the last remaining health pack on the map, your game will explode that instant over a division by zero under the hood. Moreover, experienced players are likely to catch on to this system and exploit it by systematically using up health packs for minimal damage in order to collect the maximum amount of ammo their weapons and backpacks can hold.
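The doomed interaction rule can be written down as a deliberately naive sketch (the function name and numbers are illustrative), and the crash falls right out of it:

```python
def ammo_drop(base_bullets, enemies_left, packs_left):
    """Naive Shroom! rule: scale the default ammo drop by the ratio
    of remaining enemies to remaining health packs."""
    return int(base_bullets * (enemies_left / packs_left))  # BUG: packs_left can be 0

assert ammo_drop(10, 50, 50) == 10  # 100% vs. 100%: the default drop
assert ammo_drop(10, 40, 20) == 20  # packs scarcer than enemies: more ammo

# The moment the player grabs the last health pack on the map:
try:
    ammo_drop(10, 30, 0)
except ZeroDivisionError:
    pass  # the subsystem interaction just crashed the game
```

The exploit is equally visible: since the drop scales linearly with scarcity of health packs, a player who burns packs deliberately drives the multiplier up at will.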

Of course, subsystem interactions designed to keep the player in the flow channel are rarely that naïve, and problems are much harder to spot. But you have to realize that the difficulty of catching undesirable interactions rises exponentially with the number of interacting subsystems, the number of interactions between these subsystems, and the quality of these interactions.

But, one might ask, why don’t we just introduce a few more rules that solve the problem? One rule catches the division by zero exception, another rule checks how much health has actually been restored to the player character, and a third rule deals out some punishment when tracking data from the second rule suggests that the player is trying to exploit the original rule.

The answer is that this is a terrible idea for a whole raft of reasons. To start with, quick-fix rules in particular and special rules with a very narrow range of application in general are “clutter rules” in Ludotronics parlance. They violate the principle of skill, style, and substance matter as discussed in Level Three: Plurimediality. They violate it because they’re not professional, they don’t contribute to original, recognizable patterns, and they don’t work toward the rule system as an integrated whole. In other words, they’re neither productive, nor interesting, nor in sync with anything else. In The Art of Computer Game Design and elsewhere, Chris Crawford refers to such rules as “dirt.” This is a superb designation.

But that’s not the end of it. Quick-fix rules introduce additional complexity into the system as a whole. As such, it’s extremely likely that they will turn on you somewhere down the road in wildly unexpected ways. What’s more, quick-fix rules sugarcoat the fact that the original rule or rule set that made them necessary wasn’t such a stroke of genius in the first place and should be removed, or at least extensively reworked. Finally, a rule system should be carefully pruned at all times to remove unnecessary rules, much like a manuscript should be pruned to remove superfluous, unnecessary, and redundant words (you get the idea). Just like a writer searching for the mot juste, you should always strive for the rule that’s exactly right.

Then, one might ask, why do these subsystems have to interact with each other in the first place? Why not play it safe and keep the ammunition subsystem and the health subsystem isolated from each other? Without interaction, there won’t be any trouble!

This is an excellent question that will lead us to insights that feed back into our original question of possible paths and input complexity. It’s all about the individual interactive playing experience, and about the set of possible player actions and the consequences of these actions as a fundamental part of this playing experience.

Imagine three different Pac-Man games. In the first game, the ghosts act along deterministic rule sets that are easily observable and fully predictable (when and how and how often to switch directions and such). In the second game, the ghosts act according to interaction rule sets that process the input (movement and action) of other subsystems, i.e., of the player and of each other. In the third game, finally, the ghosts are in idle mode until triggered by player movement or player action, upon which they execute scripted actions according to limited (local) interaction rule sets, also processing input from the player and from each other. If released that way, the first game might be called Puzzle-Man, the third CoD-Man, and the second one is of course the Pac-Man game we know. None of these experiences is necessarily better or worse than any other, and they differ considerably with respect to their design characteristics in other territories. But they also differ with respect to the number of possible playing experiences—which, within the context of the Interactivity territory and our current topic, is the matter of possible paths from the initial to the final game state.
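The three ghost behaviors can be caricatured in a few lines of Python. (Grid coordinates, function names, and the trigger radius are invented for illustration; none of this is the actual Pac-Man rule set.)

```python
def puzzle_man_ghost(tick, patrol_path):
    """Puzzle-Man: fully deterministic -- the position depends on the
    tick counter alone, never on any other subsystem's input."""
    return patrol_path[tick % len(patrol_path)]

def pac_man_ghost(ghost_pos, player_pos):
    """Pac-Man: processes another subsystem's input (the player's
    position) every tick -- here, one crude step toward the player."""
    step = lambda g, p: g + (p > g) - (p < g)
    return (step(ghost_pos[0], player_pos[0]), step(ghost_pos[1], player_pos[1]))

def cod_man_ghost(ghost_pos, player_pos, trigger_radius=3):
    """CoD-Man: idle until the player trips a local trigger, then
    execute a scripted pursuit with limited, local interaction rules."""
    distance = abs(ghost_pos[0] - player_pos[0]) + abs(ghost_pos[1] - player_pos[1])
    if distance > trigger_radius:
        return ghost_pos  # idle: the player hasn't triggered anything yet
    return pac_man_ghost(ghost_pos, player_pos)

assert puzzle_man_ghost(4, [(0, 0), (0, 1), (1, 1)]) == (0, 1)
assert pac_man_ghost((0, 0), (5, 5)) == (1, 1)
assert cod_man_ghost((0, 0), (10, 10)) == (0, 0)  # far away: stays idle
```

Only the middle variant feeds another subsystem’s state into its decision every single tick; the first ignores all input, and the third consults it only inside a narrow, scripted window.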

Puzzle-Man or CoD-Man levels have only limited or very limited sets of possible paths that lead from the initial to the final game state. How limited depends on a variety of factors. A Puzzle-Man level might offer only one solution that, while being ingeniously designed, nevertheless reduces the number of possible paths to just one. A CoD-Man level might have several possible paths, from stealth to frontal assault. Or it can be the other way around: the CoD-Man level, when heavily scripted, might have only one possible path, and the Puzzle-Man level might have more than one possible path because it offers more than one possible solution.

Then, a Pac-Man level. While the number of possible paths is certainly limited, there’s still a substantial number of them and, therewith, different interactive playing experiences. Moves and actions have consequences that affect subsequent moves and actions with their own consequences and so forth. But there’s a catch. While each ghost processes input from other game agents, or subsystems, their behavioral rules are not complex and cannot surprise. Hence, players are able to optimize their gameplay in such a way that they replay a Pac-Man level until they have found an optimal path, and then stick to this path with great precision. (According to Jamey Pittman’s seminal “The Pac-Man Dossier,” this also applies to the ghosts’ “frightened” mode in earlier levels as the pseudo-random number generator for random turns always starts with the same seed value.) We can describe this in several ways. First, the level has become fully predictable. Or, the player has found a dominant strategy. Or, there’s only one possible path left between that level’s initial and final game state.

Now, as you might have guessed already, there’s a fourth game that we haven’t mentioned yet. A game with a virtually infinite number of possible paths, and with them an almost infinite number of playing experiences. This game, of course, is Go-Man.

What provides Go-Man with an almost unlimited number of possible paths are emergent properties. But what about Pac-Man, Puzzle-Man, or CoD-Man? By degrees, doesn’t Pac-Man display a narrow range of emergent properties too?

To get to the bottom of that, we need to introduce the concept of emergence in some detail.

As a term, emergence has become so ubiquitous that its mere mention tends to evoke a familiarity with the actual concept that is rarely warranted. Also, there’s a considerable range of different scientific views and approaches regarding what emergence means and how it comes about—in other words, there is no scientific consensus about the definition of emergence or about its properties, requirements, and effects. Therefore, it shouldn’t come as a surprise that within game design lore, the terms “emergence” and “emergent properties” often appear somewhat ill-defined. There is Conway’s Game of Life, probably the most widely known game featuring emergent properties and a great tool to study emergent computational behavior. But as it is utterly devoid of interactivity after the initial game state, it can’t help us design interactive playing experiences.
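Since the Game of Life comes up here, a minimal implementation shows just how simple its rules are compared with the gliders, oscillators, and whole computing machines that emerge from them. (This is a standard set-based formulation of Conway’s B3/S23 rules, not code from the text.)

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life. `live` is a set of
    (x, y) cells; a dead cell with exactly three live neighbours is
    born, and a live cell with two or three live neighbours survives."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A horizontal 'blinker' flips to vertical and back -- a period-2 oscillator:
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(blinker) == {(1, -1), (1, 0), (1, 1)}
assert life_step(life_step(blinker)) == blinker
```

Note that the player’s only contribution is the initial pattern; after that, exactly as the paragraph above says, there is no interactivity left, only observation.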

One very useful and productive description of emergence has been championed by Peter A. Corning. What differentiates emergent behavior from synergetic effects, following Corning, is that the constituent parts of a system that display emergent properties are “modified, reshaped, or transformed by their participation in the whole.” Corning also differentiates between emergent phenomena in self-organizing systems and emergent phenomena in “purposeful organizations” as systems that have a functional design.

This is a direction that can help us apply the concept of emergence more productively, and, in turn, help us answer our question of why subsystems in games should interact with each other in the first place.

Games are certainly not self-organizing systems. If they display emergent properties, they do so as purposeful systems with a functional design. Perhaps the most important purpose of a game as a system is to create environments from which unexpected, interesting, and entertaining outcomes can arise that are never fully “exhausted.”

That’s what’s meant by emergence in games. That’s the purpose of emergence as a functional design element. We design subsystems that interact with each other because only interactions between subsystems can facilitate the necessary range of possible paths to create environments from which unexpected, interesting, and entertaining outcomes can emerge that are never, or almost never, fully exhausted.

We can apply this immediately to eliminate games like Pac-Man, Puzzle-Man, or CoD-Man from our roster of games with emergent properties. These games’ constituent parts are neither modified, nor reshaped, nor transformed by their participation in the whole. The player might be, to a certain extent, and it’s not that the other subsystems—ghosts, puzzle elements, ultranationalists—remain unaffected by the player’s actions and decisions. But, following Corning, all that puts these games not into the category of emergence, but into the category of synergetic effects.

To get to the next point, let’s get a bit technical for a paragraph or two. Emergence, as we understand it now, arises when each participating subsystem—and that includes the player or players—constitutes the environment for every other participating subsystem, and each subsystem reacts with internal events and feedback loops to external states that have been altered by the actions of other subsystems, thereby modifying its own state, its decisions, and its input for upcoming game states.

This environment can have greater or lesser potential for emergence, depending on several factors. What was modified and reshaped through subsystem interactions by our ill-fated Shroom! rule set—the number of bullets dropped by enemy AI—is certainly less spectacular than what can be modified and reshaped through interactions from more complex subsystems, and especially players.

But the complexity of a subsystem isn’t the whole story. Being human, players are certainly more complex than rule sets and game mechanics and have a greater potential for interesting modifications and changes. But, as subsystems of your game, players are not necessarily more complex than your rule sets and game mechanics because their input isn’t arbitrarily complex. Their input is exactly as complex as your game allows it to be. For the system as a whole, the complexity of a subsystem equals the complexity of the input from that subsystem. That’s a crucial point. The complexity of a subsystem behind its input is completely invisible for the system.

But that’s still not the whole story! For the system, the complexity of a subsystem’s input is defined by the quality and quantity of environmental changes it causes, in other words, how many other subsystems are affected by this input and how strongly. Thus, while the complexity of a subsystem’s input isn’t necessarily invisible for the system as a whole, it doesn’t count for anything as long as it doesn’t impact other subsystems in ways that substantially alter the state of the system as a whole.
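The amplification through system-wide feedback loops can be illustrated with a toy model (all numbers and the coupling constant are arbitrary, chosen only for the sketch): two subsystems each read the other’s previous state as their environment, and a minuscule perturbation in one of them grows round after round until it dominates the whole system.

```python
def perturbation_growth(rounds, k=0.6):
    """Run two coupled subsystems twice -- once unperturbed, once with
    a tiny 1e-6 difference -- and return how far the runs drift apart.

    Each subsystem updates its state from the *other* subsystem's
    previous state (its 'environment'); with coupling k, the feedback
    loop amplifies the perturbation instead of damping it."""
    a, b = 1.0, 1.0 + 1e-6     # perturbed run
    ra, rb = 1.0, 1.0          # unperturbed reference run
    for _ in range(rounds):
        a, b = a + k * b, b + k * a
        ra, rb = ra + k * rb, rb + k * ra
    return abs(a - ra)

assert perturbation_growth(1) < 1e-5    # at first, the difference is negligible
assert perturbation_growth(30) > 0.5    # thirty rounds later, it's macroscopic
```

Whether such amplification is a good thing (surprising, interesting outcomes) or a bad thing (runaway states, exploits) depends entirely on what the loop is amplifying, which is the design problem in a nutshell.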

In games like Go-Man, this is exactly the case. The complexity of the subsystem “player” or, more importantly, the permitted complexity of its input, is several orders of magnitude higher than the input complexity that other games or game types allow, especially our three Pac-Man games, with momentous consequences for the system as a whole. In a scripted CoD-Man level, for example, player input is restricted in ways that make the player as a subsystem actually less complex than the game’s rule sets and game mechanics.

Within this model, moreover, both types of subsystems—rule sets and game mechanics on the one hand and human players on the other—can be “modified, reshaped, or transformed by their participation in the whole” in ways we can compare. For rule sets and game mechanics, most modifications will appear as modified value states (but games can be designed in ways that whole rule sets and even game mechanics can undergo change under certain conditions, defined by the rules). For the player subsystem, “changes in value states” can correspond to changed assumptions and intentions; it can correspond to switching to a different strategy; and it can correspond to so-called “out of the box”–thinking—different types of change that are increasingly difficult to perform. Through system-wide feedback loops, moreover, any modification within any type of subsystem can be amplified over time, which can be a good thing or a bad thing.

Naturally, the moment we leave this abstract system-level model behind, those correspondences between rule sets and players become metaphors, but they’re still useful metaphors. The range and complexity of the permitted input from rules, rule sets, game mechanics, and players as well as each input’s permitted impact on game states defines the possible number of individual interactive playing experiences and shapes these experiences in original and profound ways.

Taken together, games that exhibit emergence belong to a club whose members include games like Go-Man, and complex strategy, management, or simulation games in general. But still, not all members are alike. Why are the game of Go, chess, and multiplayer strategy games in general among its most distinguished members, with far stronger and more interesting emergent behavior than others? The reasons are that the player subsystem’s permitted input complexity and impact are higher, and, crucially, that there are not one but two or more player subsystems at work. Which, in turn, leads to another interesting aspect. In games that have more than one player subsystem, and where the permitted input complexity is high enough, the input from these player subsystems need not be of equal quality and equal impact on other subsystems, because their individual potential for game-related internal changes can differ. Which, outside the system-level model, will be experienced as the difference between weaker and stronger players. Who, as AlphaGo taught us, need not necessarily be human.

That’s why you want to design subsystems that interact with each other in ways that make emergence possible. Depending on game type, complex interactions among subsystems that allow for emergence can make your game more interesting, more challenging, and more enjoyable because the number of possible paths, and with it the number of individual interactive playing experiences, is never or almost never exhausted. This gives your game’s replay value a stratospheric boost.

But it comes with a price tag. You have to control for unwanted and potentially destructive effects, which becomes staggeringly difficult with greater numbers of subsystems and greater input complexity. Thus, as you can imagine, putting this model into practice comes with unique challenges. There are two things that you need to keep in mind.

The first thing to keep in mind is that a system that has emergent properties doesn’t have to be complex. Games with emergent properties can have just a few subsystems, each with a manageable number of rule sets and manageable input complexity, which together produce increasingly interesting game states over time.

The second thing to keep in mind, when your system has to be complex, is that you should never design a complex system from scratch. According to “Gall’s Law,” formulated by systems and child development theorist John Gall, complex systems that work have always evolved from simple systems that worked. (Which doesn’t mean that simple systems always work.) Complex systems designed from scratch, according to Gall, will never work and cannot be made to work by patching them up.

Which is truly great advice. If you want to design a game with emergent properties that is also complex, like a multiplayer strategy or multiplayer role-playing game or any game with dynamic in-game economics, then you should start with a simple system with a manageable array of interacting subsystems. When this simple system works, extend and expand through variation and selection in a tightly controlled manner, step after step. With each successful step, your game will become a little more complex and will, hopefully, sooner or later display emergent properties. When things fall apart, you can always backtrack to your last successful step. From there, you can either trace forward to find and eliminate the cause of the meltdown, or try something different altogether.

Then, there’s modularity. As Tracy Fullerton points out in Game Design Workshop (and that is also great advice), you should keep your subsystems modular at all times. Because then, and only then, will you be able to make controlled changes in one subsystem and subsequently observe and measure how these changes propagate throughout the system and affect other subsystems. This ties in with the rule management system we developed in Beat 1. Rules, which strongly encourages modular subsystem design in the form of rule sets and game mechanics.
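This advice can be made concrete with a tiny, self-contained sketch (all names and numbers invented for illustration): run the same toy system twice, apply a controlled change to a single subsystem, and measure when the change reaches a downstream subsystem.

```python
# Toy two-subsystem pipeline (invented for illustration): a "spawner"
# subsystem feeds an "economy" subsystem. Because the subsystems are
# modular, we can change one parameter in one subsystem and measure
# how the change propagates downstream.

def run(spawn_rate, ticks=10):
    spawned, resources = 0, 0
    history = []
    for _ in range(ticks):
        spawned += spawn_rate        # spawner subsystem
        resources += spawned // 2    # economy subsystem reacts to spawner
        history.append((spawned, resources))
    return history

baseline = run(spawn_rate=2)
tweaked = run(spawn_rate=3)   # controlled change in ONE subsystem

# Measure propagation: at which tick does the downstream economy diverge?
econ_base = [resources for _, resources in baseline]
econ_tweak = [resources for _, resources in tweaked]
first_divergence = next(
    t for t, (b, w) in enumerate(zip(econ_base, econ_tweak)) if b != w
)
# The spawner changes immediately; the economy diverges one tick later.
```

In a real game you would diff far richer game-state histories, but the principle is the same: modular boundaries are what make such controlled comparisons possible in the first place.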

As a kind of PSA, trying to design a complex game with emergent properties from scratch, or designing it without a modular structure, might not be conducive to your physical, mental, and financial well-being.

All this has ramifications for game AI as well, which will be our final topic in this beat. To exemplify these ramifications, let’s examine two different types of AI that stand out with regard to emergence on the one hand, and synergetic effects on the other.

With respect to emergence, there are deep-learning AI systems like AlphaGo and its successors. While supported by very complex deep neural network technology that comprises value networks, policy networks, tree search algorithms, and so forth, the AI itself starts out with a handful of simple rules like “match this” or “predict that” and becomes more and more capable, and more and more complex, through learning over time. Until, like AlphaGo, its internal complexity and the complexity of its input for a given system, like the game of Go, can compete with that of human agents.

In terms of synergetic effects, there are very simple game AI types that follow a handful of very simple rules in the tradition of Pac-Man’s ghosts. These AI agents always remain as simple as they started out. But, together with player input, they display synergetic effects that can provide the illusion of emergence. This provides a great number of possible paths and playing experiences, and it produces far fewer design and balancing problems than true emergence. But, as it isn’t truly emergent, it is susceptible to gameplay optimization, as discussed earlier with our Pac-Man levels.

Here’s an example. Among the documents you should have read as a game designer, if not outright studied, is Warren Spector’s game design document for Ion Storm’s original Deus Ex game (preferably the GDD version 13.12 from 1999, a pruned-down version of its 500+-page predecessor; you will find it posted in forums or on blogs if you search for a file named “Deus Ex 1312”). If you go to the “NPC Base Behaviors” chapter on page 192 of this document, and there to the “AI Families” section, you will find lists with very simple behavioral rules for minor NPC classes like Civilians, Thugs, Military, Animals, and Robots. Will they harm civilians or protect them? Protect themselves or others? Ignore or investigate unidentified sounds? Attack with or without warning? Flee after suffering a certain damage value, or never flee? As you can see, the rule sets for these AI families are trivial. But they facilitate synergetic effects through interactions on the system level. Their behavior, and with it the system’s behavior, is sufficiently unpredictable with regard to player input, barring gameplay optimization. (You can find more in-depth discussions of this famous example in Neal & Jana Hallford’s Swords & Circuitry and in Katie Salen & Eric Zimmerman’s Rules of Play.) If the player reloads the game and does the exact same thing they did before, it will have the same or at least very similar consequences over time that are not random, so the player can learn and gain knowledge. If the player reloads and acts differently, the game will react differently. What such simple but effective NPCs can’t do, though, is surprise us—they won’t be able to come up with a brilliant, creative “move 37” like AlphaGo did in its second match against Lee Sedol in 2016, or temporarily lose their wits, like AlphaGo did after Sedol’s brilliant move 78 in the fourth match.
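To illustrate how little it takes, here is a sketch of such an “AI family” rule set expressed as data. The class names, flags, and threshold values below are invented for illustration and are not taken from the Deus Ex document; the point is that each NPC class amounts to nothing more than a handful of boolean and threshold rules.

```python
# Invented "AI family" rule table in the spirit of the Deus Ex GDD's
# NPC classes; none of these values come from the actual document.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIFamily:
    harms_civilians: bool
    investigates_sounds: bool
    warns_before_attack: bool
    flee_threshold: float  # fraction of health lost before fleeing; 1.0 = never flees

FAMILIES = {
    "civilian": AIFamily(False, False, True, 0.1),
    "thug":     AIFamily(True,  True,  False, 0.5),
    "military": AIFamily(False, True,  True,  1.0),
}

def react_to_sound(family_name, damage_taken):
    """Deterministic per-NPC rules: the same input always yields the
    same reaction, so players can learn; unpredictability arises only
    from how these reactions interact with player input and each other."""
    family = FAMILIES[family_name]
    if damage_taken >= family.flee_threshold:
        return "flee"
    return "investigate" if family.investigates_sounds else "ignore"
```

With this, a wounded civilian hearing a noise flees, while an equally wounded soldier investigates—trivial rules individually, but their combinations across many NPCs and player actions are what produce the synergetic effects described above.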

Will we see self-improving AI like AlphaGo employed in video games of the future? Absolutely. But these should be games that need this kind of AI. Keep It Simple, Soldier. Also, if you want to introduce complex, self-improving AI in your game, prepare for Balancing Hell.

Then again, many games already have more than two, and up to many thousands of, self-improving intelligent agents as subsystems that interact with each other—namely, MMOs. Input complexity, though, is fairly limited in most cases. For MMOs, self-improving AI agents will probably shine, and they might make it possible—in any game, not just MMOs—to raise the permitted input complexity for human agent subsystems and with it the range of possible player interactions a game can permit.

Bringing self-improving human agents together with self-improving AI agents would make it easier to provide in its entirety what the Ludotronics paradigm calls the “5C” range of game agent interactions: Competition, Coopetition, Cooperation, Collaboration, Cocreation. Not many games employ the whole 5C range for their agents, human or otherwise, and even MMOs rarely encourage and stimulate them as effectively as Eve Online does.

When the time comes that you can enrich your game with all kinds of self-improving AI agents, you should not content yourself with using plain old competition vs. cooperation mechanics in your game. Instead, you should use the whole 5C range to make your game more interesting, more challenging, and more engaging in the Interactivity territory.

Fig.4.15 5C Range of Game Agent Interactions

With that, we can close our journey along controlled randomness, constrained contingency, and input complexity as three major design strategies at your disposal to create unpredictability in this territory. Yet, there’s one more thing, picking up on a previous thought. Life is truly emergent insofar as every interaction as communication between subsystems comes with a feedback loop that makes the communicating agents revisit and change their internal states (where zero change is also a possible outcome), and these changes have consequences for all the states that follow.

Stories, in contrast, are nothing like that, and story-driven games cannot have true emergence as a rule. Dramatic structures, plots, and dénouements cannot emerge. In fact, stories are highly controlled emergence simulators and, as such, incapable of actual emergence. These simulators can be quite robust, like myth, where the simulation can survive outrageous changes over time and remain surprisingly intact. Or they can be very fragile, like a modernist short story, where the whole simulation can be knocked over by one minor change.

It’s right here, possibly, where attempts at emergent, or “procedural,” storytelling must fail, even those that experiment with the more robust form of mythical abstraction. When stories are controlled simulations of emergence, and emergence is created by interacting subsystems, then an “emergent story” would literally require a controlled simulation of emergence to emerge from interacting subsystems. Make of that what you will, but it probably doesn’t signal a sound investment opportunity.
