
Level Two: Interactivity


Beat 3. Revolutions

From Cycles to Circuits

In this beat, we will look at non-transitivity and feedback loops. These two elements are not related per se, but they share a common characteristic: they feed back into themselves. It’s a characteristic that stimulates and aids conflict design in different forms and different ways, and it is vitally important for designing interesting and non-exhaustive interactive playing experiences. But it’s also a characteristic that causes a range of very nasty problems that you need to take care of, primarily through compensation, or tweaking, a topic on which this beat will conclude.

When we examined rules, rule sets, and game mechanics in Beat 1. Rules, and how and why they interact, we only looked superficially at value states and how they can change, in keeping with the rules, from game state to game state. For this beat’s first topic, we will focus on particular types of values that aren’t supposed to change during gameplay: values and value sets attached to the rock–paper–scissors mechanic, or simply RPS mechanic.

RPS has a whole bunch of “loopy” characteristics but, to quell any confusion, it’s a game mechanic: a bundle of rule sets that define three or more game elements and that can be attached to any game loop element, but particularly to game loop elements related to competition or combat.

In the titular game Rock Paper Scissors, the game loop consists of one single element—playing either rock, paper, or scissors against an opponent—that is supported by one single game mechanic, the RPS mechanic. Which, in this bare-bones version, includes a rule set that governs how the three elements rock, paper, and scissors relate to each other, and a rule set for when and how the players can make their moves.

The RPS mechanic has two fundamental characteristics. The first characteristic is that it always controls three or more competing game elements. The second characteristic is that all game elements controlled by a given RPS mechanic must stand in a non-transitive relationship to each other.

Let’s illustrate that. For a transitive relationship between elements, imagine you are running a test to compare the long-range capabilities of a twentieth-century field gun, a field howitzer, and a mortar. In your first test, you find out that the field gun has a longer range than the howitzer. In your second test, you find out that the howitzer has a longer range than the mortar. Knowing that, you don’t need a third test to compare the field gun and the mortar. There’s no way that the mortar can have a longer range than the field gun when the mortar has a shorter range than the howitzer, which has a shorter range than the field gun!

That’s a transitive relationship. If A beats B and B beats C, then A will beat C. The same is true, by and large, for our artillery weapons’ capacity to clear terrain elevations, only the other way around. When the mortar can clear higher obstacles than the howitzer and the howitzer can clear higher obstacles than the field gun, then we can be reasonably sure that the mortar will be able to clear higher obstacles than the field gun.

The three elements of the Rock Paper Scissors mechanic (and this applies to every RPS mechanic) do not have that kind of relationship. These elements have a non-transitive relationship. Paper beats rock and rock beats scissors, yet paper does not beat scissors! Here, if A beats B and B beats C, then C beats A. In other words, a transitive relationship is a straight line with a beginning and an end, as in our artillery examples, while a non-transitive relationship is a cycle without a beginning or an end, as in the Rock Paper Scissors example. (We call it “cycle” to differentiate it from “game loop” and “feedback loop” terminologies; it’s also not just a circle, as will become clearer soon.)
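The difference can be expressed in a minimal code sketch (Python, with the mechanic boiled down to a lookup table purely for illustration): a transitive relationship could be stored as one comparable number per element, while a non-transitive relationship needs an explicit “what beats what” cycle.

```python
# Minimal sketch: a non-transitive "beats" relation as a lookup table.
# Each element maps to the element it beats, closing the cycle.
BEATS = {
    "rock": "scissors",
    "scissors": "paper",
    "paper": "rock",
}

def resolve(a: str, b: str) -> int:
    """Return 1 if a wins, -1 if b wins, 0 on a draw."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

# A transitive relationship, by contrast, would collapse into a single
# number per element (say, range in meters) compared with "<" -- which
# is exactly what lets one element outclass all the others.
assert resolve("paper", "rock") == 1
assert resolve("rock", "scissors") == 1
assert resolve("paper", "scissors") == -1  # non-transitivity in action
```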

Why would you want to create non-transitive relationships for game elements? The answer is fairly simple, but far-reaching. If your players must choose one element from a set of elements in your game, and there is one element in that set that is consistently more advantageous than all the other elements in that set, then most if not all players will choose that element most if not all of the time. That can be a weapon, a character, a military unit, a terrain, a branch from a technology tree, and a thousand other things, depending on game type. Which element or elements the player selects is a strategy, and if there is a strategy that has consistently more advantages than any other possible strategy, that strategy is called a dominant strategy. And you don’t want to have dominant strategies in your game! Any dominant strategy will greatly reduce the number of possible paths and the range of possible interactive playing experiences, which we discussed in Beat 2. Reactions, and it will make your game less interesting, less enjoyable, less meaningful, and less rewarding.

There are several ways to design non-transitive relationships for game elements in multi-agent or multiplayer games. To start with, you can create non-transitive relationships from scratch mathematically. To make these relationships more interesting, you can increase the number of elements, increase the number of non-transitive relationships between these elements, or both.

For example, the elements of a given set could be represented on the surface as swords, spears, javelins, archers, and cavalry. Under the hood, it’s all math.

To give you an idea (and to give you ideas), let’s take a closer look at James Grime’s “Grime Dice” and how they work, as illustrated below (Fig.4.16 and Fig.4.17). Stripping Grime’s mathematical setup of several ingenious and beautiful details like color alphabetization and word lengths, these dice display non-transitivity with a number of properties that will turn out to be very interesting for our purposes. The arrows indicate statistical advantages, e.g., a red die would beat a blue die in the long run, or two blue dice would beat two red dice in the long run.

Both illustrations are based on James Grime’s own illustrations on the one hand and, on the other, on the correction of the green–red problem through employing D10s, attributed to an (elusive) Australian mathematician by the name of Jon Chambers. (In Grime’s original D6 setup, two red dice performed better than two green dice on average, which they shouldn’t.)

Fig.4.16 Grime Dice I
Fig.4.17 Grime Dice II

In brief, these five dice form not one but two non-transitive cycles, an external and an internal one. What’s more, with two dice of the same color instead of one, the external cycle switches directions and the internal cycle stays the same! Which, in turn, has an interesting side effect: you will always find a color that performs better on average against any two colors at the same time as long as you’re allowed to determine the number of dice.

The reason is this. Whichever colors your two opponents might pick, these colors will either be next to each other on the external cycle or next to each other on the internal cycle. If the colors are next to each other on the external cycle, there will always be a color that has a better chance against both opponents on average in a two-dice game. If the colors are next to each other on the internal cycle, there will always be a color that will beat both opponents on average in a one-die game.
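If you want to check these claims yourself, a short script will do. The following sketch assumes the commonly cited face values of Grime’s original D6 set (an assumption—verify them against Grime’s own write-up before reuse) and computes exact pairwise win probabilities; it confirms the one-die cycle, the reversal in the two-dice game, and the green–red anomaly that the D10 correction addresses.

```python
import itertools
from fractions import Fraction

# Face values of Grime's original D6 set as commonly cited -- an
# assumption; verify against James Grime's own write-up.
DICE = {
    "red":     (4, 4, 4, 4, 4, 9),
    "blue":    (2, 2, 2, 7, 7, 7),
    "olive":   (0, 5, 5, 5, 5, 5),   # the "green" die
    "yellow":  (3, 3, 3, 3, 8, 8),
    "magenta": (1, 1, 6, 6, 6, 6),
}

def win_probability(a: str, b: str, n: int = 1) -> Fraction:
    """Exact probability that n dice of color a out-roll n dice of color b."""
    sums_a = [sum(r) for r in itertools.product(DICE[a], repeat=n)]
    sums_b = [sum(r) for r in itertools.product(DICE[b], repeat=n)]
    wins = sum(1 for x in sums_a for y in sums_b if x > y)
    return Fraction(wins, len(sums_a) * len(sums_b))

# One-die game, word-length cycle: red > blue > olive > yellow > magenta > red.
chain = ["red", "blue", "olive", "yellow", "magenta", "red"]
for a, b in zip(chain, chain[1:]):
    assert win_probability(a, b) > Fraction(1, 2)

# Two-dice game: that cycle reverses (e.g., blue now beats red) ...
assert win_probability("blue", "red", n=2) > Fraction(1, 2)
# ... but in the D6 set, two red also beat two olive, even though olive
# beats red in the unchanging cycle -- the anomaly the D10 variant fixes.
print(win_probability("red", "olive", n=2))  # 671/1296, slightly over 1/2
```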

When it is possible to design such beautiful multiple and reversible non-transitive relationships within the constraints of physical dice, imagine what you can do under the hood without such constraints! On the surface, then, these values can be mapped onto anything imaginable. Combat moves for beat ’em ups, military units for strategy games, weapons for arena shooters, items for puzzle games, teams for sports games, amusement park rides for management games, technology trees for 4X games, you name it. The difference between one-die games and two-dice games could be mapped to different combat situations (offense or defense), terrain modifiers, damage modifiers, and whatever you can think of that makes sense in your game.

Moreover, these mechanics offer no short-term predictability. This is another advantage that needs to be mentioned, especially in the context of our lengthy discussions in Beat 2. Reactions. Any RPS setup with Grime Dice characteristics will only provide a statistical advantage per match, not an outcome that can be predicted with certainty.

If you want to design an RPS game mechanic for your game, you can also collect transitive real-world relationships and combine them into one overarching non-transitive relationship by making them context-sensitive. Take our artillery example with its two transitive relationships for range and obstacle-clearance. These relationships are already reversed, which is a good start! Now let’s throw in field mobility and how long it takes to get them mounted and ready to fire. Additional factors we can include are cost, building time, ordnance cost, ordnance availability, damage value, and crew size. If well-balanced, these three game elements’ respective rule sets will provide different advantages in different contexts that amount to a non-transitive relationship overall.
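As a toy model of this approach, with invented ratings and context weights, and with damage value standing in as a third factor from the list above, three perfectly transitive scales can add up to a non-transitive overall relationship:

```python
# Hypothetical ratings: each column, taken alone, is strictly transitive.
#                 range  damage  clearance
UNITS = {
    "field gun": (3,     2,      1),
    "howitzer":  (2,     3,      2),
    "mortar":    (1,     1,      3),
}

# Hypothetical contexts that weight the three factors differently.
CONTEXTS = {
    "counter-battery": (3, 1, 0),   # long range decides
    "bunker assault":  (0, 3, 1),   # raw damage decides
    "mountain pass":   (0, 1, 3),   # obstacle clearance decides
}

def advantage(unit: str, context: str) -> int:
    """Context-weighted advantage of a unit in a given game state."""
    return sum(r * w for r, w in zip(UNITS[unit], CONTEXTS[context]))

for context in CONTEXTS:
    best = max(UNITS, key=lambda u: advantage(u, context))
    print(f"{context}: {best}")
# counter-battery: field gun | bunker assault: howitzer | mountain pass: mortar
```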

For reasons that will become clearer in a moment, we will call these advantages utilities with regard to current and anticipated game states. From the perspective of these utilities, even the original Rock Paper Scissors mechanic is context-dependent—only in a much simpler way, as the context for one player’s throw is fully defined by the other player’s throw and vice versa.

But, simple as it is, it again has interesting consequences. If you play a series of RPS matches, a good strategy is to try to anticipate what the other player will throw next. That, in turn, depends on what the other player anticipates you will throw next. And so on, ad infinitum. In other words, each match’s utilities and expected utilities mutually define each other.

Perhaps, once more, you can already see what direction this will take. You can use this principle to create decision-related non-transitive relationships between characters and story elements by applying game-theoretical characteristics, best represented by the famous “Prisoner’s Dilemma” example. Game theory is a mathematical model used in social sciences, particularly economics, to analyze the behavior of rational agents in conflict situations and their decisions regarding competition and cooperation.

The Prisoner’s Dilemma itself is a standard situation within that theory, though this “standard” has accumulated a good number of variations, primarily in the form of different confinement values—some use 2, 10, and 8 years, others 1, 20, and 5 years, and so on. We’ll stick with 1, 5, and 2 years for simplicity’s sake. In order to work, the original Prisoner’s Dilemma setup has to make several restrictive assumptions, but we can eliminate the causes that necessitate these assumptions by giving the setup a few narrative tweaks.

Imagine you’re a burglar and you’ve met another burglar, maybe while serving a short time in jail together or through a fence who buys and resells your loot, and you decide to pool resources and work together. Later, in the aftermath of an unsuccessful burglary attempt, you’re both caught with your illegal burglary tools, but without loot or any other clear evidence that would link you to the crime in question. So much for the setup, now the game. You’re alone in your cell, and a police officer visits you to tell you the following. If neither you nor your friend talks, both of you will go to jail for one year for the possession of burglary tools. If you snitch on your friend regarding the burglary attempt, you’ll go scot-free, but your friend will be locked up for five years. If both of you snitch on each other, both of you can look forward to staying in the can for two years. Then she tells you that, right at this instant, her partner is making your friend the exact same offer.

Remember the Rock Paper Scissors utilities mentioned above? That’s what this is about. It’s the same thing, only a lot more entertaining and with terrific potential for designing challenging gameplay situations. Your decision depends on what you think your friend will decide, which depends on what your friend thinks you will decide, which depends on what you think your friend thinks you will decide, and so on. In practice, each player’s expected utilities for competing or cooperating depend on each player’s anticipation of the other player’s behavior, just like the utilities in a Rock Paper Scissors match. Competing, which amounts to snitching in the Prisoner’s Dilemma, is the dominant strategy: whatever your friend decides, snitching leaves you with less jail time. On top of that, when both players snitch, both can only do worse by unilaterally changing their decision from snitching to not snitching. (Here, so-called Nash equilibria enter the picture, but that would lead us too far astray.)
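Laid out in code with this beat’s values, 1, 5, and 2 years plus 0 for walking free, the dominance argument takes only a few lines:

```python
# Payoff table: years in jail (lower is better) for (you, friend),
# indexed by (your decision, friend's decision).
YEARS = {
    ("silent", "silent"): (1, 1),
    ("silent", "snitch"): (5, 0),
    ("snitch", "silent"): (0, 5),
    ("snitch", "snitch"): (2, 2),
}

def my_years(me: str, friend: str) -> int:
    return YEARS[(me, friend)][0]

# Whatever your friend decides, snitching means less jail time for you ...
for friend in ("silent", "snitch"):
    assert my_years("snitch", friend) < my_years("silent", friend)
# ... even though mutual silence (1 + 1 years) beats mutual snitching (2 + 2).
```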

Fig.4.18 Prisoner’s Dilemma

In reality, though, humans have a strong tendency to cooperate, which shouldn’t come as a surprise. As a species of social animals, humans make cooperative decisions all the time, and these cooperative decisions aren’t purely rational actions in the name of one’s individual and immediate self-interest. Or, from another perspective, the social dimension is so strong that it becomes an important part of that very self-interest.

Now you can take these tools, non-transitive math, game-theoretical payoffs (another term for advantages or utilities), and infinity mirrors of mutual trust, to create intricate and challenging relationships between items and agents both on and below the surface. The more social and psychological context you build around such relationships, the less obvious and predictable player actions will become. Factoring in trust, commitment, social ties, beliefs, self-image, shame, and a truckload of other social and psychological mechanisms, strategies that dominate under the hood become much less attractive or desirable on the surface. Add to that repeated games (yet another game theory term), where each player remembers the decisions of the other player or players from earlier matches, and it becomes even more interesting and challenging from match to match in non-exhaustive ways. Games that focus on strategy have a high replay value to begin with, but this will give your game’s replay value an extra boost.
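As a toy version of such a repeated game, here is the well-known tit-for-tat strategy, cooperate first and then mirror the opponent’s previous move, pitted against an unconditional snitch, reusing the payoff table from the sketch above:

```python
# Payoff table from the previous sketch (years in jail, lower is better).
YEARS = {
    ("silent", "silent"): (1, 1), ("silent", "snitch"): (5, 0),
    ("snitch", "silent"): (0, 5), ("snitch", "snitch"): (2, 2),
}

def tit_for_tat(opponent_history: list[str]) -> str:
    """Cooperate first, then mirror the opponent's last move."""
    return "silent" if not opponent_history else opponent_history[-1]

def always_snitch(opponent_history: list[str]) -> str:
    return "snitch"

def play(strategy_a, strategy_b, rounds: int = 10) -> tuple[int, int]:
    moves_a, moves_b = [], []
    years_a = years_b = 0
    for _ in range(rounds):
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        moves_a.append(a)
        moves_b.append(b)
        ya, yb = YEARS[(a, b)]
        years_a += ya
        years_b += yb
    return years_a, years_b

print(play(tit_for_tat, always_snitch))  # (23, 18): betrayal gets punished
print(play(tit_for_tat, tit_for_tat))    # (10, 10): mutual trust pays off
```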

That should suffice for cycles related to forms of non-transitivity. For the second part of this beat, let’s turn our attention to the feedback loop. It’s a different kind of cycle, but it has similar characteristics that demand a similar design mindset.

As much as you want non-transitive relationships in your game instead of transitive ones, you want negative feedback loops instead of positive ones. To make the difference between these two immediately clear: the thermostat in your home that keeps your rooms at a pleasant temperature runs on negative feedback while the infernal squeal that fills your conference room when your mic gets too close to the speakers runs on positive feedback.

Positive feedback is a circuit or loop along which the output of a system is fed back into it as input. It often articulates itself as a runaway effect that leads to extreme values of some kind. It’s easy to build, both knowingly and unknowingly; it’s pretty robust and hard to terminate; and it is often a side effect of emergent behavior. Complex systems can create positive feedback loops without warning at any time, and both their exact nature and their effects are often unpredictable. (Which makes them a great tool for certain kinds of art.)
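The contrast between the thermostat and the squealing mic fits into a few lines of toy code with hypothetical values:

```python
# Toy feedback step: negative gain pulls a value back toward its target
# (thermostat); positive gain pushes it further away (mic squeal).
def step(value: float, target: float, gain: float) -> float:
    return value + gain * (value - target)

v_neg = v_pos = 25.0
for _ in range(10):
    v_neg = step(v_neg, target=20.0, gain=-0.5)  # negative feedback
    v_pos = step(v_pos, target=20.0, gain=+0.5)  # positive feedback
print(round(v_neg, 3), round(v_pos, 1))  # 20.005 308.3 -- converge vs. run away
```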

There are two major types of positive feedback loops in games. The first type is truly unpredictable and fed by all the math, rules, and cycles we’ve discussed so far. Once it kicks in, it creates effects that no longer depend on player actions and cannot be controlled or terminated through play. This type can only be caught through testing, a topic to be discussed later in the Proposition phase.

The second type primarily affects multiplayer games, whether against human players or game agents. It manifests itself in a runaway effect popularly known as “the rich get richer, the poor get poorer”: situations where being in the lead carries intrinsic advantages that feed back into the lead and widen it, often progressively. That’s not just profoundly frustrating for those who try in vain to catch up, it’s profoundly uninteresting gameplay as such. To avoid such positive feedback loops and keep everybody on the edge of their seats all the time, a game needs negative feedback loops that counter these effects in clever ways.

Here, good game design goes a long way. Not every solution that works is a good solution, let alone a great one. The popular rubber-band effect in racing games, where AI-driven vehicles never fall too far behind nor get too far ahead in thermostat fashion, is certainly a low-quality solution that is as transparent as it is annoying. While it is hard to come up with a better solution for more realistic racing games, it’s certainly worth a try. (Rubber banding is usually categorized as DDA, or dynamic difficulty adjustment, but true DDA is definitely more clever than that.)

For less realistic racing games, negative feedback solutions are easier to find because they are allowed and even expected to be more imaginative, even off-the-wall. The Mario Kart franchise is a renowned and remarkable example. Item boxes for leading players have a higher probability of containing less powerful and more defense-oriented power-ups. Power-ups for players that lag behind, in contrast, have a higher probability of being more powerful and attack-oriented—up to and including the entertainingly nuclear “Spiny Shell,” introduced with Mario Kart 64, which can be hurled forward toward the leading player as the most gratifying target.

This negative feedback strategy is not only wildly successful because it counters the runaway effect. It also keeps matches with a mixed field of more experienced and less experienced players enjoyable and challenging for everybody.
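Under the hood, such a rank-sensitive item table could look roughly like this sketch, with invented weights that are in no way Nintendo’s actual numbers:

```python
import random

ITEMS = ("banana", "green shell", "red shell", "spiny shell")

# Hypothetical drop weights per rank bracket: leaders draw defensive
# items, trailing players draw powerful offensive ones.
ITEM_WEIGHTS = {
    "leading":  (60, 30, 10, 0),
    "midfield": (25, 35, 35, 5),
    "trailing": (5, 20, 45, 30),
}

def draw_item(bracket: str, rng: random.Random) -> str:
    return rng.choices(ITEMS, weights=ITEM_WEIGHTS[bracket], k=1)[0]

rng = random.Random(42)
print([draw_item("trailing", rng) for _ in range(5)])
```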

Other game types need other solutions. For example, punishing encumbrance modifiers for more powerful weapons. Logistics that become exponentially complicated the more troops or terrain a player amasses. More powerful spells that inflict more powerful headaches and necessitate longer rest periods. A berserk mode that kicks in after receiving substantial amounts of damage. And so on.

Another strategy is bottleneck design. It puts a crushing obstacle toward the end of the game that gives trailing players or the trailing team a substantial chance to catch up. Famous examples are board games in the tradition of Pachisi where only direct throws can bring pieces “home,” or the dart game’s “checking out” and “final double” rules.

There’s a time-tested, game type-agnostic heuristic you can follow. If the player has to add assets in order to win (troops, territories, weapons, loot, friends, powers, favors, resources of any kind), make the game easier to play with fewer items of the same kind. If the player has to get rid of items in order to win (cards, balls, debts, encumbrances of any kind, maybe clothing depending on what you’re into), make the game easier to play with more items of the same kind. This principle can kick in right from the start or be delayed until a threshold value has been reached.
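In code, the accumulation variant of this heuristic could be a simple cost curve, here with an invented exponential rate and a delayed threshold:

```python
# Hypothetical snowball damper: every asset above a threshold makes
# each further action a bit more expensive for the leading player.
def action_cost(base_cost: float, asset_count: int,
                threshold: int = 10, rate: float = 0.05) -> float:
    over_threshold = max(0, asset_count - threshold)
    return base_cost * (1.0 + rate) ** over_threshold

# At the threshold you pay base cost; at 30 assets, about 2.65 times as
# much -- "logistics become exponentially complicated" as a formula.
print(action_cost(100.0, 10), round(action_cost(100.0, 30), 1))  # 100.0 265.3
```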

To wrap it up for this beat, a few words on compensation. You can compensate for problems with fudging, which is a bad thing, and you can compensate for problems with tweaking, which is a good thing.

All your math under the hood should be rock-solid. If the math refuses to work out as intended, don’t create special rules that will make it look like it’s working out on the surface. Not only would such rules be clutter rules or dirt, as discussed in Beat 2. Reactions; special rules to repair other rules will also inevitably lead to weaknesses that beg to be exploited. Chances are, some of these weaknesses will lead to dominant strategies, which will necessitate even more patches with clutter rules. And don’t fall back on designing rules as arcane as telecommunication plans or health insurance options. You will never preempt possible exploits through obfuscation.

Above all, fudging and clutter rules will give you a definition of living hell when it comes to tweaking rules and values—and tweak you must.

As a ground rule, symmetrical games need less tweaking over time than asymmetrical games. For a highly symmetrical game like soccer, rule changes are very rare. In the twentieth century, the entire set of soccer rules got along just fine for decades until a general shift toward defensive playing styles triggered a round of rule changes to encourage more offensive strategies and tactics. Compare that to baseball! Baseball has major or minor rule changes almost every year.

The reason is this. In highly symmetrical games like soccer, the defending team and the attacking team do the same thing, basically, only in opposite geographical directions. So when a new technique comes along, it will benefit the attacking and the defending team equally and cancel out. That way, a new technique will make the game more interesting, not less. There are exceptions, of course, like the aforementioned rise of successful defensive soccer strategies. Baseball, in contrast, is so asymmetrical that a new technique or tactic or strategy or even slight advances in equipment will benefit either the attacking team or the defending team, but never both. That way, every half-inning will be dominated by the attacker or the defender, depending on who profits from that change. This makes the game profoundly uninteresting because it makes it much more predictable, and the rules are quickly amended to balance everything out and make every half-inning as interesting as it should be.

Now take a highly asymmetrical game like baseball, make it even more asymmetrical, add a third party, and you get StarCraft. The necessity to keep everything balanced, so that each species beats each other species 50 percent of the time on average with players of equal skill, while huge masses of players invent new techniques and tactics every minute, makes maintaining such a game almost more demanding than designing and creating it in the first place.

Non-transitive RPS relationships, with StarCraft units as a great example, are asymmetrical by nature. Provided the game leaves room for new tactics and strategies, they will, like the rules of baseball, regularly demand tweaking, even and especially after release, no matter how much prototyping, playtesting, balancing, and QA effort was put into them before the gold master finally left your premises. And then you will find out that every fudge and every clutter rule you implemented has been a ticking time bomb all along.
