Critique: The Chemistry of Game Design

Recently, Daniel Cook aka Danc from Lost Garden wrote another essay on Gamasutra. It is very interesting. Danc compares Game Design to alchemy and urges us to develop a more systematic approach akin to chemistry. I certainly agree that we need to work on a theory of game design. However, I am a bit concerned that the model he proposes might have some flaws. Here is a list of comments on his essay:

People are not Atoms
Chemistry deals with dead matter: identical atoms which will always react the same way when placed in the same situation. Because of that, chemistry is able to simplify a lot. As Danc correctly points out, Game Design is centered on living beings. Living beings are diverse, chaotic and smart. They are very difficult to predict. A rigid approach like Danc’s effectively reduces people to zombies (as he himself illustrates) in order to work. A lot of important detail is lost in the process. We should prepare ourselves for the eventuality that, in the end, it might yet be impossible to create a working model for game design, because our knowledge of the human mind is very limited. Danc’s model must be incomplete by definition. What can’t it do? After all, chemical models have their limits too.

Are those really Skill Atoms?
Atoms are the smallest, indivisible elements in chemistry. They have a very solid definition, and it is very easy to tell an atom apart from something bigger, like a molecule. By imposing rigid structures on fuzzy mental concepts, Danc’s model runs into the same problems as Memetics: it is quite difficult to break the subject down into indivisible elements with a solid definition. We end up applying the same label to things with different properties. In his Tetris Skill Chain, Danc calls both “Rotating” and “Faster slot recognition” Skill Atoms. I think those two are very different from each other. The first is a sudden realization; the second is a skill which cannot be “completed” but persists throughout the lifetime of the game. Players get gradually better at “Faster slot recognition”, but it is impossible to master it. You cannot even recognize “levels” of mastery there unless you introduce some kind of metric – which in turn probably doesn’t apply to “Rotating”.

Is that all?
Actually, “Faster slot recognition” provides a clue here. Some skills have a rather binary nature; they are information – you either “get it” or not. Others seem to be rather fuzzy. At the extreme, there might be “invisible” skills, so fuzzy that it is impossible to put them into words. If you could put them into words, it would be possible to learn to ride a bike or dance Tango just by reading a very well written book about it. In fact, riding a bike and dancing Tango both require practice. The result of that practice is a bunch of skills, but it is impossible to put them into words. There is already a theory about this: those “invisible”, fuzzy skills are called Tacit Knowledge. Tacit Knowledge seems like a threat to Game Design theory, but it is at the same time the very thing that makes games such an exciting medium – because they can convey something which is difficult to experience otherwise.
Another idea I wrote about just recently works the other way around: humans are intelligent because they make predictions. The mental model part of the diagram is actually the most important and prominent one. It might very well be that in reality players “leapfrog” many of the atoms Danc presents at a time, getting ahead of the game very quickly. A similar effect has been recognized in how children learn languages – they seem to use new words and grammatical constructions without much practice or many examples. They figure it all out in their minds without the need for rehearsal and feedback. Danc’s model, on the other hand, seems based on Behaviorism, which is slowly being replaced by more sophisticated systems, most prominently in language acquisition.

Is it universal?
I have experienced problems applying Danc’s and other simple models to more complicated games. In general, it all seems to work when we talk about pressing buttons and observing reactions. But what about long-term strategies in games like Civilization or Starcraft? The simple cause/effect relationship is rendered useless if I try to model the strategic thought process behind a 20-minute Starcraft match. Players consider a huge number of factors when making decisions. The overall strategy consists of thousands of button presses and observations. No game is like another. Players make assumptions and predictions and modify their strategy as they go. Everything happens simultaneously. In the process, players even attempt to construct a mental model of their human opponent. Using Danc’s theory in this case is like doing architecture one atom at a time. We need different models with lower resolutions for more complex games – just as Quantum Physics and the Theory of Relativity apply to objects of different scales.

What is it good for?
A way to improve the theory might be to start thinking about why we are doing it. How would this kind of Skill Chain Diagram fit into a game design process? What kind of insights might game designers draw out of it? How do we recognize problems? What does an ideal Skill Chain Diagram look like? Danc already mentioned what impact burnout can have. Is this the purpose of the Skill Chain Diagram – to check against reality and cross out the skills which have been missed because of burnout? Would it be impossible to recognize those faults without such a diagram? In the end, Alexey Pajitnov made Tetris without it. How could the Diagram be improved to pragmatically augment the Game Design process?

Is it tested?
One important problem with game theories (and theories in general) is that they sound great until they are tested against reality. I’m curious whether the Tetris Skill Chain Diagram was actually tested against a real person learning Tetris. That’s why I went the other way around and began with “Inductive Game Design Research” after all.

Of course in the end, we live in exciting times and it is extremely motivating to see other people like Danc also work towards a better understanding of the medium. I certainly do hope that we can start a discussion to gradually improve and diversify our theories, models and tools.

Krystian Majewski

Krystian Majewski was born in Warsaw and studied design at Köln International School of Design. Before that, he worked on a mid-size console project for NEON Studios in Frankfurt. He helped establish a Master course in Game Design and Research at the Cologne Game Lab. Today he teaches Game Design at various institutions and develops independent games.

9 responses to “Critique: The Chemistry of Game Design”

  1. Susan

    Actually, it is proven. What Cook describes is amazingly similar to basic learning theory. His chain of skill atoms is like a rubric for a lesson plan. Educators use this kind of model all the time to teach. A good game is basically a good teacher.

  2. Krystian Majewski

    What I would like to know is if this specific Tetris Skill Chain diagram has been empirically tested. If there are people out there who learn Tetris just like that.

  3. steve

    Great points here. I’d like to comment on the paragraph: “Are those really Skill Atoms?”

    I would agree that these aren’t really “atoms”, and a better term should be used. Maybe just “skill”. After all, “Reach Platform” isn’t even trying to be an atom – it’s obviously a combination of jumping and knowing that you can stand on platforms.

    However, you could argue that “atom” is indeed a good word, because as far as the player is concerned, it’s a discrete skill they have learned. “Riding a bike” could be an atom, and “Riding with no hands” another. So it’s not atomic in terms of what the skill involves, but atomic in terms of how the player learned it. This is very player-centric, and thus will probably have some variation from player to player. But I think, if you know your target audience, you can generalize pretty successfully.

    And using that player-centric definition, it can help you as a designer. If you find that skill A is not being learned by players, that’s probably a sign that it’s not “atomic” enough. So, you should break up that skill into smaller ones, A0 and A1. Then, you can find ways to teach A0 and A1, and once they learn those, they can finally learn A. Of course, if A0 is still not small enough, you should divide even more!
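
    In code, the idea might be sketched roughly like this (a minimal illustration; all skill names and the `split_skill` helper are made up for this example, not taken from the essay): skills form a dependency graph, and a skill that players fail to learn gets split into smaller prerequisite skills.

```python
# Skills as a dependency graph: each skill maps to the list of
# skills a player must learn first. "Reach Platform" from the essay
# would then be a node depending on two smaller skills.
skills = {
    "jump": [],                    # no prerequisites yet
    "stand_on_platform": [],
    "reach_platform": ["jump", "stand_on_platform"],
}

def split_skill(graph, name, parts):
    """If players fail to learn `name`, it is not 'atomic' enough:
    introduce smaller sub-skills and make them its prerequisites."""
    for part in parts:
        graph.setdefault(part, [])  # register new sub-skills
    graph[name] = list(parts)       # A now depends on A0, A1
    return graph

# Suppose playtests show "jump" itself is not being learned:
split_skill(skills, "jump", ["press_button", "time_jump"])
```

    The same split can be applied recursively, exactly as described above: if A0 is still too coarse, it gets divided in turn.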

  4. Krystian Majewski

    I wouldn’t be so sure that this model is really player-centric. For a player, learning to ride a bike is totally different from learning that pressing Button A rotates the Tetris piece. Yet in this model, both are treated equally.
    It seems more like the model is not player-centric but analytic-centric: it focuses on those skills which are easy to analyze.
    So breaking down Skill A into A0 and A1 will not necessarily result in a more exciting game, but in one which yields a more exciting diagram.

  5. Danc

    Hi krystian!

    This is a wonderful post on the essay. Everyone raises some good points. Let’s see…

    What is it good for?
    There are two uses of skill atoms.
    - Clarifying a design: This is a nice simple framework with very few exceptions. By mapping your design onto this framework, you are forced to be explicit about things that would otherwise involve hand waving. I find that the simple act of writing something down can help clarify my thought process. There are lots of similar techniques. Skill chains tend to be a bit more comprehensive than most since they clearly include all elements in a functional framework. When you add that new piece of art, you can clearly assign it a purpose. Admittedly, some folks enjoy their fuzzy thinking and random urges to add random things to their random project. In a strongly artistic medium like game design, not all creators are interested in analytic thought. :-)

    - Regression testing a design: I’m a big believer in personal observation of players. However, there are situations where this is not possible. In any game involving a larger number of players, very long term mastery of skills, or very short term mastery of skills, automated logging becomes interesting. Skill chains become a logical way of instrumenting a game so that you know you are getting useful, clearly organized information back.

    It is this second use where I see the most promise for skill chains. Game design is a highly iterative activity. If we can give game developers easy-to-implement feedback mechanisms that let them understand where they fail in a rapid clear-cut manner, they can evolve their games towards a fun state more quickly.

    People are not atoms
    This is true, yet simple models can still be useful. Both macro and microeconomics offer simplified models of human behavior that can be quite powerful with a limited scope of activities. Just because we can’t model everything doesn’t mean we shouldn’t attempt some measurable aspects of game players. If early scientists had given up because they had incomplete theories, we would never be where we are today. :-)

    There are many limits to skill atoms. I think the strategic processes that some players engage in while playing Chess or Go (or StarCraft or Civilization) are quite difficult to instrument. We can only measure and model that which has happened previously. If each game results in unique situations that require unique improvised skills, then skill atoms aren’t very useful.

    The flip side of this is that there are many strategies we can measure. For example, in Civilization, the strategy of exhausting the technology tree is explicitly rewarded. My rule of thumb is that a skill chain will never be complete since the players are always smarter than the developers. However, for most games, it can be complete enough to offer insight on the majority of the player’s actions.

    Has the Tetris skill chain been tested?
    I provided this diagram primarily as an example of what a skill chain might look like. I’m sure it could be improved if someone started instrumenting Tetris and testing how the results held up over a larger population. I used my own experience learning Tetris as the foundation.

    There are some obvious dependencies that can be determined logically. It is mechanically impossible for the player to regularly complete lines if they can’t rotate or move, for example.

    Lovely stuff,

  6. ChronoDK

    As far as I know alchemy is not a real science – chemistry is. That is why I think using alchemy as the metaphor for this model is rather brilliant, and why I feel that your first point of critique is a bit harsh. I don’t think the model was ever meant to provide an exact scientific description – it’s alchemy, not chemistry.

    Other than that – nicely done critique. Those are points I will keep in mind when (not if) I start using the model. Your blog has just been bookmarked in the same folder as DanC’s :)

  7. Krystian Majewski

    Thanks for your answers.

    ChronoDK: I think I need to apologize a bit. Of course, my intention was not to argue against scientific thinking. On the contrary: science means developing falsifiable theories, so the way to do science is to be as skeptical as possible. This can be easily misunderstood, especially if the skeptic is not diplomatic enough. ;-)

    As for Danc’s comments:

    I agree that having a model to work with has an advantage over “hand waving”. I think the strength of your model is that it helps a game designer be more consistent with his vision than he otherwise would be.
    However, I’m a bit unsure how well the results of the model will represent real player behavior. Having some experience in web design, I have often encountered very organized people making very logical and scientific predictions of how users would use their website. Those predictions were shattered as soon as real people sat behind the monitor to actually use the website. Users didn’t read the text they were supposed to read, they clicked the wrong buttons and did everything backwards. I would like to see how your model holds up when tested against reality.

    Also, I have noticed that your systems tend to focus very much on measuring things which are difficult to measure. In this system you try to measure what the users learn; in your system of gameplay notation you try to measure the reward they receive. It might be a good idea to focus more on what you can really measure, like the game itself. Developing a model of a game is difficult enough; there is no need to make things even harder by adding a model of the player on top of it. The disadvantage would be that the results of such a model need more interpretation. The advantage would be that, as a game designer, you don’t have to subscribe to a specific model of the player’s mind to be able to use the system. I will post an alternative to your system of gameplay notation soon, stay tuned.

  8. Danc

    I look forward to seeing what you come up with, Krystian.

    One of the lessons I’ve learned about logging is that it helps to have a clearly defined set of questions that you are interested in answering. A framework like skill chains helps you get to those interesting questions without reinventing the wheel every time.

    Skill atoms ask some really basic questions:
    - When do players start performing important actions in the game?
    - When does the player experience designer-specified feedback intended to cue them into learning a new set of actions?
    - When do players stop performing interesting actions?

    These are very measurable activities that can be logged in the game, not pie in the sky theory. The theory, however, suggests what to measure and why it is important. One without the other is flying blind.
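
    These three questions map directly onto simple log queries. A minimal sketch (every name and event here is hypothetical, made up purely to illustrate the shape of such a log): for each player and skill-atom action, record when it first appears, when it last appears, and how often it occurs.

```python
from collections import defaultdict

# Hypothetical event stream logged by a game: (player, action, timestamp).
events = [
    ("p1", "rotate", 1.0),
    ("p1", "rotate", 2.5),
    ("p1", "complete_line", 30.0),
    ("p1", "rotate", 31.0),
]

first_use, last_use = {}, {}
counts = defaultdict(int)
for player, action, t in events:
    key = (player, action)
    first_use.setdefault(key, t)  # when did they start performing it?
    last_use[key] = t             # when did they stop?
    counts[key] += 1              # how much did they use it?

# A skill whose last_use is early and whose count is low would be a
# candidate for "the player stopped pursuing this" in the sense above.
print(first_use[("p1", "rotate")])  # 1.0
print(last_use[("p1", "rotate")])   # 31.0
```

    Comparing these timestamps against the designer-specified feedback events would answer the second question as well.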

    Shattered predictions
    Skill atoms are not intended to be used as an a priori description of the game design. As you say, when designs meet players, designs get munged. I’ve been running usability tests this past week and am always impressed at how complex ‘obvious’ designs turn out. To imagine that you can jot down a design for skill atoms out of your head and that you’ll have created a great game before it is built is foolish.

    Instead, you accumulate skill atoms through iterative building, testing and watching players interact with a system. With the complex simulations (even Tetris!) at the heart of most games, it is difficult to predict what skills are valuable until you play. The good designer observes what behaviors are interesting and then codifies them with feedback systems so that they are accessible by and interesting to a broader population. Actions + Simulation + Feedback. In other words, they create skill atoms.

    And then you test your design. Did people learn the skill? Did they use it? For how long? Watch them, did they become frustrated or bored when the logging system said that they had stopped pursuing a skill? At this point, you revise your atoms. Ideally the design improves. :-)

    take care

  9. Krystian Majewski

    Thanks for taking your time to explain your approach, Danc. I think I have a better understanding of the model now. It is an interesting concept and I’m looking forward to the next essay.

    Until then, here is the post about the notation system I mentioned.


The Game Design Scrapbook is the second blog of a group of three game designers from Germany. On our first blog, Game Design Reviews, we describe some games we played and point out various interesting details. Unfortunately, we found out that we also need some place to collect quick and dirty ideas that pop into our minds. Hence, welcome to Game Design Scrapbook. You will encounter wild, random rantings. Many of them incoherent. Some of them maybe even in German. If you don't like it, you might enjoy Game Design Reviews more.

