Friday, 20 March 2015

Game Design #35: Game Design & Playtesting - Brent Spivey

This was originally part of an interview on "making wargames", but Brent's responses are so in-depth and useful that I didn't want them "lost" in the interview, so I have reproduced them separately here.

The question Brent is responding to is "Describe your design stages"....

I tend to start from one of two very different directions depending on what initially inspires me.

The first is to start from a mechanical standpoint. If there's a mechanic [or group of mechanics] that particularly interests me at the moment, I'll ask myself what sort of game it would be best at representing. Usually this part of the process is very organic, and the genre or game type will jump out at me without my really having to 'ask the question' at all. Sometimes this will happen while trying to help one of my design buddies create a mechanic for one of their games, and I'll say, 'You know, that would actually be perfect for x!'
The other is to start from a setting, genre, or game type, decide on a play style, and either create a set of mechanics to achieve what I want or use existing tools that I've been saving for a rainy day. I then go about the business of creating what I consider to be the core design of the game.

After the Concepts and Core Design
After I have all the basic mechanics, interactions, and essential moving parts in place, I go through the process of doing the math to get the values for the various skills, traits, characteristics, and even the non-priced components of the system. By non-priced, I mean things under the hood of the design that the end user will never see but that have a real impact on the overall balance of the design [i.e. how much does an action cost?]. These values then get pushed around a bit so that certain components, combinations, and classes maintain a specific ratio in relation to one another. Why? As a serious geek, I have a belief that the use of the golden ratio in design adds to the feeling of balance, 'rightness', and even beauty that transcends the actual math. These small shifts and nudges don't have any adverse effects on game balance, as the numbers that I begin with are very large for the purpose of maintaining accuracy and specificity. But all that's probably more than most people reading this will want to know!
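As a rough illustration of the kind of nudging Brent describes (the unit names, numbers, and halfway-nudge rule below are my own invention, not his actual method), point costs from an initial math pass might be pulled toward a golden-ratio spacing like this:

```python
# Hypothetical sketch: pulling raw point costs so each unit tier sits
# nearer the golden ratio relative to the tier below it. All names and
# numbers are invented for illustration.
PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

# Large raw values from the initial math pass preserve precision
# while the costs get "pushed around a bit".
raw_costs = {"militia": 1000, "regular": 1550, "veteran": 2650}

def nudge_to_ratio(costs, ratio=PHI):
    """Move each tier's cost halfway toward ratio * (the tier below)."""
    tiers = sorted(costs, key=costs.get)
    adjusted = {tiers[0]: costs[tiers[0]]}
    for lower, upper in zip(tiers, tiers[1:]):
        target = adjusted[lower] * ratio
        # Nudge halfway toward the target so the original math still dominates.
        adjusted[upper] = round((costs[upper] + target) / 2)
    return adjusted

print(nudge_to_ratio(raw_costs))
```

The nudge is deliberately gentle: the original calculated costs still carry half the weight, so the adjustment shapes the feel of the price ladder without overturning the underlying balance math.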

Bringing it to the Group
I’m fortunate enough to meet with other designers and enthusiasts each week for a 5-6 hour session at a local Barnes and Noble. Everyone comes with a different skill set and area of interest or expertise. We take over the cafe area, get our espressos [and maybe a little cheesecake!], and dive into some game design. 

Once I’ve finished my core draft and initial calculations, I bring my design document to these guys without the mathematical values and let them take a crack at calculating the various points themselves [using whatever method and/or logic they prefer]. This lets me not only see how close their points and calculations are to mine, but it also lets me see how they view the ‘economy of the system’ without having played it yet, having only read through a basic draft. Once the calculations are complete, the conversations begin. Ideas, concepts, and methods are challenged.

After initial points and values are agreed upon, everyone will take the rules home to get some plays in. If we have enough time, we do some gaming that evening. My philosophy for these first tests differs from many other designers in that I do NOT give them any testing guidelines or areas that I want them to focus on. I simply want them to play the game and come back the next week with their impressions. This is important to me for several reasons:

I don’t taint the process by coloring their view of the game, pushing them in a direction with my expectations, or having them focus on particular elements. In short, it keeps the testers from being biased. It also lets me see if my ‘hidden’ design goals, like promoting a particular style of play or experience, are met.

First Debrief

When we next meet, it’s time to talk about the game and get the initial impressions. These come in the form of what was liked, what was disliked, how the game moved and flowed, perceived balance issues, and all the other things one talks about when looking at any game design. I start with the most common observations, those noted by multiple testers, good or bad, and we attack them as a group. I think it’s important to note that we never pull any punches with one another when it comes to our designs. There are no hurt feelings here, as we don’t attack one another but instead focus on making the best game possible, period. In fact, it’s an important part of my design philosophy that even the ideas that work well should constantly be challenged. Working well or being good isn’t enough when it could be better or possibly great. Sometimes things will work fine mechanically, but the overall game lacks that fun factor. This can require major revisions or a complete reboot.

Subsequent Tests and Computer Simulations
The next weeks and months of testing are more structured and look at specific design elements. During this time we also collect data on each other's playing styles, win-loss ratios, force composition, strategies, and tactics. Once it's felt that a certain degree of balance has been reached, the game rules and other data are put into a computer simulation program and large-volume virtual testing is performed. While the program we use most certainly doesn't rival Google's AI or feature true 'deep learning', it's pretty slick and extremely useful for spotting exploits, tendencies, and patterns. Small tweaks to the game are then made based on this information.
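Brent doesn't describe the program itself, but the core idea of large-volume virtual testing can be sketched in miniature. The dice duel, unit stats, and balance band below are all invented for illustration: play a toy match-up thousands of times and flag any win rate that drifts outside an acceptable range.

```python
import random

# Toy stand-in for automated playtesting: two hypothetical unit profiles
# fight a simple opposed-roll duel many times, and we check whether the
# observed win rate drifts outside an acceptable balance band.
def duel(attack_a, attack_b, wounds=3, rng=random):
    """One game: each side rolls d6 + attack; the loser takes a wound."""
    wa, wb = wounds, wounds
    while wa > 0 and wb > 0:
        roll_a = rng.randint(1, 6) + attack_a
        roll_b = rng.randint(1, 6) + attack_b
        if roll_a > roll_b:
            wb -= 1
        elif roll_b > roll_a:
            wa -= 1
        # ties are a stand-off; roll again
    return "A" if wa > 0 else "B"

def batch_test(attack_a, attack_b, games=20000, seed=1):
    """Run many games with a fixed seed and return side A's win rate."""
    rng = random.Random(seed)
    wins_a = sum(duel(attack_a, attack_b, rng=rng) == "A" for _ in range(games))
    return wins_a / games

rate = batch_test(attack_a=3, attack_b=2)
print(f"A wins {rate:.1%} of games")
if not 0.45 <= rate <= 0.55:
    print("Flag: match-up looks skewed; revisit costs or stats.")
```

A real tool would script whole armies and strategies rather than a single duel, but even this toy version shows the appeal: a +1 attack edge that feels minor at the table produces a lopsided win rate over thousands of automated games.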

The Outside World
Once the last round of internal changes has been made, I send playtest copies out to various groups of players around the world. While this is obviously useful from a hard data standpoint, I find that it's even more valuable from a 'player perception' standpoint. How they feel about the game, which concepts aren't delivered clearly, perceived balance issues, assumptions based on presentation, and general gameplay impressions are instrumental in choosing the wording and presentation of the final draft. Gameplay changes can still be made at this point, but the overall system is usually sound.

I thought there were a lot of really interesting points made by Brent. The ones that stood out to me were:

(a) the emphasis on "the math." Ever picked up a game that didn't feel 'right'? For example, shooting in LOTR, or the US infantry move-and-fire rule in FoW. This is looking at the "cost" and worth of things - not only a troop "point system" but of actions themselves - right at the START of the design process, rather than something tacked on at the end. This is something I've often had a gut feeling about but never heard so clearly articulated.

(b) initially giving no playtest guidelines or requirements - to avoid bias - and to check if the gameplay style/tactics the author was trying to promote occurs naturally.

(c) all game elements (even good ones) are questioned

(d) playtesting then becomes specific

(e) use of computer programs to playtest.  This is something I had never previously considered and it offers several advantages.
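To make point (a) concrete, here is a toy expected-value calculation for pricing actions (the action names and dice profiles are invented, not from any published game): two actions that feel very different at the table can turn out to deliver identical expected output, and so warrant identical costs.

```python
# Hypothetical sketch of costing actions by expected output rather
# than by feel. Both actions below are invented for illustration.
def expected_hits(dice, hit_on):
    """Expected hits when rolling `dice` d6, each hitting on `hit_on`+."""
    p_hit = (7 - hit_on) / 6
    return dice * p_hit

aimed_shot = expected_hits(dice=2, hit_on=3)  # few dice, good odds
snap_fire = expected_hits(dice=4, hit_on=5)   # many dice, poor odds

# Both average the same hits per activation, so both merit the same price.
print(aimed_shot, snap_fire)
```

Of course variance, range, and interaction with other rules matter too, which is exactly why doing this arithmetic at the start of design, as Brent describes, beats eyeballing it at the end.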

Again, I'd like to thank Brent for his very thorough (and useful) responses.  Remember to check out the rest of his interview here.  


  1. Really enjoyed seeing your insights on this.

  2. I'd be interested in looking at the math and the software that group uses.

  3. Glad you enjoyed it. It was both fun and interesting to examine my own processes and try to translate them into some condensed responses [condensed being a relative term].

    1. I think it was a great post - and you should consider more, targeting areas of interest.

      I'd like to do a "think aloud while designing a game" post but the problem is I have a lot of experience with game mechanics, but very little with making the game. (I.e. I could do the commentary, but making the game to start with... meh)

      Perhaps consider doing something on your Bombshell games site?

    2. I like that idea a lot. I'll have to put it out there and see what people are interested in hearing about.

  4. I'd be interested in reading people's thoughts on the Ludology episode about heuristics.

  5. Fascinating approach to playtesting.
    The initial free phase has a lot in common with "brainstorming".
    I imagine this allows tuning to occur quite late in the development process.

    Computer testing is also interesting.
    I've noted elsewhere (Playtesting isn't scientific testing) that testing helps to ensure that match-ups feel about right (And I've encountered more than a few games which fail the feel test).
    I can see how an automated simulation can go a lot deeper in ensuring game outcomes match real-world expectations (for example how many Shermans to fight each Tiger).

    Like most simulations, it is possible to run the probabilities on paper, but the sums soon reach fiendish levels.
    Interested readers can search for the work of Frederick W. Lanchester.
    He built armoured cars when he wasn't solving paper wargaming problems.
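The Lanchester model mentioned in the comment above is easy to sketch numerically. Under his "square law" for aimed fire, fighting strength grows with the square of unit count, which is exactly the "how many Shermans per Tiger" question. The kill rates below are illustrative, not historical data:

```python
# Toy Lanchester square-law integration (illustrative rates, not
# historical data): dA/dt = -rate_b * B, dB/dt = -rate_a * A.
def lanchester_winner(n_a, rate_a, n_b, rate_b, dt=0.01):
    """Euler-integrate mutual attrition until one side is wiped out."""
    a, b = float(n_a), float(n_b)
    while a > 0 and b > 0:
        a, b = a - rate_b * b * dt, b - rate_a * a * dt
    return ("A", a) if a > 0 else ("B", b)

# One "Tiger" (side B) shooting 3x as effectively as each "Sherman":
for shermans in (1, 2, 3):
    side, left = lanchester_winner(shermans, 1.0, 1, 3.0)
    print(f"{shermans} Shermans vs 1 Tiger -> {side} wins ({left:.2f} left)")
```

With these invented rates the square law puts the break-even point near sqrt(3), about 1.7 Shermans: one loses, two or more win, and the survivors' margin grows quickly with numbers, which is the pattern an automated playtester can check against real-world expectations.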