Playing your own game regularly is a hugely important part of making it. So much of game design and creativity is subjective, and requires a tight interplay between input and response, to create the right sensation, or ‘game feel’.
Many studios run regular internal playtests – taking an hour or two each week to get the whole team to play through the latest build, and send feedback to some poor soul to analyze.
When speaking to game studios, I often hear frustrations with how these internal playtests go. The feedback from them is often vague and unhelpful – quickly becoming little more than bug reports.
Teams also end up with too much feedback! Piles of unsorted notes, which will take weeks to go through and react to.
Internal playtesters then get frustrated – reporting the same issues week after week, and not feeling like their feedback is being recognised or valued. They then eventually drop out, and attendance dips.
In this post we’ll look at how to get more value from your internal playtests. By the end of this article you’ll be running tighter, more useful internal playtests and making the most of your colleagues’ time.
What are internal playtests?
Many studios have weekly, or twice-weekly playtests where everyone tries the latest version of the build and gives their feedback.
By default, that feedback is often collated by email or stuck into a Slack channel, with everyone giving their own impressions or thoughts.
Although it’s important to play your own game regularly, this can quickly become unmanageable – the feedback is unstructured, scattershot, and often vague or unactionable.
Internal playtests have limits
There’s also a limit to what can be learned from an internal playtest. You and your team have different background knowledge, motivations and abilities to your real players, which will impact how you play the game. You are not your players.
A lot of research objectives are out of bounds – it’s impossible to test tutorials (because you already understand how it works), explore the new player experience (because you are an expert player), or balance things like difficulty.
Poor scoping quickly gets out of hand and generates a lot of useless data, so time spent scoping playtests properly is hugely valuable. We’ll look at how to address that shortly.
Fewer, higher-quality tests
These playtests are a continual stream of messy data. To get a handle on it, we need to slow down that stream.
First, question how often you need to be running playtests. Are you running them weekly out of habit? For team-building reasons? Or because you genuinely need data that often?
The true cadence of playtests should be based around ‘how often do we have a decision that we need playtest data to help answer’. That can change based on how frequently your studio can make decisions and iterate – but I think very few studios could handle that on a weekly basis.
This will reduce the flood of feedback and data coming in, leaving more time to analyse, prioritise and action each piece.
You may still want to run weekly play sessions, for team bonding – but not every play session has to be a ‘playtest’. Use different descriptions to set expectations within the team about which playtests are ‘for fun’, and which are to gather valuable data, and explain that the approach for each of these will differ.
Agree on some objectives
Without direction, feedback is scattershot. You’ll get a mixture of bug reports, long-held grudges, conspiracy theories and random thoughts.
This isn’t helpful – the point of playtests is to help decision-making. Each playtest should have a goal: what do we want to learn from it, and what decisions will that help us make?
This means that before running your playtest, you need to agree what to learn from it. Have a call with other leads (or a think by yourself), and make a shortlist of ‘what decisions do we have coming up soon’ or ‘what decisions have we made recently that we’re most uncertain about’.
Then decide which can be answered with internal playtests, and prioritise that list to pick the most important thing to learn now. Inform your prioritisation by considering ‘what do we absolutely need players to get’, ‘what’s new to us that we’re not confident about’, and ‘what can we actually take action on’.
Once you have a focus area – whether it’s a specific map, a specific feature, or the feel of a mechanic – you can have more constructive discussions about the best way to explore it.
🚀 Do this for your next playtest: Have a discussion with other leads about upcoming or recent decisions, and rank them by risk to generate focus areas.
Gather data efficiently
Having decided what you want to learn, you can now make smarter decisions about how to gather feedback. Rather than having a slack channel for people to chuck their unstructured feedback into, you can use structure to shape the feedback before it reaches you.
A winning combination is:
- Giving a briefing document to playtesters, to let them know what areas you want them to focus their attention on
- Letting the team play as normal
- Creating a survey (such as with Google Forms) that asks specific questions related to the research objectives, with a space for other off-topic thoughts too.
Putting the burden of adding structure to the data onto playtesters will make analysing it much quicker and more efficient – all of the feedback about each element will be in one place, and easier to theme and prioritise.
This can be supplemented by observing people playing – walk around the office, watch over people’s shoulders, ask some questions about what you’re seeing – as there will be plenty of interesting things that playtesters forget to write in the survey!
🚀 Do this for your next playtest: Make a survey and use that to gather feedback from your next playtest.
Convince people to take part by showing the impact
Over time, people lose enthusiasm for taking part in playtesting – especially when the benefits of putting effort into them are not clear.
Because of this, we need to show the value of running internal playtests.
Some of the techniques we’ve discussed already will make this easier. Having fewer, more focused playtests means it’s more likely you’ll take action based on each one, and that it’ll lead to game design changes.
People want to feel heard, so it’s worth the time writing up some notes about what you learned from each internal playtest, and sending around an email about how it’s led to changes to the game. This will help convince people to take part again!
Making playtesting actually have value will make it easier for people to give up their time to do it, and increase attendance.
🚀 Do this for your next playtest: Send an email around the company after the playtest, describing the changes made as a result.
Run better internal playtests
Internal playtests are an important tool in your arsenal. Although they don’t replace genuine player-focused playtests, they can be a rough steer on what issues players may report, and help prioritise what needs further attention. They will also help with team building, and get everyone familiar with the state of the game.
By setting objectives, gathering data efficiently, and showing the impact of internal playtests, they can become a reasonably simple and effective task, and a core part of your iterative game development process.
For more guidance on how to run different types of playtests, sign up to the newsletter for more articles like this each month!
Thanks to Nat Herman for her help putting together this article