Playtests generate a lot of data. A typical study might lead to behavioural observations, survey responses, interview answers, telemetry and more.
Analysing that data usually uncovers a lot of issues – places where the experience players are having doesn’t match the experience we intend to create.
I would love it if every one of those issues could be addressed – but that’s often not realistic.
In real life, game development is busy and it’s difficult to make time for running playtests, let alone reacting to the results from the playtest. Backlogs are usually packed full of bugs, feature requests, and new content.
We need to be realistic that player experience is just one factor of a successful game, and ‘actually finishing the game in a reasonable timeframe’ is also important.
This means you need to prioritise your findings.
Here are two approaches for prioritising playtest issues. The first is for usability issues – where players didn’t understand or weren’t able to do something they were meant to do. The second is for more subjective feedback, such as player opinions, which needs more care.
Prioritising usability playtest findings
Usability findings are areas where players failed to understand what they were meant to do, or were unable to act in the way the designer expected. For example – getting lost on a level, or not knowing how to upgrade your armour.
These are objective facts – they definitely happened, and are definitely contrary to the intended experience we want players to have. So dealing with them is reasonably simple.
For each usability issue you discover, ask yourself three questions:
Is this a core part of the experience?
Some issues impact a core part of the experience: jumping in Mario, shooting in Halo, merging in Candy Crush, knowing how to progress. These are important systems that are core to the game experience, and if we find issues with them, we should pay attention.
Other systems in games are secondary, not required for progression, or less important. These we can pay less attention to.
So, the first question to ask yourself is whether the issue impacts a core part of the experience or not. Then move along the flowchart to question two.
Was this issue hard to overcome?
Some issues cause significant problems for players – blocking progression, requiring them to ask for help (or google ‘where do I get the moon pearl in A Link To The Past’), or making them extremely upset while they try to overcome it. Others are no big deal.
The second question to ask yourself is ‘was this hard for players to overcome?’. Perhaps agree in advance with your team how long it’s appropriate for players to be stuck – 30 seconds may be fine, but 30 minutes probably isn’t. Then move on to the third question.
Was this issue persistent?
This one causes some confusion, so I want to be clear about what I mean by persistent.
Once a player has encountered an issue, does that same player get caught by it again, or is it solved forever?
For example, if a player doesn’t know how to upgrade their character, but remembers it from then on once you show them, that is not persistent.
On the other hand, if they keep forgetting how to do it – it is persistent.
What I don’t mean by ‘persistent’ is “how many players did this impact”. With qualitative research, that is a dangerous thing to prioritise on – as qualitative research is focused on learning which behaviours exist, not reliably sizing them.
So, ask yourself whether it’s a learnable thing to overcome, or whether it’ll always be difficult for players.
Come to a conclusion
Having asked yourself these three questions, you can use the flowchart to work out the priority of your issue, from Critical to Low.
Then use that score to prioritise against other tasks in your backlog – critical issues definitely need action, but whether the others get addressed will depend on how pressing your other development work is. Giving each issue a priority ensures it can be weighed appropriately against everything else in the development backlog.
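The three-question triage can be sketched as a small function. This is a sketch under one assumption – the article’s flowchart isn’t reproduced here, so I’ve assumed each ‘yes’ answer bumps the priority up one level, from Low through to Critical:

```python
def prioritise_usability_issue(core: bool, hard_to_overcome: bool, persistent: bool) -> str:
    """Turn the three triage questions into a priority label.

    Assumption (the actual flowchart isn't shown here): each 'yes'
    answer raises the priority by one level.
    """
    score = sum([core, hard_to_overcome, persistent])  # count of 'yes' answers, 0-3
    return ["Low", "Medium", "High", "Critical"][score]

# A core, hard-to-overcome, persistent issue lands at the top
print(prioritise_usability_issue(True, True, True))    # Critical
# A secondary, easily learned one lands at the bottom
print(prioritise_usability_issue(False, False, False)) # Low
```

However your own flowchart weights the answers, the point is the same: the output is a label you can sort the backlog by, not a decision made issue-by-issue on gut feel.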
How to handle player opinions
Not everything from a playtest is so black and white as usability issues. Some of the results will be vaguer and more subjective, such as “a lot of people are complaining that the sewer level is annoying”.
Because they are subjective, opinions need more care when deciding whether to act on them. Screening becomes even more important – making sure that these players accurately represent your real player base. Remember that online communities are often more vocal and extreme than your typical players, and don’t get drawn into over-valuing their opinions.
Consider the design intent
With opinions, we need to take an additional step, and consider the intended experience. Whereas a usability issue is automatically ‘broken’, some of the feelings players report will be intentional – and represent the experience we’re trying to make.
We wouldn’t want to make a Soulsborne game ‘easy’ – even if players are saying it’s hard.
With this in mind, we need to first consider the intended experience. Speak to designers about the core pillars of their game, and the vision they have in their mind for the player’s experience. Push them to define this as specifically as possible – e.g. if the game is meant to be hard, what do you expect players to understand, or be able to do, before they can progress?
Then when players are describing a section as hard, or repetitive – compare that to your intended vision.
Prioritise opinion feedback
If the feedback you are hearing is contrary to the intended experience, you can then follow a similar process to prioritise the issue, including…
- How different is the feedback from the intended experience? Just a little bit, or a significant difference?
- Is the issue causing players significant frustration?
- And how many people have this issue? Because opinions are subjective, we want to size how widespread each opinion is, rather than acting the first time we hear it. This will require quantitative work.
Using a similar framework to prioritizing usability issues, you can then decide how important acting on these opinions is, against other items in your development backlog.
Focus improves the value of playtesting
Although it can be hard to ignore feedback, we have to resist the temptation to act on every bit of it we hear.
By taking the additional time to prioritise your playtest feedback, we can focus attention on the most critical issues – improving the chance they get acted upon with enough attention to resolve them properly.
Integrate player insight throughout development
Every month, get the latest article on how to plan and run efficient, high-quality playtests to de-risk game development. And get Steve Bromley’s free course today on how to get your first 100 playtesters, for teams without much budget or time.
Plus your free early-access copy of ‘Playtest Plus’ – the essential guide to the most impactful playtests to run throughout the development of your game.