We run playtests to make games better. The success of a research study should therefore be measured by impact – ‘did this playtest improve the game’. That requires decision makers (designers, producers, senior leadership) to be listening to our studies, and ready to act on what we learn.
When benchmarking whether your research team is successful, it’s easy to fall into the trap of measuring outputs (number of studies run, number of participants) rather than the much harder-to-measure outcome: “has this changed the team’s direction in a positive way?”. On console/PC games, playtests are most impactful pre-launch, which is exactly when impact is hardest to measure (unlike mobile and live-service games, where retention data is available to us).
Impact requires teams to act on our findings, so running the playtest is just the first step of a successful study. In this article we’ll look at some impactful ways of getting decision makers to care about research, so that we can deliver on making games better.
1. Align research with what they care about already
We need designers, producers, and creative leaders to pay attention to our work. However, these roles are often over-stretched, working to tight deadlines and forced to strictly prioritise their time.
Although everyone agrees the player experience is important, it’s rarely urgent – and immediate incentives force teams to prioritise elsewhere (for example, their boss is shouting at them to get something completed). More urgent demands mean we can’t rely on “it’s good for players” as an argument for why we should run a playtest, or why the playtest findings should make it into the backlog.
Our first step should be to understand what is important to decision makers, by asking questions such as “what are you working on currently?”, “what are you unsure about?” and “what do you think is the biggest risk currently?”. Understanding how success is measured for their role is also an important part of your toolkit.
I’ve found through experience that producers are often focused on “will the quality be good enough, and will it be ready on time?”, which means we can emphasise how the study will help prioritise backlogs, reduce wasted effort, and make the best use of limited development time. Designers often need support to justify their decisions in spaces where everyone has an opinion. Leadership is often focused on the potential commercial success of the game, which is another framing we can consider when justifying our studies.

Understanding what’s important to our stakeholders, and using that to prioritise which playtests happen when, is essential – my most successful relationships with teams usually start with regular roadmapping to achieve this.
For more significant cultural change, crises – underperforming launches, missed milestones, retention issues – create openings for a more player-focused culture. By understanding what’s important to decision makers, we can create opportunities to run high-impact research.
2. Open up the playtest for observation
Humans are social creatures, and we naturally pay more attention to ‘stories’ than data. Seeing players interact with games first-hand creates memories that stick with our design and production colleagues – they are much more likely to remember the pain of watching a player get stuck on an encounter for 30+ minutes than if they’d just read about it in a report.
To create memorable moments, we want to open up our playtests for the team to observe. For lab-based work, this can be done with observation rooms behind one-way glass, or by streaming sessions virtually. For remote teams, streaming the sessions live allows people to watch from their desks (and this can be encouraged by setting up a dedicated space in the office to watch, and buying snacks).
Game development is busy, and it’s hard to find time to watch playtests. I’ve found in practice teams often don’t watch recorded videos (and the teams that do are my favourite to work with ❤️❤️❤️). Using the ‘live’ aspect of streams to build a shared experience – the whole team watching and commenting on the sessions together – encourages teams to prioritise watching, and creates the opportunity for memorable moments.
3. Make ‘what does it mean’ collaborative
Synthesis – turning raw data into meaning – can be a slow process (although still essential – I don’t yet trust this can be automated by AI tools). I’ve previously shared a video of the process I go through to do this.
In addition to your formal synthesis, there’s huge value in getting design colleagues to also go through a more informal synthesis process – encouraging them to share their notes and observations from the sessions they observed, grouping them with similar observations, and prioritising them. This is not only helpful for the designers (who are being exposed to more of the playtest data), but also valuable to help you ‘tune into’ what is resonating with them, and what themes are important to interrogate further or highlight in your own synthesis.
I run these as interactive workshops, where colleagues take it in turns to share an observation with the team and theme it live – either with physical post-its, or on a digital platform such as Miro.
This can also be a good early opportunity to catch misconceptions – conclusions your design colleagues are drawing that you don’t believe are accurate – or to address potential objections (such as “I don’t think this person had really played our game before”) before they start to grow.
4. Push the team into action
Most research studies end with a report (I’ve written about what to include in a research report before). Reports are useful because they are shareable, and can be a good reference for the future, to understand why decisions were made. However they aren’t the most impactful way of presenting our conclusions.
If you send a report, it’s likely teams will only skim it. If you present it, you can guarantee slightly more attention. But to encourage teams to really digest how your conclusions should influence their decisions, we need more active methods.
I recommend pairing debriefs with action-focused workshops: go through each of the top findings in turn, and ask multi-disciplinary teams to consider “what actions should we take to address this?”. These ideation sessions don’t need to be a hard commitment to taking an action, but they encourage teams to explore and discuss potential solutions – helping them take the first steps. Working with your production colleagues to prioritise those actions and get them into the backlog then turns speculative ideas into a genuine commitment to change.
Research is a team sport
Our job isn’t really ‘running studies’, and researchers understanding players isn’t enough. By prioritising collaboration in our playtests we can inspire and influence our teams, and deliver on making games better.