Scope your studies to increase research impact

How to scope games user research studies for impact - uncovering the context for decision making, and setting sensible objectives.


It’s a common problem for games user research teams. You run an excellent study, with a pragmatic yet appropriate method selection, learn interesting things, create some beautiful slides and then… 

…nothing.

If you’re lucky, you get a debrief where everyone claps, but ultimately your development team carries on as they were – with your study quickly being forgotten about, and not impacting the roadmap at all.

Left unaddressed, this will cause long-term issues. If you’re external to the team (for example an agency or publisher), it will reduce the perceived return on investment of research, and teams may not come back for more.

If you’re internal, this can make your role seem less important, and put you at the top of the list when the next round of layoffs occurs.

And for everyone, low impact is demotivating, and makes you feel unappreciated. 

This is a big topic, but in this article we’ll look specifically at scoping studies for impact – why the roots of this problem lie in setting the objectives for your study, and how we can address this problem to run research that matters.

The only thing worse than being talked about…

Our objective as games user researchers is to de-risk game development, using our toolkit of research methods to anticipate player experience issues early, while teams still have opportunities to make changes.  

The biggest risk for our own team though is a lack of impact – especially when studios are making cuts. How can we justify our roles if no-one is listening?

When speaking to researchers about projects that have struggled, some common themes emerge:

  • running studies too late to be able to react
  • learning things that aren’t in line with current development priorities
  • communicating ineffectively, in ways that don’t align with how our teams work

The causes of these can be complex; however, the root of the problem lies in how studies are scoped.

It’s hard to get heard in game development

Many of these challenges aren’t unique to the games industry, and you’ll hear similar complaints from user researchers in other sectors. 

However, game development creates a particularly challenging environment. Games are by nature complex to develop: they are built by a huge number of different disciplines, all working to an unclear specification, often with very hard deadlines to hit.

Very soon everything ends up on fire, rapid changes of scope are needed, and crunch emerges…

As the stress kicks in, and teams’ attention moves away from “are we creating the best player experience we can?” to “can we finish the game in time?”, it becomes increasingly hard for researchers to be heard. Left unexamined, this can create an environment for low-impact research.

User researchers need to diagnose study requests

The first trap to watch out for is when teams come to us with a request for a study. It’s reasonably common to hear “we need to run a survey, please run a survey for us”.

Or game teams treating research as a gate in their development process – “we’ve finished our alpha milestone, please test it”.

These can be a recipe for low-impact studies, as leading with the method or milestone stretches the link between “research” and “decision making”. Studies end up far too large, teams learn too much, and it all becomes noise. Or it can lead to studies that learn irrelevant things – stuff that isn’t informed by the decisions teams have to make – creating the impression that games user research has a poor return on investment.

Just as your doctor wouldn’t perform an amputation just because you asked, researchers need to understand the context behind a study request, diagnose what led the team to that request, and make their own informed decision about what action is appropriate.

Understand what informed a study request

Unpacking how teams came to the conclusion that they wanted a study is essential to reframing the study for impact.

A study request should be informed by:

  • What decisions need to be made
  • Who is making the decision
  • What they believe, and what they are influenced by
  • When the decision needs to be made by

To learn what decisions need to be made, ask about context, including kick-off questions such as “why are you running this study?” and “what decisions will you make as a result of this study?”. These contextual questions, alongside asking “what do you want to learn from this study?”, are an essential part of your kick-off meeting.

Understanding who is making the decisions requires stakeholder mapping. Ask who will be using the information from your study, then spend time 1:1 understanding who they are and what they are hoping to achieve with this study. Informal coffee chats are a great way to get to know stakeholders as people, and to start to understand their objectives and priorities.

Understanding how to influence people requires recognising that teams aren’t perfectly rational decision-making machines. As Erika Hall describes in her article on setting objectives: “Organizations are the social context in which design and product decision-making happens. If you don’t understand how people make decisions in your organization, you will never be able to influence them”. I’m sure you’ve encountered stakeholders who question the validity of qualitative data, or don’t believe anything that doesn’t have a number attached to it. Identifying and mitigating this before you run your study is essential to impact.

Last of all, understanding when decisions need to be made ensures that your study design remains practical – you will get results before it’s too late, greatly increasing the chance someone will act on your conclusions.

Teams often can’t articulate these points when asked

Many of these points make good questions for a kick-off. However this isn’t the only technique we can use to understand the context in which our studies are running, and scope our studies appropriately.

If you’re in-house, you can also be:

  • Meeting colleagues for regular informal check-ins (“coffee chats”)
  • Sitting in meetings unrelated to your discipline, and regularly joining stand-ups
  • Building trust with small bits of value over time (once you understand what people care about, you can share an article or small insight from a study that you know they’ll care about)
  • Running end-of-day wrap-ups for your studies, to get team members to articulate what they have seen and are thinking halfway through a study.

This informal networking can be harder if you are external to the game team, but some other techniques to consider include:

  • Reading everything! (game design documents, wikis, anything else the game team have ever written down)
  • 1:1 meeting with each stakeholder, before the study
  • Following up after previous studies to ask ‘what happened as a result of what we learned’
  • And as always – using the questions above to inspire your kick-off meetings

Ultimately, context is incredibly important when teams come to us and ask for a study – and getting that context allows us to design a more impactful and relevant playtest.

But what if teams aren’t coming to you with study requests at all?

Inspiring teams to request studies

It’s reasonably common that busy game teams don’t put user research at the top of their priority list. If you’re lucky, they will run a playtest when hitting a milestone. If you’re unlucky, they won’t come to you at all.

That’s a recipe for big, bundled-up tests, which uncover problems that the team is not in a position to address. But teams are always making decisions, and if we can help teams articulate the hypotheses behind those decisions, and recognise the assumptions about player behaviour implicit within them, we can create smaller, more tightly focused studies that find answers at the right time.

We need to understand what decisions the team is trying to make, and use that to inspire study designs.


Articulating design hypotheses

This can be achieved with a workshop. Sitting the leads from each discipline down, and having them consider what decisions they are making currently, what decisions they have made recently, and where their biggest uncertainties lie, can help inspire what to learn from upcoming playtests.

This long list of potential topics can then be prioritised, as covered in this previous article.

Anticipate your team’s priorities

The types of decisions our teams are making are heavily influenced by the game’s development stage, and we should check that study objectives align with development priorities.

Understanding the stage of development is a shortcut to anticipating our colleagues’ priorities, and helps ensure we’re making sensible decisions about what to focus our studies on.

My free early access edition of Playtest Plus gives inspiration for what teams will want to learn throughout development, with appropriate research objectives for each phase. You can get it here:


Ensure your objectives map to the team’s priorities, to avoid your study appearing irrelevant or trivial.

Scoping research studies for impact

As we’ve covered, impact requires understanding the decisions that teams are making, and getting convincing data to them in time to inspire those decisions.

To do this successfully requires not just a deep understanding of research methods, but also understanding your stakeholders, their biases and the social context in which decisions get made.


I help teams with this. If you want to take your playtesting from good to great, get in touch for a free, no-pressure initial discussion.


Meet the author

Steve Bromley is an expert user researcher, who works with studios of all sizes to run playtests, and integrate user research into the game development process.

Learn more

Keep Exploring


How to test games with kids

Practical guidance on how to get high quality data from your playtests with children. 


How much playtesting is enough?

When to start (and stop) playtesting throughout game development.


How to budget for games user research

How much does user research cost, what to budget for user research, and how best to spend the budget you have to de-risk game development.
