Analysing a games user research study

After running a usability or UX playtest, the raw data needs to be analysed to reveal the conclusions and inform game design decisions. Learn how to analyse a games user research study to turn raw data into meaningful findings.

This is an extract from the Games User Research book

Analysis is the process of moving from raw data to creating well-described findings that are useful to the game team. Well-described findings give developers enough information to take the correct action and improve how the game is experienced by players. 

Game development moves very fast, and delays can reduce the impact of a research study. We want game teams to observe sessions because it increases their understanding of what occurred and their faith in the findings from the study. However, it also creates the risk that teams will start to fix what they observe immediately after the study finishes, without waiting for analysis. We believe that analysis is valuable to make sure the right action is taken from research studies, so it would be a failure to allow teams to act without it. To reduce the risk of this occurring, analysis needs to be fast and accurate. I’ve previously written about how mind maps can speed up analysis in the book Building User Research Teams.

In many studios, efficient analysis on reasonably sized studies can be done in as little as 24 hours. However, it is worth taking more time to ensure that the findings from analysis are accurate and well explained. Poor analysis will break the game team’s trust and set back the cause of integrating user research into the development process.

Identify if the player’s experience worked as intended

The process for analysis differs based on the study design. It typically involves reviewing the raw data captured in the study – such as observation notes, interview transcripts and survey responses. From this, the researcher will identify indicators that the player’s experience differed from the experience the designer wanted them to have, and then explore the data to explain why the experience was different.

Identifying where the player’s experience went wrong requires an understanding of the intended design. This is why researchers need to take time to understand the game and the designer’s intent prior to running a study.

Just as important is understanding and communicating where the game is working as intended. This will reduce the risk of developers making changes that break the parts of the game which players experienced as intended.

First, remove irrelevant data and group the findings

Analysis starts by reviewing all of the notes, separating out those that indicate an issue and removing observations which don’t provide relevant or useful data. The remaining notes can then be tagged or grouped into logical sections based on the study objectives – for example, putting all of the findings about level 1 together, or all of the findings about combat together.
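
As a minimal sketch of how this filtering and tagging step might look in practice, the snippet below drops irrelevant notes and groups the rest by tag. The note text and tags are invented for illustration; a real study would use its own observation notes and objectives.

```python
from collections import defaultdict

# Illustrative observation notes; tag is None when a note doesn't
# provide relevant or useful data for the study objectives.
notes = [
    {"text": "P1 walked past the jetpack prompt in the tutorial", "tag": "tutorial"},
    {"text": "P2 asked the moderator how to fire the jetpack", "tag": "controls"},
    {"text": "P3 said the menu music was nice", "tag": None},
    {"text": "P4 missed the jetpack indicator in level 3", "tag": "level 3"},
]

# Remove observations which don't indicate an issue...
relevant = [note for note in notes if note["tag"] is not None]

# ...then group the rest into logical sections based on the objectives.
groups = defaultdict(list)
for note in relevant:
    groups[note["tag"]].append(note["text"])

for tag, texts in groups.items():
    print(f"{tag}: {len(texts)} note(s)")
```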

Next, identify unique issues within groups

Each grouping can then be interrogated to identify the number of issues within that group. Some observations might describe the same issue – either encountered by different players, or encountered in separate places within the game. Other observations, although on the same subject, will be separate issues. A good way of deciding whether some observations are one or more issues is to consider the game team receiving them – will they need to come up with separate solutions, or will one fix resolve the issue everywhere?

As an example, consider these three observations from a pretend game:

  1. Players didn’t recognise where to use the jetpack in the tutorial because they didn’t see the jetpack visual cue.
  2. Players forgot the controls for the jetpack and were unable to obtain a reminder without moderator assistance.
  3. Some players didn’t know they could use the jetpack in level 3, due to not seeing the visual indicator.

Observations 1 and 3 might be the same issue, since the missed prompt was the cause of each, despite occurring in different places in the game. In contrast, although observation 2 is also about the jetpack, the problem is about remembering the controls, and so it should become its own, separate issue.

All of the raw data should be worked through until all of the issues are extracted.
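
The sketch below shows one way to represent this merging step, using the jetpack observations above. The “fix key” heuristic – observations that a single fix would resolve share a key and collapse into one issue – is an assumption for illustration, not a fixed rule.

```python
from collections import defaultdict

# (location, observation, key identifying what one fix would resolve)
observations = [
    ("tutorial", "Didn't see the jetpack visual cue", "missed jetpack cue"),
    ("controls", "Forgot jetpack controls, no reminder available", "no controls reminder"),
    ("level 3", "Didn't see the jetpack visual indicator", "missed jetpack cue"),
]

# Observations sharing a fix key merge into one issue.
issues = defaultdict(list)
for location, text, fix_key in observations:
    issues[fix_key].append((location, text))

# Observations 1 and 3 collapse into one issue; observation 2 stays separate.
for fix_key, evidence in issues.items():
    print(f"{fix_key}: {len(evidence)} observation(s)")
```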

Describing user research findings

Having identified where the player experience differs from the intended experience, further probing is required to understand and communicate why each issue occurred. Putting the work into uncovering and explaining why issues occur ensures that colleagues are able to take the right action to fix them. This is one area where a user research study brings more value than informal playtests.

Explain why the issue occurred

An example of a poorly described issue is ‘It was unclear how to complete the puzzle near the cave.’ This fails to explain why the puzzle was unclear, or what action can be taken to bring it closer to the designed experience. Since why and how the issue occurred isn’t explained, the game team are unable to take the right action to resolve it.

To improve the quality of the finding, be sure to explain the cause of the issue. The cause describes what aspect of the game made the problem occur, and there can be multiple causes for a single issue. These may include attributes of the objects in the level (‘the puzzle was the same colour as the background’) or aspects of the player’s experience up to this point (‘previous sections had optional puzzles which didn’t need to be completed for the player to proceed’).

Describing the cause of an issue

Causes should only describe things about the game, not the player. If a cause is being written about a player (‘players didn’t understand…’), this is a sign that the issue isn’t well understood and more investigation is needed to uncover what about the game caused players to not understand. 

In this case, a better way of describing the puzzle issue would be:

It was unclear how to complete the puzzle near the cave.

  • Cause: The puzzle was grey on a concrete background and didn’t stand out from the wall

In this example, the cause clearly describes something about the puzzle which was wrong, and will direct the designer towards some potential ways to fix it. Well-described causes should always be transparent about why the issue occurred and describe real things. Poor causes rely on domain knowledge or are written to make the researcher look smart. ‘Cause: The puzzle’s appearance fails Nielsen’s first heuristic and gestalt principles’ impresses no-one and reduces the chance that the development team will take the right action to resolve the issue.

Careful moderation is needed during the sessions to ensure that the correct cause is established, and other potential reasons are identified and eliminated. As covered previously, the moderator needs to expose and test their hypotheses about why issues occur during the session rather than relying on their assumptions. Many causes are invisible without understanding what players were thinking at the time.

Explain what happened next

Describing what happened as a result of the issue is useful for deciding how important it is. There is a difference in importance between issues which cause players mild inconvenience and ones that are impassable and require players to use a walkthrough to progress. Furthermore, in many games, failure is an intended part of the experience, so describing the extent of the failure is needed to decide whether the issue exists at all. To capture this, describe the impact the issue has on players. Unlike the causes, this should describe the player’s experience, and how it differs from the intended experience.

Describing the impact of the issue

A version of the example issue which includes the impact could be:

It was unclear how to complete the puzzle near the cave

  • Cause: The puzzle was grey on a concrete background, and didn’t stand out from the wall 
  • Impact: Players didn’t spot the puzzle as the correct way to proceed and carried on exploring the cave
  • Impact: Players didn’t learn how to enter the cave and backtracked through the level for up to thirty minutes. Some needed moderator assistance to proceed.

In this example, we’ve also specified how long it took players to overcome the issue, to help game teams decide whether this is the experience they want players to have. Proper preparation for the study will allow the researcher to identify and explain how each issue differs from the intended experience. This will help prioritise the issues in the most meaningful way for the team.

Using a standardised format, such as the cause and impact example above, helps researchers verify that they have included all of the relevant information the game team will need, and speeds up how quickly issues can be interpreted.
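
One possible way to encode such a format is a simple record that forces every issue to carry causes and impacts, as in the sketch below. The field names are an assumption for illustration, not a template from this article.

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    # What differed from the intended experience
    summary: str
    # Causes describe things about the game, never the player
    causes: list[str] = field(default_factory=list)
    # Impacts describe the player's experience as a result
    impacts: list[str] = field(default_factory=list)

puzzle_issue = Issue(
    summary="It was unclear how to complete the puzzle near the cave",
    causes=[
        "The puzzle was grey on a concrete background and didn't stand out from the wall",
    ],
    impacts=[
        "Players didn't spot the puzzle as the correct way to proceed and carried on exploring the cave",
        "Players backtracked through the level for up to thirty minutes; some needed moderator assistance",
    ],
)
```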

Avoid prematurely giving solutions

When describing issues, researchers need to be careful to focus on the problem and avoid describing potential solutions. The reasons for this are covered in more depth in the debrief section, but in brief, a researcher’s solutions can appear naive and fail to take into account what the game team has tried before, or what is feasible in the production schedule. This will impact the trust a team has in a researcher’s competence. One area to be particularly vigilant about is solutions sneaking in as causes: ‘This issue was caused by the lack of a tutorial’ implies that the solution is a tutorial, and will limit the potential solutions a team might come up with.

To ensure that issues are fully described, with clear causes and impacts, it’s often helpful to peer review them with other researchers who also understand the game before sharing them further. This will help protect the credibility and trust you have established with a game team.

Prioritising user research issues

For studies which identify usability issues, it’s likely a lot of issues will be discovered. Resolving issues creates work for the game team, and this time commitment needs to be balanced with other development work, such as adding features or fixing bugs. Games development is often a very time-pressured environment, and so not every issue can be addressed.

Because of this, games user researchers need to rate the severity of the issues they report so that the game team can quickly decide the appropriate priority for them amongst other work.

There can be a tendency among novice researchers to exaggerate the impact of usability findings, rating them all as critical issues to make their work look important. Conversely, some researchers can be tempted to understate the findings, and rate them all low, thinking they are being nice to their team. Both of these behaviours make prioritisation difficult for producers, and damage trust with a team.

To avoid this, apply a standardised method of rating the severity of issues; this reduces the chance for researchers to subjectively influence the rating, and helps game teams decide how to fit the work required to resolve issues amongst the other tasks they need to do.

A method for deciding the priority

There are different ways of achieving this, and many studios have their own preferred in-house method. A method I like is a four-point scale for issues:

  • Critical
  • High
  • Medium
  • Low

To decide the appropriate severity, I use a version of the method described in the Userfocus article ‘How to prioritise usability issues’. 

Each issue starts as ‘Low’. Then ask three questions about the issue; each time the answer is yes, raise the severity one level. A short code sketch of this method follows the questions below.

In the context of games, these questions are:

  • Is this something that the player needs to do to progress?
  • Did the moderator need to step in to help resolve the issue?
  • Did the issue recur, even after the player had overcome it once?
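
A minimal sketch of this scoring logic, assuming the three questions are captured as booleans (the parameter names paraphrase the questions above and are an assumption of this sketch):

```python
# Each issue starts at Low; every "yes" answer raises the severity
# one level, following the method described above.
SEVERITY = ["Low", "Medium", "High", "Critical"]

def rate_severity(blocks_progress: bool,
                  needed_moderator_help: bool,
                  recurred_after_overcoming: bool) -> str:
    yes_answers = sum([blocks_progress, needed_moderator_help, recurred_after_overcoming])
    return SEVERITY[yes_answers]

# The cave puzzle example: players must solve it to proceed and some
# needed moderator help, but it didn't recur once overcome.
print(rate_severity(True, True, False))  # -> "High"
```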

Some studios use ‘how often did we see it in testing’ as an alternative to the recurrence question. Neither question is perfect. If a player only encounters the potential for an issue to occur once, it’s impossible to judge whether they would overcome it in the future; however, ‘how often did we see it’ can be an unreliable thing to count in small qualitative studies. Considering which question is more useful for the method being applied, and being consistent in applying it, is probably the best approach for rating the severity of issues.
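
If a studio prefers the frequency question, the third input could be swapped for a proportion of participants, as in this variant of the earlier sketch (the 50% threshold is purely an assumption for illustration):

```python
SEVERITY = ["Low", "Medium", "High", "Critical"]  # same scale as above

def rate_severity_by_frequency(blocks_progress: bool,
                               needed_moderator_help: bool,
                               affected: int, participants: int) -> str:
    # Treat the frequency question as "yes" when at least half of
    # participants hit the issue; the threshold is an assumption.
    frequent = participants > 0 and affected / participants >= 0.5
    return SEVERITY[sum([blocks_progress, needed_moderator_help, frequent])]

print(rate_severity_by_frequency(True, False, 4, 6))  # -> "High"
```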

These questions only work for usability issues and aren’t suitable for rating other kinds of finding, such as descriptions of player behaviour; an alternative scale will be needed for those. However, deciding on severity criteria and applying them consistently is important for every kind of research finding, not just usability issues.

The value of analysis

Once each issue has been understood, described and prioritised, analysis is complete, and it’s time to share the findings with the team. It’s common for colleagues to ask for early results before analysis is complete – for example, at the end of a day of research. Although this can be a nice opportunity for building positive relationships with colleagues, researchers should be careful: before analysis, it’s not always clear what the true cause or severity of an issue is, and sharing information too soon can encourage teams to start working on the wrong thing. Taking the time to fully analyse the results will demonstrate the value that a researcher brings to running user research studies, and will lead to better quality games.
