There’s a huge variety of research methods available to researchers, and they need to be carefully matched to the study’s objectives. In this section, learn about different games user research methods and when each is suitable.
This is an extract from the Games User Research book
The ultimate resource for aspiring or junior researchers who want to start a career in games. Learn how to run professional-quality playtests, improve the UX of games, and make games players love.
Start running playtests, getting job interviews, and making games better today.
Games user research methods
As described in the previous chapter, researchers need to pick an appropriate method to reliably answer their research objectives. Many of these methods aren’t specific to games, but do have some nuances in the objectives or approach when applied to games that are worth describing briefly. In this chapter, we will look at some games user research methods, consider some typical research objectives that they might be applied for, and discuss some important points to consider when applying these methods to research studies.
When designing a study, it’s usual to combine a variety of methods to get complete results – for example, understanding what players are thinking via interviews while observing their behaviour in the game, or pairing observation with a survey to answer all of the research questions.
When working as a games user researcher, keep in mind that only researchers are particularly interested in the method used to get results, and focusing on research methods can be a distraction when promoting research to new teams. Early in a relationship with a new team, focus on delivering good-quality results rather than the details of how they were obtained. Over time, it can be useful to expose teams to the nuances of research methods, raising their research literacy and helping them identify when they could be running more studies.
Methods for understanding what players are doing
Some of the most common studies are focused on understanding a player’s behaviour in the game, and whether that matches what the designer wants players to experience. This includes usability testing which identifies where players don’t understand what they are meant to do (although this is a failing of the game’s ability to communicate, not the player’s fault).
Observation describes watching players play the game and looking out for parts where the player doesn’t do what the designer expected them to do. This is a very common format for studies and is typically combined with other methods to understand why the player acted the way they did. Understanding not just what players did, but why they did it, gives designers the information they need to fix issues.
Probing questions are usually asked while observing, to get players to reveal their understanding of what they believe is happening. This uncovers gaps between how they think the game works and how it actually works. Part of a researcher’s skillset is knowing how to ask those questions without revealing information to the player – broad questions like ‘what are your goals?’ and ‘how did you discover what you are meant to do?’ are best for probing understanding. These phrasings avoid artificially altering the player’s understanding of how the game works. Similarly, combining observation sessions with interviews can give another opportunity to explore a player’s understanding of how the game works (their mental model of the game), and identify how this differs from how the designer wanted it to be experienced.
About think aloud
Another way researchers may decide to uncover a player’s understanding is by asking them to think aloud. This involves asking players to talk about their thoughts as they play, which can reveal their understanding of the game without needing to be asked questions.
Talking aloud while playing is an unnatural task that introduces cognitive load, which might artificially impact a player’s performance. Talking through what they are doing may lead people to the right answer when they would have stayed stuck playing silently at home. Talking out loud can also distract players, creating issues that wouldn’t occur if they were paying full attention. Because of these problems, care needs to be taken when deciding to use this method. Thinking aloud shouldn’t be dismissed entirely though – sometimes gameplay elicits it naturally, without artificially changing player behaviour. Cooperative games are an example of this: they require players to communicate and explain how things work to one another, revealing the participants’ understanding in a realistic way.
Observing multiple players at once
Using observation as a research method can introduce challenges unique to games. Unlike many other types of software, games might require multiple players to play simultaneously. This can be required when testing multiplayer games, or when trying to test long sections of gameplay with multiple participants simultaneously to save time. A researcher will find it difficult to reliably observe more than one or two participants at a time and so may work with other user researchers (or train up some students) to cover these tests reliably. Study designs with multiple simultaneous players will also rely on surveys and analytics to gather data, as will be covered shortly.
Enhancing observation with biometrics
Biometrics can also support observational studies. This describes using sensors to measure responses from players’ bodies as they play – for example, sensors that detect when participants are alert, or eye trackers that record where participants are looking. These tools can help focus the researcher’s attention on the most interesting parts of the session, and may one day be able to go further and draw meaning from players’ physiological responses. Although this has potential, many research teams don’t regularly apply these methods: they increase analysis time and disrupt the player’s experience without providing enough additional insight to justify the cost, and so are not yet widely applied in games development.
What objectives can be answered with observation?
Some typical objectives which a researcher may decide to use observation to answer include:
- Do players learn the game’s objectives?
- Will players know where to go on level 2?
- Do players understand how to use fast-travel?
- Does the tutorial teach players how to use their jetpack correctly?
There are ethical and legal reasons why it’s important to ensure that no-one is prevented from playing games due to disabilities, and user researchers can run studies that help make this possible. The most effective method of uncovering accessibility issues is running usability tests with people who have disabilities, to identify where the game doesn’t support their needs. However, such testing can be logistically challenging to recruit for and run – many players rely on custom setups that are difficult to recreate in a usability lab. Additionally, because fresh participants are needed to uncover new issues, any participants who live close enough to visit will only be useful for a single study before their prior participation biases their behaviour. For pragmatic reasons researchers often supplement usability testing with other methods of uncovering accessibility issues, including panels of users who have access needs, or reviews against best-practice guidelines.
What objectives can be answered with accessibility testing?
The top research objective that accessibility testing will help answer is:
- Does the game support people who have disabilities which impact how they play games?
Analytics, or telemetry, describes methods that measure player behaviour. This can be achieved manually – with a researcher or the participant marking whenever an event occurs – or automatically, with the game itself tracking events. Measurements are easy to compare, and so can be a simple way of checking what players are doing against the designer’s intentions – are players failing more often than the designer wants, or does one team have an unfair advantage in a multiplayer game?
Lo-fi measurement, where the researcher counts events such as player deaths or task failures, is a reasonably common research method. In studies where multiple participants play simultaneously, it’s not possible to closely observe behaviour and probe appropriately to understand it. As a consequence, these studies often rely on counting events, combined with surveys to determine whether players are experiencing the game as intended.
Recording in-game events automatically using analytics is a specialist role, and becomes increasingly important later in development and after the launch of a game. As a user researcher, it’s unlikely that you will be required to implement and measure these events yourself, however working closely with a data scientist can be a very beneficial relationship. Identifying which research objectives are appropriate for a data scientist to address and working with them to ensure that the measured results can be explained with insight from research can be a compelling argument for evidence-based decision making in games development.
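To make the idea concrete, here is a minimal sketch of aggregating automatically tracked events. Everything in it is hypothetical – the event names, the log format, and the designer’s target – and a real game would log far richer data, but the principle of comparing measured behaviour against the designer’s intent is the same:

```python
from collections import Counter

# Hypothetical telemetry log: (player_id, event_name, level) tuples.
events = [
    ("p1", "death", "level_2"), ("p1", "death", "level_2"),
    ("p1", "death", "level_2"), ("p2", "death", "level_2"),
    ("p3", "death", "level_2"), ("p2", "death", "level_3"),
]

# Hypothetical design intent: players should average no more
# than 1.5 deaths each on any level.
TARGET_DEATHS_PER_PLAYER = 1.5

def flag_difficulty_spikes(events, target):
    """Return levels where average deaths per player exceed the target."""
    deaths = Counter()
    players = {}
    for player, name, level in events:
        if name == "death":
            deaths[level] += 1
            players.setdefault(level, set()).add(player)
    return {
        level: deaths[level] / len(players[level])
        for level in deaths
        if deaths[level] / len(players[level]) > target
    }

# Flags 'level_2' (5 deaths / 3 players ≈ 1.67 > 1.5);
# 'level_3' averages 1.0 and passes.
print(flag_difficulty_spikes(events, TARGET_DEATHS_PER_PLAYER))
```

A count like this tells the team *where* players are struggling, but not *why* – which is exactly why such measurements are usually paired with surveys or interviews, as described above.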
What objectives can be answered with analytics?
Some of the objectives that analytics can help address are based around balance and difficulty:
- Can players complete the puzzle in the expected time?
- Is the difficulty of the game correct?
- Is the multiplayer fair and balanced for both teams?
- Are any of the weapons overpowered?
Games user research methods for understanding what players think
A very common research objective that teams suggest is ‘do people like my game?’. Although fun isn’t the only ingredient for success, a fun game is more likely to succeed. As a research question, though, this can be very difficult to answer and interpret reliably.
As a first step, identifying usability issues using the behavioural methods described above can be most impactful. Usability issues will increase friction and frustrate players, reducing their enjoyment.
These games user research methods do not predict review scores
There is a tendency to attempt to use scores from players as a predictor of a game’s Metacritic rating (Metacritic is a website which aggregates scores from review sites into one overall score). Metacritic scores are important to many studios – some base financial bonuses on achieving a high Metacritic score. Although asking players to rate their experience playing the game can give information that can be benchmarked against data gathered with similar methods, it’s very different from the ratings professional critics would give. This is not only because a critic is different from an average player, but also because a playtest participant is rating the whole experience of being paid to visit a studio and play an unreleased game, not just the game itself.
Beyond that, a combination of qualitative and quantitative methods can be used to uncover what people think about a game, why they think that, and how representative those opinions are. There are methodological challenges with interpreting what players think and turning it into useful information to inform game design. When caution is used to avoid teams drawing unsafe conclusions from these studies, this can be another valuable service for user researchers to offer.
Interviews involve asking players questions, either during or after gameplay, to understand their thoughts. As covered above, observing or measuring player behaviour will reveal a lot of information about what players are doing. However, this isn’t necessarily enough by itself to make decisions about how to change player behaviour. A designer would need to know why players did what they did, in order to make the right change to the game that will change player behaviour.
To achieve this, it’s common to ask questions during user research studies to reveal what players think and how they understand the game to work. There may be a gap between what they understand, and what the designer wants them to understand, and so designers will try to reduce this gap by making changes to the game.
As described previously, asking questions can change a player’s understanding, potentially revealing information they didn’t previously have or causing them to think about the game in a manner they wouldn’t have normally. Researchers learn to time and phrase their questions appropriately to cause the minimum disruption to the player’s natural experience.
Watch out! Asking questions changes player behaviour.
It’s also important to remember the difference between a player’s statements and the truth. This is not just about people intentionally lying: players are also subject to a number of unconscious biases, which means their own account of why they did something might not reflect what really occurred. This is most risky when asking players to predict future behaviour – answers to questions such as ‘will you buy this game?’ do not correlate well with actually buying the game, and so can be extremely misleading and unhelpful. Care should be taken when interpreting interview data, and a researcher will create the most value by analysing players’ statements and focusing on the insight they give into thought processes.
What objectives can be answered with interviews?
Some research objectives interviews might help answer include:
- How do players think the special ability works?
- Why did players not go down the correct path?
- Why do players say they enjoy, or don’t enjoy, the game?
A common research method is to use surveys to measure players’ sentiment about a game or individual levels. This often takes the form of scales and ratings – asking players to rate a level out of ten, or to rate the game against statements such as ‘I understood what to do’ or ‘I understood where to go.’
By themselves, the figures from surveys aren’t very helpful – it’s meaningless to report that a game is ‘seven out of ten’, because this doesn’t give a designer enough information to know how to improve the rating. Survey data is most useful when compared to other results – e.g. comparing ratings between levels to identify dips, or comparing against scores for other games. It should also be combined with qualitative methods to explain why levels received those scores and what changes might improve them.
Getting reliable results from surveys
Getting sensible results from a survey will usually require a lot of participants. This can be challenging for secretive or unannounced games, since many studios prefer to run those studies in their own building, where access to recording devices and the internet can be monitored. This is one reason that multi-seat playtests are common in games: they allow surveys to be run on-site with enough participants to answer the research objectives.
As a researcher, it’s important to understand enough about statistics to identify which conclusions can safely be drawn from the data received. One of the most common statistical procedures a user researcher performs is calculating confidence intervals for different results, to see whether there is a significant difference between two ratings – such as the ratings for each level. Many small-sample studies will not show statistically significant differences between ratings, and being able to test this will help build trust in research conclusions.
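The interval comparison above can be sketched in a few lines. This is an illustration only: the ratings are invented, and it uses a simple normal-approximation interval with z = 1.96, where a t-based interval would be wider (and more appropriate) for the small samples typical of playtests:

```python
import math
import statistics

def mean_ci(ratings, z=1.96):
    """Approximate 95% confidence interval for a mean rating
    (normal approximation; use a t-interval for small samples)."""
    mean = statistics.mean(ratings)
    sem = statistics.stdev(ratings) / math.sqrt(len(ratings))
    return (mean - z * sem, mean + z * sem)

# Hypothetical 1-10 enjoyment ratings from two levels of a playtest.
level_1 = [7, 8, 6, 9, 7, 8, 7, 6, 8, 7]
level_2 = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]

ci_1 = mean_ci(level_1)
ci_2 = mean_ci(level_2)

# If the intervals don't overlap, the difference in ratings is
# unlikely to be noise; overlapping intervals mean it may well be.
overlap = ci_1[0] <= ci_2[1] and ci_2[0] <= ci_1[1]
print(ci_1, ci_2, "overlap:", overlap)
```

With these invented ratings the intervals don’t overlap, so the dip in level 2’s ratings would be worth investigating with qualitative follow-up; with only a handful of participants per level, the same gap would often fall inside overlapping intervals and be unsafe to act on.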
What objectives can be answered with surveys?
The types of research objectives that might be answered with surveys are largely focused on players’ perceptions. They include:
- Do players think this level is too hard?
- What are players’ favourite weapons?
- How does the jump feel?
- Do players like the chase sequence?
Games user research methods without users
Not all user research methods require users. Over time, user researchers develop a degree of expertise in anticipating the impact of design decisions on players. This expertise can be used to evaluate games without watching players.
There is a risk in relying on methods that don’t use users. When sharing findings from studies with players, there is an objective truth in the findings – players really didn’t see where to go, or really didn’t understand how to use the jetpack. This is (hopefully) indisputable – although defensive teams might question the quality of the participants or the study design. Because of this objective truth, when issues are identified the development team should take them seriously.
When relying on methods that don’t test real users, it is easier to challenge the findings: ‘players might find it hard to use the jetpack’ is less convincing than ‘players did find it hard to use the jetpack’. Getting teams to take action requires the team to trust their researcher, and the researcher not to betray that trust. Being neutral and objective, and avoiding pushing their own ideas or agenda, is even more essential for researchers running these kinds of studies.
Expert reviews and Heuristics
One method that researchers employ is to provide feedback on issues they anticipate players will encounter, based on their own experience playing the game. Because researchers have watched lots of people play games, they are more likely to correctly anticipate issues than other members of the team, who may have unrealistic expectations of real players’ abilities or thoughts.
Expert reviews are cheaper and quicker to run than other research methods because they don’t require the time or expense of recruiting participants. An expert review can be particularly useful before running other research studies: identifying and resolving issues that would probably occur during a study with users allows newer, more interesting issues to emerge during the study itself. This gets easier as experience with running studies grows.
Despite a common title for this method being expert review, calling it that won’t endear you to your team, who are experts themselves. Usability review is a less controversial title that still describes reviewing the game to identify usability issues.
What about heuristics?
Related to expert reviews is the idea of heuristics and heuristic reviews. Heuristics describe principles for good design, the most famous being Jakob Nielsen’s 10 Usability Heuristics for User Interface Design, although game-specific sets also exist. Some expert reviews will evaluate against these heuristics, referring to specific heuristics when explaining why issues exist.
Personally, this is not the approach I advocate. I believe that referring to heuristics can limit the range of issues that a researcher is looking out for. Also, many game-specific heuristics describe design decisions rather than identifying gaps between the intended experience and the experience players actually receive, and I don’t believe making design decisions should be part of what a user researcher does. However, it is not uncommon for this to be done, and researchers should at least be aware of relevant heuristics while they are playing through the game and identifying potential issues.
What objectives can an expert review answer?
Some of the more common objectives that a researcher might use an expert review to identify include:
- What usability issues do we anticipate players will encounter on this level?
- Do we believe that players will learn how to use the jetpack from this tutorial?
- Is it clear where to go on this level?
When a game is complete, new sources of information become available to help understand how players experienced it. Unlike many other mediums, games attract an enormous amount of attention from critics, fans and commentators, which can give insight into the quality of the experience.
Post-mortems (or ‘review analysis’) involve analysing feedback from these sources and identifying themes in the issues described. These issues can then be analysed to understand which decisions caused them to occur, ultimately improving the game development process. This can be valuable feedback to help consciously change how a game team works and promote evidence-based decision making.
Some issues with post-mortems
There are some risks to be aware of when interpreting post-launch feedback for a game. Most importantly, the people who give this feedback are not typical players: critics are professional players trying to describe the game to a less professional audience, commenters are likely to be more extreme in their viewpoints than people who don’t comment, and streamers are entertainers trying to find amusing topics to build their videos around. This context can make their experience significantly different from that of someone playing at home, and care should be taken when applying insights learned from these audiences to more typical groups of players.
Waiting until reviews are published means it is often too late to make significant changes to that game. Despite this, running post-mortems on released games can give information that will help improve decision making in the future. This can also be done with competitors’ games, where it is referred to as competitor analysis.
What objectives can be answered with a post-mortem?
Some objectives that might be addressed with this method include:
- Did people like the combat?
- Did people understand the story?
- What are some successful ways that other games have implemented crafting?