Newsletter Research Skills

How to keep games secret in playtests

Techniques for avoiding leaks, and keeping games secret when running playtests and UX research.

We don’t want leaks from our playtests

When speaking to teams, I often see that they are worried about leaks. That fear can put them off the idea of playtesting.

Small teams and solo developers worry that playtesting their game will give away their ideas, and that someone might steal them. Larger teams worry that leaks will ruin their marketing plans.

Many games spend as much on marketing as they do on production (because marketing is such an important part of making a successful game). This means that leaks can be seriously disruptive to established marketing plans, and there is a lot of wariness around playtests. Compared to other industries games are particularly at risk of leaks, because players are very hungry for information, and there are very active player communities to share leaked information. Even the smallest game teams need to be conscious of what assets they are putting online, and who might see them.

This means that game teams and senior management can be very hesitant about running playtests.

This fear is well founded. Across the thousands of hours of playtests I’ve run, I’ve had leaks occur twice, and neither was good. Even though they didn’t severely disrupt the games’ marketing or announcements, they were embarrassing and had the potential to damage my relationship with the teams I worked with. A bad enough leak could put a team off playtesting altogether.

However, the risk of leaks can’t stop us from running playtests. The value of playtesting is too high, and it needs to happen as early as possible in development for us to have the right impact and fix problems at the right time. So we need tactics to minimise leaks.

In this post, we’ll look at techniques available to researchers, or game teams of all sizes, to minimise the risk of leaks from playtests. 

How to keep your games secret


Start with the basics: Get participants to sign an NDA

A non-disclosure agreement (NDA) is a legal agreement to keep something secret. It’s typical to sign one when working on games projects (which makes portfolios difficult!), and equally typical to ask playtesters to sign one at the start of a playtest.

Templates for NDAs exist across the internet. Here’s a UK-focused NDA template from UKIE (thanks to Seb Long for the link). Most non-hobbyist development teams already have a friendly lawyer who can put together a bespoke NDA for you. I’ve also included links to several alternative NDAs in the Playtest Kit.

The risk with relying on NDAs is that they are very difficult to enforce. If a participant breaks the NDA and puts screenshots or secrets on the internet, it costs money to get a lawyer to act, and doing so might draw more attention to the leak. Plus, at that point it’s often too late – the leaked information is on the internet and spreading immediately.

Because enforcing them can be counter-productive, the point of an NDA is often more the threat of taking action than a real intention to enforce it.

Best practice is to ask your participant to sign the non-disclosure agreement at the beginning of the session, and then let them take a copy with them. At the end of the playtest, remind them again that they signed an NDA. Making the process seem formal and official increases the likelihood that players take it seriously.

Choose in-person research methods for the most secret games

Different research methods also carry different levels of leak risk. At PlayStation, the vast majority of our research was run in our research lab, so that we could keep an eye on players and prevent them from taking screenshots or information away from the session.

Taking participants’ phones away (and asking them to put them in a locker) will also help prevent photos from being taken.

This also allows you to take time to explain and emphasise the NDA – running through it with the players to make sure they understand and making explicit that it is a legal document. This will increase adherence.

Make remote research as safe as possible

Lab-based, moderated research is the ‘safest’ method to prevent leaks, but even when running remote research, there are some steps that can help. Choosing to moderate the study, where the participant is talking with you live, will create fewer opportunities to capture leakable information.

Bob Tilford, now of Rockstar, gave a great talk about how building rapport can help minimise leaks, which can be viewed here:

For unmoderated research, there is an increased risk because players are left alone with the game and can screenshot it. It also prevents you from building rapport with participants, increasing the risk further.

For many games, it may be sensible to stream the game to players, rather than sending them a build directly. Although not feasible for games which require extreme responsiveness, for slower games this means you don’t have to actually let players have un-monitored access to the game, and can be in control of when and where they can access the game.

Parsec is one such tool, letting remote players stream builds hosted on your own device, similar to how Google Stadia works. There is a video of how Parsec works on their website, as well as a case study of Ubisoft using it to distribute builds for a marketing event.

Watermark the build

Watermarking is putting an overlay on the screen, with some identifying information, like in this picture:

A game screenshot. In the corner is a watermark showing a date, the name of the computer, and an IP address.

This can be invisible to the player (see this thread on how it is done inside World of Warcraft), but for our purposes we want players to know that leaked information can be traced back to them. Sensible information to include is the current time and date, plus any identifiable information that links the screenshot to a specific device.

Showing that screenshots can be linked back to an individual increases the perceived risk of leaking information, and can help discourage leaks. This will however require some development time to implement, so isn’t practical for all teams.
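The overlay has to be rendered by your engine, but composing the identifying string itself is simple. Here’s a minimal, hypothetical stdlib-only Python sketch – the function name and format are illustrative, not from any particular engine:

```python
import datetime
import platform
import uuid

def watermark_text() -> str:
    """Compose an overlay string tying any screenshot back to a specific
    time and device. Rendering it on screen is engine-specific."""
    timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    device_name = platform.node()                  # the machine's network name
    hardware_id = format(uuid.getnode(), "012x")   # MAC-derived identifier
    return f"{timestamp} | {device_name} | {hardware_id}"

print(watermark_text())
```

In practice you would render this string semi-transparently in a screen corner, and log it alongside each participant’s session so a leaked screenshot can be matched to a device.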

Remove important assets from the playtest

When working with established IP, it can help to obscure the IP in the test build. This could look like changing the title screen or renaming the game in the build so that it doesn’t mention or show pictures from the brand. This is perhaps most relevant to simple mobile games, as replacing entire character models is unlikely to be possible in more complex games. 

Removing assets can be achieved either through creating a custom build, or avoiding including the final assets in any build until post-announcement. Replacing assets also requires development time, but can protect a game from leaks. 

And most importantly – screen your participants. 

Both times I’ve had issues from playtests, they could have been prevented by better screening. In one session, the participant was a journalist from a minor publication who believed they had a scoop. In the other, two participants knew each other, which gave them the collective courage to leak information to the games press, and let them corroborate each other’s story.

Remember to check when recruiting, and verify during a pre-interview:

  • That they are not a journalist
  • That they are not a streamer/content creator
  • That they have no personal connection to other participants

There are very few actively malicious participants (I’ve seen thousands of players, and only had issues a few times) – but asking these questions, and removing people who fail them or can’t give convincing answers, will help identify the most at-risk sessions.
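If you recruit at scale, these checks can be automated at the screener stage before the pre-interview. A small sketch, assuming a simple yes/no questionnaire – the question wording and field names here are hypothetical:

```python
# Hypothetical screener fields; adapt the wording to your own recruitment form.
RISK_QUESTIONS = {
    "journalist": "Do you work in games media or journalism?",
    "content_creator": "Do you stream or create video game content?",
    "knows_participants": "Do you know anyone else taking part in this study?",
}

def passes_screening(answers: dict[str, bool]) -> bool:
    """A participant passes only if every risk question is answered 'no'
    (False). Missing answers count as a failure, to stay on the safe side."""
    return all(answers.get(question) is False for question in RISK_QUESTIONS)

print(passes_screening({"journalist": False,
                        "content_creator": False,
                        "knows_participants": False}))  # → True
print(passes_screening({"journalist": True}))           # → False
```

Automated answers can be gamed, so treat a pass here as provisional and still verify the answers in a live pre-interview.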

Building trust through consistent results.

Worries about leaks push teams to delay running external playtests. Instead, many teams just run playtests with their colleagues as participants. But our internal colleagues are so different from real players that a lot of the value of playtesting gets lost.

These techniques combined can lead to a robust playtest setup, with minimal risk of leaks. But getting permission to run playtests, or the confidence to run your own is a much more human problem. 

Building trust with management can take time. Looking at teams who have successfully integrated playtesting into their development process, it can often take multiple games to achieve. Showing value with late evaluative playtests builds trust, and on the next game playtesting can start earlier and have more impact.

A great example of how a relationship can change over time is the work done at Bungie, moving from a few studies in the year before release for Halo 2, to regular playtests across the four years prior to launch for Destiny.

A graph showing how many tests were run on Halo 2, 3 and Destiny.
On Halo 2, testing is mostly from a year before launch. Halo 3 - two years. Destiny has frequent tests throughout the four years before launch.

(The source is John Hopson’s 2015 GDC talk)

Building trust is a marathon, not a sprint – but being able to avoid embarrassing playtest leaks will help us get there! 

Continue growing your games UX skills

Every month I send a playtesting and games user research lesson, just like this one, to help game developers run better playtests, and people start a career in games user research. Sign up to get the next one direct to your inbox:

Better playtesting today

If you’re interested in turbo-charging your playtesting, I’ve worked with game designers, producers, community managers, UX designers, QA managers and solo game devs to make The Playtest Kit.

It brings the expertise of over 25,000 playtest hours into one complete playtest toolkit – making playtesting accessible to game developers who have no time or money.

Learn more at

Find usability issues in games with playtests

Usability issues can ruin games.

When players can’t understand what they are meant to do, where they are meant to go, or what is happening, they get confused, bored and ultimately drop out. This impacts their opinion of the game, retention, and creates an experience that just isn’t fun.

User researchers can help.

One of the core skills for researchers, and one of the focus areas for the How To Be A Games User Researcher book is how to find and deal with usability issues. This helps make games that players understand and enjoy. 

Today we’re going to look at how to use playtests and UX research studies to find usability issues in games.

Get future games user research lessons direct to your inbox

Find games usability issues 

Today we’re looking at usability issues. What we mean by usability is “can players do what they are expected to do”. 

That’s different to opinions about a game. “I don’t like that the weapons break in Zelda” is an opinion.  “I don’t understand why my weapons keep breaking” is a usability issue. Today we’re tackling the latter.

1. Start with understanding the design intent

Usability issues are ‘when the player can’t do what we expect them to do’. To spot that, we need to know ‘what do we expect them to do’.

This involves talking to our colleagues. Designers have an idea about how they expect the game to work, and what they think players should understand and do. You need to learn what their expectations are (the ‘design intent’).

Exactly who to speak to, and what to ask, depends on what you’re testing – which is why our studies start with scoping around research objectives. You may need to talk to combat designers, UI artists, level designers, producers, or others. For an idea of the range of things we may need to understand, look at Liz England’s ‘The Door Problem’. You need to know what the designer would say in answer to every one of those questions.

This can be a lot of chatting.

But it will make the other steps a lot more effective! 

2. Create the right context to create realistic issues

Next comes creating the task. What are we actually going to ask the player to do?

Some tasks can be very broad: “Complete this level” or “Play for thirty minutes”. Others can be very specific: “Craft an upgrade to your horse armour” or “Place this object on the shelf”. Exactly what level to pitch your task at depends on your research objectives, but my preference is to lean towards broader tasks where possible, because it allows unexpected issues to emerge.

If you are moderating the playtest live, you can be looser with the task. Because you are in the room, watching them play, you can jump in with an improvised task at the right time.

Designing tasks in UX tests can be tricky. Revealing too much information will artificially lead players to the solution. Revealing too little (such as missing tutorials) will create artificial situations that won’t occur in the final game. Balancing this needs a lot of thought.

My preference is to be as undirected as possible while meeting the research objectives. “Play through this bit” is fine, as long as players are exposed to the right content, including any tutorials or prerequisite content. This is easier when you are on hand to subtly guide players towards it.

3. Be quiet, watch and look out for deviations

Every time you intervene with players, you are introducing artificial elements to the playtest. Unless you intend to answer questions live for every player post-launch, you should resist answering questions, or revealing information in the session. 

Instead, your attention should be focused on listening and watching. Observe what players are doing in the game, and when they do something unexpected, write it down. Looking back at your research objectives, combined with a good understanding of the design intent, will help you recognise unexpected behaviour.

Write down what you see (we’ll cover good note taking in a future issue). 

When necessary, hand out new tasks to direct players’ attention to the right areas (but still be very careful that the words or instructions you use don’t reveal information the player wouldn’t otherwise have).

4. Ask questions to understand what you’re seeing

Observed behaviour is only half of the story. To understand why that behaviour occurs, we need to know what’s happening in players’ heads.

This requires asking them questions. 

As described above, there is a massive risk that your questions reveal information to the player that they wouldn’t normally have – artificially influencing your playtest. Bland, non-leading questions are required, such as “What is happening now?”. We wrote more about how to ask good questions in a previous issue.

5. Use the impact to rate severity

It’s common to spot a lot of problems, and as covered in the How To Be A Games User Researcher book, researchers go through a structured analysis process to uncover and describe them all.

In order to focus the team’s attention, we need a process for identifying which problem is most pressing. We do this by anticipating what issues will be most disruptive to players when they play.

This extract from How To Be A Games User Researcher explains one approach for doing this:

A method I like is a four-point scale for issues:

  • Critical
  • High
  • Medium
  • Low

Each issue starts as ‘Low’. Then ask three questions about the issue. Each time the answer is yes, raise the severity up a level.

These questions are:

  • Is this something that the player needs to do to progress?
  • Did the moderator need to step in to help resolve the issue?
  • Once the player had overcome the issue, did they fail to avoid it when they encountered it again?

When running your playtest, noting “what happened to the player as a result of this problem” gives you the raw information you need for this analysis and rating process. We will cover analysis in more depth in future issues.
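The scale maps directly onto counting ‘yes’ answers. As a minimal sketch (the function and level names are mine, assuming the three questions above asked in order):

```python
LEVELS = ["Low", "Medium", "High", "Critical"]

def rate_severity(answers: list[bool]) -> str:
    """Each issue starts at 'Low'. Every 'yes' (True) answer to one of
    the three severity questions raises the issue one level."""
    assert len(answers) == 3, "expected one answer per question"
    return LEVELS[sum(answers)]

# Blocks progress: yes; moderator stepped in: yes; third question: no.
print(rate_severity([True, True, False]))  # → High
```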

A chance to practice

I’m sharing a session of a player who is playing Pokemon Unite for the first time. During their play session, they encounter some usability issues with the game.

Watch an unmoderated usability test of Pokemon Unite

As you watch, think about:

  • The design intent (what did the game team expect players to understand or do)
  • The issues the players encounter
  • What questions you’d ask, if the session was live
  • What impact the issues had on the player’s experience

(Feel free to share what you spot with me on Twitter!) 

Be ready for a career in games user research

Get my free ebook of career tips from researchers at top game studios such as EA, Blizzard, PlayStation, Ubisoft, Activision and more. Based on exclusive interviews, they reveal ten essential tips to kick-start your GUR career today.

Plus receive a monthly email with free games user research lessons and curated early-career Games UX and UR jobs.

Sign up now to get the free e-book (no spam, just a nice email from me each month!)

Ten essential tips for new and aspiring games user researchers ebook

Don’t miss the next issue

Every month get an email from me with a free games user research lesson, and the latest entry-level games UX and UR jobs.

Expert playtest moderation – ask unbiased questions

Moderating playtests is hard. The wrong question, or an inappropriate tone can derail your playtest. Accidentally upsetting the player, biasing their response, or revealing inappropriate information can severely reduce the value of the data you are collecting.

But one to one interaction with a player is also a core researcher skill, and one of the most impactful methods we have for understanding player behaviour.

As a skill moderation takes practice (I remember how poorly my first moderation went 🤦 ). This month we share some guidance on how to improve your moderation.

Learn to moderate playtests

Prepare appropriately

Excellent moderation starts with preparation. Before you step into a room with a player, you will want to be clear on what your research objectives are, and have written a discussion guide to prepare for the session.

This will ensure that your session has structure and that your questions are relevant. Even if you have to improvise questions, having the research objectives in front of you will keep them front of mind.

Here is a free template for a discussion guide from my first book, which can be adapted for your playtest.

Consider the space

We want players to be at ease when they arrive for their playtest. The room is an important part of creating a friendly atmosphere where players can give feedback freely.

Think about the room you are playtesting in – we want to avoid the space biasing player’s opinions or intimidating them. Remove the many trophies that say you are an excellent game designer, and the marketing posters to create a more realistic play space.

Also think about the playtester’s experience before they reach the playtest room. How will they know what to do when they arrive at the building? Will someone be there to meet them? Will they know what to say? Spending time thinking about and designing the experience of the playtester arriving at your building will help avoid them arriving unsure or upset, and create a more positive atmosphere for feedback.

Start the session off well

The introduction helps set the tone for the rest of the session. As Meta Quest’s Rich Ridlen explains:

Click through to read more great tips from Rich

This can be rehearsed, and again preparation helps. Use the template to remind yourself what to cover at the beginning, including explaining:

  • Your relationship to the game (“I didn’t make this”)
  • This is not a test (“We’re not testing you, we are testing the game”)
  • What is expected from them (“Today we’re going to play the game, and ask you some questions about it afterwards”)

Don’t help.

As a moderator, your presence is unnatural – when the game is released, you won’t be there to help. This means you have to be extremely careful not to introduce new information into the playtest that the game won’t introduce.

Don’t help players when they are stuck*. We need to learn what players would do without outside help, to recreate the authentic player experience and see whether they can overcome the issue.

*The exception to this rule is you can help if it’s something that isn’t in the game yet, but will be. As long as you don’t forget that this is untested because you helped, you can help players overcome unfinished parts of the game.

**The other exception is you can help if everyone understands and agrees it’s an issue. If one player has the problem, let it occur. If the whole team then understands and agrees that this is a problem that should be fixed, you can consider helping other players overcome it, so that you can see new issues.

Don’t confirm, deny or answer questions.

Part of not helping is not letting players know if they are doing the right or wrong thing. If they ask questions about how the game is meant to work, or if they are doing it right, you shouldn’t answer.

A good phrase is “What would you do if I wasn’t here?”. Or turn their question back on them: “What do you think?”

Ask questions, but be careful with what you ask.

So, if we’re unnatural, why are we in the room at all? Part of our role as a moderator is to ask questions to understand why players are acting the way they are. Just watching will show us what players do, but we have to ask questions to understand why.

Questions can be dangerous, as they can introduce information the player didn’t have. Asking “How did you know that was the right way to go?” reveals that it was the right way to go.

Because of this, my favourite questions are usually very bland, and just get players explaining what is going on (which you can then follow up on): “What is happening currently?” or “How did you realise that?” get players talking without revealing any information.

Also remember to be careful when you ask questions. As Camille explains:

Read more tips from Camille

Probe deeper

We need players to feel comfortable giving feedback. This requires creating a friendly atmosphere, but also listening to the points they raise.

When players say something, or give feedback, as a moderator we need to consider “is this enough information?”. A moderator should always be thinking about what level of detail would be useful to the team. This means asking further questions to reveal enough detail that a designer can take action.

“This game is too hard” isn’t enough detail for a team to action. We need to ask follow-up questions to find out what happened that made them believe the game is too hard, so that the team can fix it.

As Laure says, this doesn’t always have to mean asking a question – leaving space can also help reveal more information:

Read more tips from Laure

Focus on moderation

Moderation is a skill and takes attention. As a moderator, we have to carefully watch players to make sure we’re asking timely, relevant questions. As Francesca explains, it can’t be multitasked.

Note-taking is an equally valuable skill, which takes preparation to do correctly. As Donat describes:

Read more tips from Donat

Moderation is a skill that takes practice. We’ll cover more tips in a later issue.

Support How To Be A Games User Researcher

Thanks for reading. If you like this and want to support, consider:

Have a great month.


How to write a games user research report

This month, we explore what goes into writing a games user research report, and how to communicate research results effectively.

When a tree falls in the woods, does it make a sound? I’m unsure. But I’ve seen plenty of research studies make no sound because of how they were communicated.

The hard work that goes into planning and running the study is undermined by poor communication. Excellent reporting requires a balance of building trust, excellent storytelling and clear communication – today we’ll look at some tips on how to do that (and look at a real report!).

This month we also have an exclusive interview with Player Research’s Marco Alesci, Helen Johnson’s top videos for new researchers and top career tips from researchers at top companies like PlayStation, EA, Activision, Ubisoft and more. Read on also for new entry level games user research jobs.

Write a games user research report

What is a user research report

A games user research report summarises the findings from a playtest or research study – they are the conclusion of all of the work defining objectives, writing tasks, moderating sessions and analysing data. 

Writing a formal report is only one way of sharing research findings with teams. Its strength is that it is a stand-alone document: a report should make sense to someone who didn’t see the research study, or to yourself in six months’ time when you’ve forgotten what you learned in that test.

The importance of storytelling

A dull report that just lists findings will be easy to forget and will have low impact. With time researchers learn the value of sharing fewer findings, but making the ones they share more impactful. 

Having a narrative thread through the report is one way of achieving that. Here are some tips from EA’s Jess Tompkins.

A report is not the only way to share findings. Depending on the relationship with the team, an email or conversation may be more appropriate. 

When deciding how to share findings, remember a report is also not the point of research. We’re running studies to improve games. At all times think about ‘what is the clearest way to communicate my findings with my team’, and use that to inform how you present your research findings – whether you are making a report or not. 

What is in the report

VidyaResearcher covers the benefits of keeping the structure for your report consistent: 

Making a template makes it easier to consistently structure your report. Here’s what’s typically included within the report:

A one-page summary

A report will have multiple audiences. Some people will need all the detail explaining exactly what was discovered and what caused the issues. This information is helpful for the people who have to design and implement fixes. This can include designers, artists and programmers working on the features described.

Other team members don’t need to know the specifics of each problem, and just need an overview of the scale of the issues and how much attention they will need – most commonly important execs, or producers who need to manage the schedule.

Many games user research reports start with a high-level summary of the test, including the most significant findings – what the most important issues were, what areas of the game need the most attention, and the overall state of the game. A good model for thinking about the exec summary is “if someone asked me in a lift how that test went, what would I say?”.


To increase the impact of the work, it’s helpful to explain why the study is relevant and important to what teams need to know today. Here’s a tip from Tom Lorusso:

Again, effective storytelling is important to allow your findings to have the impact they deserve.

Grouping issues

Putting similar issues together makes it easier for teams to follow your findings. It allows teams to section the report and share the relevant sections with the right people to fix them. All of this makes communication clearer, and increases the potential impact of your report. Your research objectives might inspire suitable ways to group your findings – whether that is by game mechanic (such as ‘issues with crafting’) or by ‘type’ of issue (e.g. ‘issues with the UI’).

Think about the ordering of the groups, and put them into a logical order. For linear games, that might be chronological – the order the player encounters them. Or you may consider ordering them by severity – the mechanics with the biggest issues first. 

Within each section, again consider how you are ordering the findings. Most severe first? Or chronological? Think about the experience of the reader, and pick an order that makes sense when read.
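If you keep findings in a structured form (a spreadsheet export, say), this grouping and ordering is a single sort. A sketch with made-up issue records – the field names are hypothetical:

```python
# Lower rank means more severe, so it sorts first.
SEVERITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

# Hypothetical findings from a playtest.
issues = [
    {"area": "crafting", "severity": "Low",      "title": "..."},
    {"area": "combat",   "severity": "Critical", "title": "..."},
    {"area": "crafting", "severity": "High",     "title": "..."},
]

# Group by area, then put the most severe issues first within each group.
ordered = sorted(issues,
                 key=lambda i: (i["area"], SEVERITY_RANK[i["severity"]]))
for issue in ordered:
    print(issue["area"], issue["severity"])
```

Swapping the key to a chronological field instead of `SEVERITY_RANK` gives the order-the-player-encountered-them layout for linear games.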

Each issue explained fully

Proper data collection and analysis should allow you to give appropriate detail to your findings. Describing ‘what is the issue’ isn’t enough by itself. We also need to give enough detail about the causes of the issue so that teams understand where their attention needs to be to fix the issue. Describing the impact on the player will also help teams prioritise, by anticipating the impact of fixing the issue on players.

All of this adds up to a cause/impact structure being common when reporting usability issues – where the intended experience is contrasted to what players actually experienced, such as on this slide: 

Control tutorials may be missed
Cause: The controls appear on screen for a limited time
Cause: The game doesn’t verify that the player has learned the control
Cause: It’s not explained that the tutorials are available on the pause menu
Cause: Some later puzzles require advanced controls to progress, such as rotating cubes
Impact: Players may miss the tutorial, or pass it without learning the control
Impact: Players may have difficulty progressing before discovering the pause menu

This structure works for usability issues – but again isn’t the only way of reporting findings. With richer findings (such as describing truths about players), you should consider other ways of describing what you learned so that the key points are understood by the reader. 

Some teams also share recommendations, or prompts to inspire solutions, on their slides. We’ll cover more about this in future lessons.

Across the whole report, clarity in writing is important. Some tips for writing clearly:

  • Have only one idea or finding per slide.
  • Use uncomplicated words that people use in normal everyday speech. Avoid jargon.
  • Use short sentences.
  • If you have to use a font-size smaller than… 16? you probably have too many words on your slide. If you’re not presenting findings, but trying to make a report that tells a story, aim for nothing under a font size of 30.
  • Pictures and diagrams can make your points clearer – but only if they are understandable. Overwhelming diagrams will quickly lose the audience.

Next steps

We want our research to lead to action – and a next steps section is one way to encourage that. If you intend to run a workshop to help teams apply your findings, you can suggest that as the next step. Or if teams are going to be addressing the issues themselves, you could suggest further research objectives to tackle, or remind teams to re-test to check the issues have been fixed successfully.

See a real research report

In my talk last year on ‘how to get games user research experience without having a job’, I partnered with Cat Floor Studio who kindly allowed me to share the report from our review together. 

It doesn’t incorporate all of the best practices discussed above (where is the storytelling!), and more work could be done on making the most important points obvious. However, to see an example of what a report might look like, take a look at this usability review of The GoD Unit. 

Other reporting tips

Laure Duchamp shared her tips on LinkedIn:

  • Think about your audiences, all of them. If your communication speaks to a variety of stakeholders, chances are it will have more reach and will be used for various purposes.
  • I usually divide it into parts and at the end of each part sum up key insights and recommendations. That way you have more rhythm, and people don’t have to wait until the end to get your point.
  • Make it more alive with images, gifs and video clips! Varying inputs helps keep attention.
  • Add all your « further data » (any complex graph or raw data) at the end, in a part you might not use when presenting. Having extra slides can be good for picky stakeholders who like to dig into the evidence, or just to add more context to your insights.
  • Your report is a material that has to stand and live on its own. It has to be reusable and shareable without you being there to talk it through.

We’ll explore other ways in which research findings can be shared in future issues, so stay tuned! 

Be ready for a career in games user research

Get my free ebook of career tips from researchers at top game studios such as EA, Blizzard, PlayStation, Ubisoft, Activision and more. Based on exclusive interviews, they reveal ten essential tips to kick-start your GUR career today.

Plus receive a monthly email with free games user research lessons and curated early-career Games UX and UR jobs.

Sign up now to get the free e-book (no spam, just a nice email from me each month!)

Ten essential tips for new and aspiring games user researchers ebook

Don’t miss the next issue

Get a monthly email with a games UX research lesson, and the latest entry level games UX and UR jobs.

How to write a playtest survey

Survey design is one of those topics that looks simple, but it’s incredibly easy to make big errors that invalidate your results. They are particularly risky – unlike some other research methods, it’s hard to spot when your data is junk, and your conclusions are unreliable. 

Despite this, they are a very common research method, often used at the end of qualitative playtest studies, and an important skill for games user researchers to master.

In this issue we’ll look at the steps to designing a robust and reliable survey or questionnaire for your games user research study.

Get future games user research lessons direct to your inbox

Design Playtest Surveys

Decide what you want to learn

Before you start writing questions, you need to decide what you want to learn. These are your research objectives, and are different from the actual questions you will ask players. A research objective could be “is the difficulty of our game correct”, but you probably don’t want to ask that question directly to players in those words. 

Research objectives are best gathered by speaking to design colleagues to understand where the biggest risks for the game are, and what we need to learn to make good decisions. Our first #gamesUXchallenge this month gives you the opportunity to practice generating them.

Having identified research objectives, consider whether a survey is the best method for answering them. Surveys are good for measuring things, for example opinions (“Does this level look good or bad?”) or behaviour (“How many attempts did it take to complete this level?”). They are less suitable for deep qualitative feedback, which would require observation and probing to answer (“Why did players take 20 attempts to complete this challenge?”).

Remember – the research method should always be informed by the research objectives. Starting with a method (“today I’m going to run a survey”) will lead to low-impact findings.

Decide your sample

After deciding what you want to learn, think about who needs to answer your questions – who are your real players?

A common mistake I see online is sampling by convenience – e.g. getting whoever you can to fill out the survey. This is a waste of time. If you’re not confident that they are really your players, you’re gathering a lot of opinions and thoughts from people who would never buy your game. Pointless.

Some places to consider looking include forums for competitor games, Twitter hashtags, and Facebook ads to recruit players. However, any recruitment method will introduce a bias into your sample that you should anticipate and account for. Offering incentives – money for taking part – can help make your respondents more ‘typical’.

Putting careful thought into “who are the right people to ask this question?”, “how can I find them?” and “how can I convince them to fill out my survey?” is important for making sure your data is representative of your real players.

Write your questions

Each research objective should have one or more questions that try to find an answer. Some rough examples are here: 

A table mapping research objectives to survey questions, for example:

Research objective: How difficult is this level or challenge?
Matching survey question: “How easy or difficult was this level?”

Scales are a very common way of capturing answers to surveys and can include:

An overall numerical score – asking players to rate something out of 10.

A two-directional scale – with two conflicting ideas at either end of the scale. Make sure that they are true opposites, or this one gets confusing. For example, “very bad” to “very good”.

An agreement scale – asking players if they agree with a statement. (“Strongly disagree” -> “Strongly agree”)

When writing the text for your questions, keep the following points in mind:

  • Will my player understand the question? Do they know what I’m asking about? Do I use language they are familiar with?
  • Will they be able to remember the thing I’m asking about? Is it recent enough for them to have a real opinion?
  • Will they be able to decide on an answer? Is this something they actually have an opinion on? And are they comfortable telling you the answer, or is it socially embarrassing to say?
  • Will they be able to enter the answer that matches their true feeling? Make sure your scale allows them to give their true opinion. Asking them to rate from “neutral” to “good” is no help if they think it’s “bad”.

A really powerful combination is pairing rating questions with an open-ended question asking ‘why?’. This does not replace interviews though, because a survey doesn’t allow the deep interrogation that is often required to really understand what players are thinking or doing.

Some tips for questions:

  • Avoid leading players by including both opposites in the question text, e.g. instead of asking how difficult was the game, ask how easy or difficult was the game.
  • Bold key words in the question, so it’s easy to pick up the meaning from skimming it: How were the graphics in this level?
  • Keep question text short and direct, so they are easy to understand. Rate this level is clearer than Please provide a rating based on what you thought about this level
  • Be clear – use the same words that players do (interviews can help reveal this), and avoid jargon. What did you think about this character’s DPS won’t make sense to many players.
  • As with interviews, a good structure is to go from broad questions to direct questions. Ask ‘overall, how was the game‘ before asking specifics such as ‘how easy or difficult was using the jetpack’
  • Avoid yes/no questions as they are too closed and force players to give strong binary answers. Rating scales, or asking players to explain, allow more nuance to emerge.
  • Keep the survey short.
  • Never ask about future behaviour. Questions like will you buy this game have repeatedly been shown to correlate poorly with real behaviour, giving you unreliable conclusions. It’s much safer to ask about things they have previously done – “What was the last game like this you bought?” – to get accurate answers.

I’ve also gathered some excellent tips for survey design from the community. Find them all at the end of this section.

Pilot your survey

Ask a colleague or friend to answer your questions before you send the survey to players, and ask what they think each question means. Doing this while sitting with them lets you probe each question as they fill it out, and quickly reveals where they haven’t understood your question or the words you use.

Make your playtest survey for real

Use a survey tool to create your survey for real. Google Forms is free, and has some basic survey logic so can be appropriate for simpler surveys. 

More advanced tools, like SurveyMonkey or Qualtrics, can be a good next step once you’ve reached the limit of what Google Forms can do.

Distribute your survey

Send your survey out to participants (or ask them to fill it in after coming in for a playtest). The best method of doing this is probably clear after doing the work to ‘decide your sample’ earlier.

Analyse the findings

Analysis is a whole topic in itself, but before we get started, we need to clean up our data.

I would always recommend exporting the data into a spreadsheet, rather than using the survey tool’s built-in analysis tools, to help explore it. We looked in a previous issue at some tips for how to handle quantitative data.

The data we have collected will usually have some issues with it. Some people will not have filled out the survey properly – they will have clicked through as fast as they can, or given impossible answers – and we want to remove them from the data set. Read through the responses, review the start and end times to see how long people took on the survey, and look out for tell-tale signs such as rating every question the same, to work out which participants’ data needs to be removed.
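If you’ve exported your responses into a spreadsheet, these clean-up checks are easy to script. Here’s a minimal sketch with pandas – the column names, thresholds and data are all invented for illustration, so adapt them to your own export:

```python
import pandas as pd

# A tiny example export -- in practice you'd load your survey tool's CSV
# with pd.read_csv(); every column name here is an assumption.
df = pd.DataFrame({
    "duration_secs": [300, 25, 410, 280],
    "q1_rating": [4, 3, 5, 2],
    "q2_rating": [5, 3, 2, 2],
    "q3_rating": [4, 3, 4, 2],
})

rating_cols = ["q1_rating", "q2_rating", "q3_rating"]

# Flag 'speeders' who finished implausibly fast (the threshold is a judgement call)
too_fast = df["duration_secs"] < 60

# Flag 'straight-liners' who gave the same rating to every question
straight_lined = df[rating_cols].nunique(axis=1) == 1

# Keep responses that pass both checks -- but review the flagged rows
# by hand before discarding them
clean = df[~(too_fast | straight_lined)]
print(f"Removed {len(df) - len(clean)} of {len(df)} responses")
```

Automated flags like these are a starting point for the manual read-through described above, not a replacement for it.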

Analysis and finding meaning in the data is another skill to practice – but we’ll save that for a future newsletter deep-diving into analysis. 

Learn more about writing playtest surveys

When writing this post, I asked the community for their tips. I received so many that I had to spin them off into a separate post – read the expert tips for writing a playtest survey.

Elizabeth Zelle (who kindly submitted some of our expert tips), gave an excellent talk about survey design for games at a previous GRUX conference. Watch Elizabeth’s talk here

I’m also a huge fan of the work of Erika Hall, and it would be remiss of me not to link her post about the dangers of surveys. Essential reading before you write your own.


Happy New Year

2022 is going to be a busy year for me, and so I’m really looking forward to sharing our games user research journey together. Here are some of the things on my mind currently…

Next month is the one year anniversary of the How To Be A Games User Researcher book, and I have been preparing a special gift ready to celebrate – look out for it in the next issue.

It’s also nearly a year of these email lessons. If they, or the book, have helped you get your first games job in the last year, do let me know – I’d love to hear your story!

I’m also sponsoring a few of the games user research and playtesting events this year – I’ll share more as they come around, and their excellent line-ups get announced.

Also early this year I’ll be releasing the playtest kit – aimed at helping developers run better playtests. It’s still at a pre-launch discounted price, so if you’re interested it’s a great time to pick it up now.

Thanks everyone. Do keep in touch on twitter, and I’ll see everyone for more games user research knowledge next month! 


Convince teams to playtest

This month we’re looking at overcoming objections to user research, and convincing teams to playtest. You might assume that this isn’t necessary – that teams understand the value of playtesting or UX research. And that’s somewhat true – you rarely hear outright objections to user research. Instead, teams say “not right now”.

This can be caused by fears of exposing the game prematurely to critique, and misunderstandings about how research can support creativity. In this issue we’ll look at some fears that our colleagues might have, and what we can do about them.

Overcoming objections to user research

Here are some of the most common objections I encounter, and some counter-points to consider.

“We don’t have time to playtest”

Sometimes teams believe that playtesting will take a lot of time away from development. A robust study does take time to prepare – writing a test script, recruiting the right participants, and analysing the data to draw reliable conclusions all take time. But this doesn’t mean that development has to stop.

With good communication about the state of upcoming builds and milestones, it’s possible to prepare the study in advance, agreeing and aiming for a shared test date. Rather than the team waiting for you to prepare the test after the build is ready, the test launches as soon as the build is delivered. Doing this well requires strong communication with our teams (but then, communication with our teams is always important). 

The critical time to minimise is that between the test build being delivered and the researcher sharing some results. Researchers can adapt how they debrief findings, and optimise their analysis process, to minimise this wait. 

“Playtesting is incompatible with creativity”

Teams sometimes mistake what we do with market research or focus testing which aim to discover what is popular. 

This is not what user researchers do – we shouldn’t be trying to make creative decisions. Our studies are focused on testing how effectively games are meeting their creative goals. We learn whether players are having the experience the designers intended. Alongside this, observing player behaviour may inspire game designers to come up with new ideas – but the decisions about “what should this game be” are not ours to make.

When working with teams, we should be very careful to explain this, avoid research objectives that stray into market research, and stay unbiased about the creative direction. We should be helping teams define and reach their creative goals, rather than changing those goals ourselves.

“It’s not ready” 

(thanks Dan Choates for inspiration for this one!)

This assumes that the right approach is one big playtest to evaluate every aspect of the game. That’s risky, as the game doesn’t normally come together until near the end of development, when there’s no time to make changes. 

To address this, teach teams about research objectives – defining what specifically they want to learn from a playtest. Base objectives on what parts they are currently working on. The parts that are not ready can be ignored or worked around, and we can get directed feedback on the elements that are ready. 

Conversations with team members will help expose what they are working on and the riskiest decisions they have had to make recently, and inspire potential research objectives. I believe doing lots of small targeted tests leads to a better outcome than one big playtest.

“It’s not valid if it’s not quantitative” or “we want a survey”

(thanks migor)

Teams may not recognise the value of qualitative research, or may have a preferred research method that they trust more. 

As researchers, the first step we should always take is to think about the research objectives. What do the team need to know right now? What is the best method of answering that question? Picking the right method ensures we get reliable results from our study.

Sometimes a survey or quantitative method is correct. However we need to determine that from the research objectives, rather than blindly pick a method because the team request it.

One of a user researcher’s roles is to educate their teams on the research methods that exist, and the strengths of each. Talk to your teams about the value of qualitative research. Explain how observation and interviews build a deep understanding of ‘why players do that’. Examples can help teams recognise how understanding a player’s behaviour and mental model (their understanding of how something works) is relevant to the game design decisions they have to make.

“Players don’t know what they want”

(thanks GiGi!)

This comes from a misunderstanding of what user research does. Teams may assume we just aggregate player feedback and report that back to game designers.

This isn’t what good user research is. As we discussed when covering creativity, we’re not taking creative directions from players. Our focus is on whether the experience players are having matches the one we want them to have. Demonstrate to teams that we understand their creative vision. Show how we combine data from observation, interviews and perform thorough analysis to see if the vision is being met. This will build confidence that we’re not just reporting players’ opinions.

“We already understand our players”

When I counted last year, I had observed players for over 25,000 hours (admittedly some of that was watching more than one player at a time!). In all of those sessions, I have never run a study where I didn’t learn something unexpected about player behaviour, or the issues they encountered.

Expert reviews, peer feedback and heuristics can help improve games a bit – but I have never seen a test fail to reveal something new. Look for examples of how the big studios with big budgets handle playtesting – by the end of development, Uncharted 3 was running roughly one playtest a week. There must be something in it! 

The most convincing tool user researchers have

The most convincing tool user researchers have for the value of playtesting is actually doing it. Exposing people to real players – allowing them to observe sessions, spot issues themselves, and involving them in the analysis is a fantastic tool for winning converts to the value of user research and playtests. Don’t work in isolation as a ‘black box’ service – be open, collaborative, and people will see the value themselves. 

More on how to do this in future issues.


The end of 2021

I think there won’t be a full issue next month, just a brief update (although maybe a secret bonus feature if everything comes together!). 

Thanks for all your support over this year – subscribing to the newsletter, and picking up the How To Be A Games User Researcher book. I have been really happy with the reception it had this year, and everyone’s kind words. I hope everyone has a great end to 2021, and I’m looking forward to continuing to work together to become better games user researchers in 2022.

Steve (follow me on twitter for more playtest + ux research chat)


Find the fun – measuring enjoyment in games user research

It seems sensible for developers to want to know if their game is fun. Games are entertainment – and so a measure of whether the game is working would be “do people have fun playing it”.

This means that researchers get asked this question reasonably frequently. Today we’ll look at some of the challenges with answering it, and how researchers can handle measuring fun.

Get future games user research lessons direct to your inbox

Finding the fun in games

It’s not hard to come up with a question about fun. You can ask something like…

Question text: “I had fun playing this game”
The question has a scale, from strongly disagree to strongly agree.

Or you can ask…

Question text: “Overall, how would you rate this game?”
The question has a scale, from very boring to very fun.

Or even…

Question text: “Overall, how would you rate this game?”
The scale goes from 1 to 10.

Each of these variations gets you a fine answer, and it can keep researchers busy for hours debating exactly which is the most reliable question wording to use. (My own take is that it doesn’t hugely matter as long as you consistently ask it in the same way when doing comparisons).

And that’s great – you now have a number to represent fun (or enjoyment, or how good the game is).

But what does a number mean?

A number by itself cannot be that helpful. “This game is eight fun” only inspires more questions, including: 

  • “Is eight good?”
  • “What about it makes it eight?”
  • “Why wasn’t it nine?”

I suspect people hope this is predictive of critic review scores, but in my experience that has never been the case – what game reviewers assess is completely different from what players in a study are able to assess.

The number is also heavily biased by the context in which you’ve asked it – especially for high-profile games. You’ve brought some people in to play an unreleased game, in a secret user research lab, which is a fun experience. You’re learning that they’ve had an eight out of ten day out, not that they have played an eight out of ten game.

Using the number as a benchmark

There is some value in using the number to compare to other things. Eight out of ten is meaningless by itself, but when you know another experience scored seven out of ten, it now has meaning – this one is better.

Things you can compare to include:

  • The ‘score’ for one level vs another level.
  • The same game at different stages of development
  • Other prototypes or games at a similar stage of development.

We covered how to do the maths to compare things in a previous newsletter.

Keep in mind though that players will be rating everything about the experience they had, which can make comparison difficult. If comparing your game with something at a different development stage, the more ‘finished’ one will probably rate higher – which is only telling you it’s more finished, not that it’d be the better experience once complete.

As mentioned above, players will also be rating if they are having a fun day out – something to watch out for if you don’t run the test in the same location using the same method, or if you use a different type of player. It’s easy to come to misleading conclusions.

Working out what is fun

I find a lot of the value of asking for a ‘fun’ rating is in the discussion after with players. It can lead to learning why they rated it high or low, which is a lot more useful for game teams than an overall score. Asking ‘why you gave that score’ helps teams learn what elements are working, and what players believe are the biggest issues with the game currently.

How I handle fun scores

How I treat measuring fun is:

  • Use one of the questions above to get a numerical rating
  • Ask some interview questions to understand ‘why they gave that score’
  • Do the maths and make a graph – either giving the number in isolation, or as a comparison to something relevant
  • Combine the graph with the qualitative comments about why people gave the scores, grouped by ‘good things they said’ and ‘bad things they said’
  • Put this all at the end of the report, after behavioural and usability findings, and explain why it’s in a deprioritised section by itself
  • Explain to my game teams that the number is difficult to interpret, because of the influence the study design and development stage has on the score. Give extra caveats if it’s a qualitative study where we have a low number of participants. Contrast it to the behavioural findings earlier in the report which we think are safer to take action on.  
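The ‘do the maths and make a graph’ step doesn’t need to be elaborate. A minimal sketch, using invented scores, that computes a mean and standard error per build – so the comparison carries an honest health warning about sample size:

```python
import statistics

# Hypothetical 'fun' ratings out of 10 from two builds of the same game --
# every number here is invented for illustration.
build_a = [6, 7, 5, 8, 6, 7, 6, 5]
build_b = [8, 7, 9, 7, 8, 6, 8, 9]

def summarise(scores):
    """Return the mean rating and its standard error."""
    mean = statistics.mean(scores)
    # Standard error shows how much the mean could move around with so
    # few participants -- essential context for small qualitative studies.
    se = statistics.stdev(scores) / len(scores) ** 0.5
    return mean, se

for name, scores in [("Build A", build_a), ("Build B", build_b)]:
    mean, se = summarise(scores)
    print(f"{name}: mean {mean:.2f} ± {se:.2f} (n={len(scores)})")
```

Reporting the uncertainty alongside the mean makes it much harder for teams to over-interpret a small difference between two scores.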

Or push for better measures

I asked peers for their opinions on fun scores.

Amanda Jarrell:
I think fun is notoriously difficult to measure because it is such a vague concept! I find that drilling down on what fun means and shifting focus to player behaviour can lead to more accurate and reliable measures of fun.

Amanda Jarrell’s approach is a really sensible way of handling this. Working closely with development teams to push them to more accurate measures is a more effective way of judging if we’re creating the player experience we want to create.

One way of achieving this is getting teams to define the emotional experience they are trying to create, and then forming questions about those emotional experiences, rather than overall fun. This unfortunately blurry picture of the emotional map of Journey is a great example of defining an emotional experience of a game so it can be measured (source: The Dead Pixels). The Guardian newspaper wrote more about emotional game design and mapping emotional arcs here.

This could also include pushing teams to define more objective measures than emotion. “How many times do you want players to fail this challenge?” and “How long do you think this section should take players?” are a lot easier to measure and handle.

Looking for more on fun? Do read John Hopson’s article ‘Why I hate fun’ for a great critique of ‘fun scores’ and a description of some alternative approaches.

Also this month there was some great fun score related discussion in the replies to this tweet by Elizabeth Zelle – read the tweet and the replies.

Being a games user researcher is just explaining why you can't give a game a "fun score" over and over again until you retire.

Some prompts to think about:

  • What are the risks from sharing a ‘fun’ score?
  • What questions should you ask teams who want to run a test to discover “is my game fun?”
  • For a recent game you’ve played, what emotional experience is it trying to create? How would you measure if it’s successful?



Sharing Games User Research Findings

User researchers find problems with games. But it’s usually someone else who has to fix them.

This means communication is one of the most important skills for a researcher to have – and sharing games user research results is critical to having an impact on the quality of games. 

In this issue, we’ll look at some approaches for sharing research findings, and share some tips from the pros! 

Sharing games user research results accurately creates an impact

From our playtests, we learn how the game currently differs from the design intent. Much more crucially, we also understand why it differs – what caused players to not understand, or to do the wrong thing. 

Understanding the why is essential to making the right decisions about how to fix the problems. Usually, it is someone else who actually makes the final decision on what to do to fix it – a game designer, art director, level designer, etc…

Full and accurate communication of the issues, their causes, and their impacts means that our colleagues can make the right choices. 

Making a traditional research report 

The most common way of sharing research findings is to write a report. I have an old example of one on my blog which – although I would do some things differently today – is still a good illustration of what a research report might look like. I’m also sharing a new, real research report as part of an upcoming GRUX Online talk – look for that in a future issue. 

As you will see, the report runs through each of the issues in turn. They are grouped by topic. Matching your groups to ‘what discipline would be working on fixing this issue’ can make distributing the findings to the right person easier. 

For each issue it explains what the issue was, why it occurred, and what impact it had on the player – this information will help teams make the right fix and prioritise the issue appropriately. 

When writing reports, it’s important to write concisely and use plain language – we’re trying to make sure we’re understood. Complicated words don’t impress anyone, and just confuse your readers! Here are some tips on how to write better. 

I would always recommend presenting the report to the team live. People are much more likely to pay attention, will have the opportunity to ask questions if they don’t understand, and it just gets a whole lot more engagement. Although it can be tempting to send a report and say “let me know if there are other questions”, I suspect that this leads to reports being ignored, and reduces the chance the team will react to your findings. Ignored findings make running user research pointless, so let’s try and avoid that when we can.

A report isn’t right for everyone

It’s really easy to fall into the trap of thinking ‘a user researcher’s job is to make research reports’. Our job is to make the game better, and a research report might not be the best way of doing that. 

For some teams, a report isn’t appropriate and would be ignored – instead consider alternative methods of sharing findings:

  • An interactive workshop
  • An email covering the top 5 issues
  • A message on slack
  • A research insight database that they can interrogate
  • Access to the raw data 
  • A conversation 

When starting with a team, think about how much time they have, how they communicate currently, and use that to decide what is the right way to share the results so that they don’t get ignored.

Research has multiple audiences

As well as the team who asked for the study, there are some secondary audiences who will also be interested in aspects of your study. They might need a different method for the findings to be shared with them.

Some of those audiences can include:

  • Executives who don’t need to know the detail, but do need to know if there is a big problem
  • Other teams working on different projects to which some of the findings might be relevant
  • Yourself in 12 months’ time, trying to remember whether you’ve covered this topic in a study before

Many of these won’t have the time to read through all of the detail of your report – consider some of the other methods that might be more appropriate for sharing research findings with them.  

Some prompts to think about:

In interviews, you are likely to be asked about your experience sharing research findings. Some aspects to think about in your answers include:

  • How did you decide what was the right method of sharing the findings?
  • Did you understand the needs of the audience and use that to inform your decisions?
  • How did you evaluate if you had been understood correctly?

Some research results tips from the pros

I asked the community if they had any tips for a successful debrief. Here’s what they said. 

I like giving my results a hierarchy that relates to the research problem. That way we circle back to the research questions and it helps us stay in scope. I also use a usability severity rating attached to insights so that it can have an impact on priorities in the debrief with designers :)

Laure described how prioritisation is important for making sure the results are relevant. A common mistake I see (and make myself) is trying to fit in all of the findings and make them all prominent, when all the team really needs is ‘what is the most important thing we need to do next’ – which Laure’s method addresses.

For a usability rating scale, I personally like using this severity scale described by userfocus – but others exist too! 

You should definitely make sure to know for whom you are writing your report. The vocabulary you use might be different based on who reads it. Also, the level of detail might vary based on how much time readers are willing to dedicate to your report. It's also a good idea to have quick takeaways at the very beginning for people who don't have time, or to pique the interest of others.

As Raphael says, understanding your audience is incredibly important. At the start of a project, make sure you know who will be interested in the results, so that you can tailor your delivery to them.

Playtesting at Develop Conference

A busy month ahead for me. I’ll be talking at Develop Conference next month sharing some lessons on running affordable playtesting for teams who can’t afford full-time research support. 

Steve Bromley Develop - Better Playtesting for Indie Developers

Do come and say hi if you’re attending! (and if it’s a topic you’re interested in, do sign up for updates on when I publish more resources to help indie teams)

Have a great month and good luck on your games user research journey!

Steve (follow me on Twitter for more games user research news throughout the month!)

Get a free games user research career guide

Get my free ebook of career tips from researchers at top game studios such as EA, Blizzard, PlayStation, Ubisoft, Activision and more. Based on exclusive interviews, they reveal ten essential tips to kick-start your GUR career today.

Plus receive a monthly email with free games user research lessons and curated early-career Games UX and UR jobs.

Sign up now to get the free e-book (no spam, just a nice email from me each month!)

Ten essential tips for new and aspiring games user researchers ebook

Don’t miss future issues

Thanks for reading this issue. If you’re reading online, and want to get one email each month with a games user research lesson, and entry level games UX and UR jobs – sign up here.


Remote games user research

For the first five years of my career, most of the research studies I ran were in person. It was much easier to bring people to us, rather than transport our technically complex setup to them.

This had started to change for me prior to the pandemic, and then exploded last year when we could no longer safely bring participants to us. Now remote games user research is an essential part of our toolbox. In this issue we explore how to do it, and some of the things to look out for.


How to run games research remotely


Remote games user research is when participants can take part at home, rather than coming to us. That’s always been the case for large-scale beta tests, which use analytics and surveys to gather feedback. It has traditionally been challenging for earlier stage studies, like a typical usability test. 

There are some advantages to running usability studies remotely. Previously, our studies were limited to ‘people who could travel to us in a reasonable timeframe’. Running them remotely allows us to overcome our geographical bias, and recruit people from around the world to take part. Remote research makes the logistics of accessibility testing much easier too – it allows us to find more people with access needs who can take part in studies, and doesn’t put the burden on them to travel to us.

However, there have always been some difficulties with running these studies remotely. Many studios are very secretive about their games, and the risk of leaks is increased when we can’t keep an eye on everything that our participants are doing. There are also technical and logistical obstacles that are more difficult remotely. It’s much easier for me to install a prototype of a mobile game on my phone and give the phone to a participant than to get them to install the prototype successfully on their own phone. 

Many of the challenges have got easier as specialist tools have developed – we’ll look at some today.

How to do remote games user research

Live moderated research

Moderated research – where you are able to speak to the participant live and ask questions – is often the best way of gathering the most data possible from your playtest. 

Many teams have been exploring using Parsec to let remote players stream a build running on your own device, similar to how Google Stadia works. This solves some of the technical issues (because the game is running on your own device, it doesn’t need to be set up on their computer). It also reduces some opportunities for leaking, so can be preferable to installing the game on the playtester’s computer. There is a video of how Parsec works on their website, as well as a case study of Ubisoft using it to distribute builds for a marketing event.

If your team are comfortable with participants installing the game on their own device, conference tools like Google Meet allow you to view the player’s screen, and talk to them live. It also has the built-in ability to record the session. Google Meet requires very little technical knowledge from the player, so is simple to set up – and it’s free! 

Unmoderated research

Another alternative to consider is unmoderated research – asking players to play in their own time, and receiving a video recording. This has some downsides – it’s not possible to ask questions in reaction to the behaviour you are seeing in the session. However, it can often be logistically easier to organise.

Some teams use automated services like PlaytestCloud to do this. This allows teams to specify a ‘type’ of player, upload a build, and receive videos of people playing it. This eliminates many of the logistical challenges of distributing games and finding players and makes running unmoderated user research studies easier. 

However, not every team has the budget to outsource this (or is working on a mobile game), and some will have to recreate this setup manually. This can be done using tools like OBS or dscout (recommended by @eddiepearson) to capture videos from players, combined with a free survey created on Google Forms.

This kind of unmoderated research can allow us to see many more players than we could observe one at a time. It can be very powerful for combining qualitative feedback (‘observing what they do’) with quantitative feedback from surveys. 

Non Disclosure Agreements

The risk of leaks is difficult to avoid entirely. Non-disclosure agreements can warn participants and reduce the likelihood of them sharing confidential information. This template from the research agency ping-pong will allow you to create your own NDA, and includes a plan to get it signed remotely. 

Some remote research tips

I asked others on LinkedIn for their tips about remote research. 

Christian Ress from PlaytestCloud gave some great tips for the study design of remote studies (and good advice to run a pilot. Pilots are always an essential part of my in-person research plan, but easy to forget when working remotely!).

For a different take to addressing the issues from the pandemic, Player Research spoke last year about how they had adapted their lab to be safe for in-person studies, and I know this is something other research teams are looking into. 



Have a great month! 


Quantitative Research for new games user researchers

Let’s start with my confession. I have been a user researcher working with games for over ten years. I have run hundreds of studies, and overseen thousands of hours of playtests (over 25,000 player hours at last count!). And yet, I know very little about stats.

There are two quantitative research things I know how to do. Today, I will explain both.



Comparing two sets of numbers

The first thing is how to compare two sets of numbers. I was taught this by Cyril and Mirweis at PlayStation, and I am grateful to them both for teaching me the only stats I know. This is useful when comparing things such as ‘how many times did the player fail?’ or ‘how long did it take people to complete this level?’

This method is appropriate for when the data is numerical rather than categorical (or ordinal). Here’s a short explanation of what that means. 

When you have some numerical data, it’s quite common to want to compare it. This allows you to learn “is there a difference between these two things”, and then inspire conversations such as “do we want players to fail more times on this level than on the next one?”.

To do this, you want to find the average, and then work out some confidence intervals to anticipate whether the difference between them is real or whether it was potentially caused by not measuring enough people. 

So, after counting how many times people died on level 3, you can take an average – which looks like this. 

We can see that on average, players died around 2.5 times on level 3. 
We can then do the same thing for the next level.

(This is probably a good moment to mention there is a template that does the maths for you later in this post…)

Looking at the average for Level 4 shows us that people died on average more often on Level 4 than they did on Level 3. 

But we don’t know if this is because Level 4 causes more deaths, or just random chance that it occurred in this study.

To identify that bit, we calculate confidence intervals. Which looks like this…

And we can see that the confidence intervals (the uppey-downy bits) overlap. The top of Level 3 overlaps with the bottom of Level 4.

Level 5’s confidence intervals do not overlap with any of the other levels. If the confidence intervals don’t overlap, there is a real difference between them. It’s true that more people died, and will die, on level 5 than level 4.

This hopefully means that Level 5 is harder – although you should watch people play to understand why the difference in deaths actually occurred.

If the confidence intervals do overlap, we can’t tell if there is a difference. This is the case for Levels 3 and 4. This either means that the number of times people die is the same, or that we haven’t seen enough players to draw an appropriate conclusion.

(There are probably errors in the terminology above, but as I said, I know little about stats – I just know how to compare two sets of numbers).

I use this all of the time – to count and compare deaths, completion time, etc. I made a template that you can duplicate to see the formulas required, and to have a go at doing it yourself.
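The whole method above – take the averages, compute confidence intervals, and check for overlap – can be sketched in a few lines of Python. This is a stdlib-only sketch with made-up death counts, using a normal approximation for the interval (a t-distribution is more accurate for small samples); the spreadsheet template has its own formulas:

```python
from math import sqrt
from statistics import mean, stdev, NormalDist

def confidence_interval(samples, confidence=0.95):
    """Return (mean, lower, upper) using a normal approximation."""
    m = mean(samples)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 for 95%
    margin = z * stdev(samples) / sqrt(len(samples))
    return m, m - margin, m + margin

def intervals_overlap(a, b):
    """True if two (mean, lower, upper) intervals overlap."""
    return a[1] <= b[2] and b[1] <= a[2]

# Hypothetical death counts per player on two levels
level_3 = [2, 3, 1, 4, 2, 3, 2, 3]
level_5 = [6, 8, 7, 9, 5, 7, 8, 6]

ci_3 = confidence_interval(level_3)
ci_5 = confidence_interval(level_5)
if intervals_overlap(ci_3, ci_5):
    print("Can't tell if there's a real difference - maybe test more players")
else:
    print("The intervals don't overlap - a real difference between the levels")
```

With these example numbers the intervals don’t overlap, matching the Level 5 case described above.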

Go deeper on quantitative research

Beyond this one technique, I’ve found two other tools very helpful.

Adjusted Wald calculators like this allow you to state your completion rate (e.g. 3 out of 10 people encountered this issue), and from that anticipate how many people in the real world would encounter the same issue (between 10% and 60% apparently).
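If you’d rather compute the interval yourself than use an online calculator, the adjusted Wald method is a short formula: add z²/2 successes and z² trials to the observed counts, then take a normal interval around the adjusted proportion. A stdlib-only sketch using the 3-out-of-10 example:

```python
from math import sqrt
from statistics import NormalDist

def adjusted_wald(successes, n, confidence=0.95):
    """Adjusted Wald confidence interval for a proportion (as a (low, high) tuple)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 for 95%
    n_adj = n + z ** 2                              # adjusted trial count
    p_adj = (successes + z ** 2 / 2) / n_adj        # adjusted proportion
    margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

low, high = adjusted_wald(3, 10)
print(f"Between {low:.0%} and {high:.0%} of players may encounter this issue")
```

For 3 out of 10 this gives an interval of roughly 10% to 61% – in line with the calculator result mentioned above.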

And the book ‘Quantifying the User Experience’, which has lots of nice decision maps like these that tell me what tools I should (and shouldn’t) be using … and includes a crash course in stats to explain how to do them! 

(Picture from the book ‘Quantifying the User Experience’).

Avoid common quantitative research errors

The second thing I’ve learned is a collection of things not to do. Recognising some common stats errors helps me know when I should seek out someone better at stats than me to help out. 

Common errors to avoid include:

  • Don’t do the kind of maths I described above on ordinal data (such as Likert scales). People often do, and get away with it, but it’s somewhat inaccurate because you’re treating categories as if they were numbers.
  • Think about the sampling bias you have created in your study, and don’t over-emphasise how representative your conclusions are
  • Don’t assume that because you are measuring what players say they think or do, you are actually measuring what they think or do.
  • Recognise that when you are limiting the options you allow people to select from, you are limiting the range of results you will get back, potentially distorting the truth.
  • Avoid dogmatic rules about sample sizes. There are lots of rules out there that have become dogmatic (‘quant studies need 30 users’, ‘qual studies need 5 users’), and many people repeat them without understanding the reasons behind them. Understand why those guidelines exist, think about what you are trying to learn, and make conscious decisions rather than following ‘rules’.
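As an illustration of the first point: for Likert-style responses, reporting the median and the full distribution avoids pretending the categories are evenly spaced numbers. A minimal sketch with hypothetical 1–5 responses:

```python
from collections import Counter
from statistics import median

# Hypothetical 1-5 Likert responses to "I enjoyed this level"
responses = [5, 4, 4, 2, 5, 3, 4, 1, 5, 4]

# Summarise with the median (ordinal-safe) and the distribution of answers,
# rather than a mean that treats the categories as numbers:
print("Median response:", median(responses))
print("Distribution:", sorted(Counter(responses).items()))
```

The distribution is often the most useful part to show a team – a median of 4 can hide whether responses were clustered or polarised.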

The job is not just ‘qualitative research’

I sometimes encounter the idea that user researchers are synonymous with qualitative research. I don’t think that is appropriate or correct. Even if you are more comfortable with qualitative research, you shouldn’t allow your skillset to determine the method you apply for answering research questions. 

Instead always lead with ‘what does the team want to know’, and then ‘what is the most appropriate way of discovering that’. If that method isn’t one you are comfortable with, use it as an opportunity to learn how to do a new thing, ask for help from the community, or bring in some help from someone who is comfortable with it. Our job is to “help the team make evidence-based decisions”, regardless of the methods we are most comfortable with.

What quantitative research skills should I be ready for in the job interview?

If you can answer the following questions, I would say you would be a stand-out candidate…

  • What is a p-value?
  • How would you compare the difficulty between two levels? What would you measure, and how should that be interpreted?
  • How would you measure if players are enjoying a game?
  • How would you handle being asked ‘I think this study should have a larger sample size’?

You will notice that these questions are often not about ‘how do I do the stats’, but are much more interested in ‘when is quantitative research appropriate, how should it be applied, how should I explain things to my colleagues, and what are the caveats for this kind of work’. Which I think is where the real challenge lies! 

Thanks, and hello…

Thanks for making it to the end, and particularly thank you to all of the new subscribers – we’ve had over 500 people sign up to this newsletter over the last few months. Many of the new readers joined this month, so welcome to all the new people starting their games user research careers.

I’ve written a book about how to be a games user researcher – do take a look if you haven’t already. If you have, I’d really appreciate a review on Amazon – it has a huge impact on the book, and I value it very much.

As always, do email me or tweet me with feedback, questions, etc and I’ll see everyone next month! 
