Run a playtest with your Discord community

The ultimate guide to running a playtest with your Discord community

I’ve previously been very dismissive of playtesting with Discord communities. 

I regularly speak to teams who have tried to run a playtest on Discord. They describe low engagement, difficulty getting players to fill out surveys, shallow responses and other data quality issues that plague Discord tests. 

As a user researcher, I’m also very aware of the risks of sample bias – drawing conclusions from players who don’t represent the majority of your audience. This can lead to missing major issues, and creating a game that doesn’t appeal to your intended audience – the complete opposite of what we’re trying to do with games user research.

However, regardless of my reservations, plenty of teams are running playtests with their communities. For a lot of small indie teams working on their first projects, time is short and the budget doesn’t stretch to running more rigorous studies.

So I want to help.

In this article we’ll look at how to make the most of your community playtests, and how to draw safe conclusions from Discord feedback to improve your game.

What are the risks of relying on your community for playtesting?

Your community is not the same as your ‘typical’ players. The fact that they were willing to join your Discord means they are more engaged than most of the people who will buy your game – and their feedback will differ from that of the other 95% of your audience.

That’s going to change their opinions and behaviour – and can lead to misleading conclusions from your playtests. Testing solely with your community can mean misjudging whether your tutorials are effective, misunderstanding what motivates players to play games like yours, and creating balance issues with difficulty or pace.

I’ve heard from teams who have tuned the difficulty of the game based on community feedback, accidentally making it too difficult for typical players, tanking their retention of new users.

I’ve previously said that these problems are too significant to make playtesting with your community worthwhile (other than as a marketing technique for creating an engaged community).  

However, a lot of teams don’t have a choice – and are going to run playtests with their community anyway (it is valuable marketing, so worth doing for that alone).

To make the most of Discord playtests, we’re going to look at how appropriate planning, sampling and analysis can help draw reliable conclusions from community playtests. 

Work out your playtest priorities

Playtests end up messy and unfocused if you’re not clear about what you want to learn from them. Left unchecked, this can lead to playtesting becoming a waste of time. 

“What do players think about my game?” isn’t concrete enough – to make effective use of your community playtest, you need to be very clear on what you hope to learn.

Working out the playtest priorities is a great activity to do with the other decision-makers in your team. Defining these collaboratively will build a clear shared vision of the game you are trying to make, and highlight discrepancies before they cause more problems later. 

I’ll go through the steps for doing this, with examples for a collectable card game like Marvel Snap.

Identify your playtest priorities

What am I working on? 

Start by listing out all of the current development priorities – the things you are actively working on, or have recently completed. Often this already exists in a task list from JIRA or some sprint goals, to remind you of the current focus of your development effort. 

What do I expect players to do, think or understand?

Hidden within each of these priority areas are assumptions about the experience you think players will have – which bits you think players should find difficult, which concepts you want them to understand, and which moments should provoke specific emotional reactions.

For example:

“From this early battle, we expect players to learn the benefits of retreating from battles”

“From this card, we expect players to believe it’s powerful and can turn the tide of the game”

“In the final round of each match, we expect players to feel a peak of excitement”

“When encountering players using a discard-heavy deck, we expect players to get stuck until they realise which cards are an effective counter”

“We would like players to get hooked on receiving new cards, and then feel their progress is slowed past level 500, so that they consider purchasing a microtransaction around this point”

The first step for any successful playtest is articulating these assumptions, so that you are clear about the intended experience.

Work through your list of recent development priorities, and articulate the intended player experience. These can be captured on whiteboard software like Miro, and the discussions and agreement around them can become a powerful alignment tool for creating a shared vision across the team. 

Once you’ve got your list of ‘intended player experiences’, it’s time to rank them by risk.

Which of those are most important to de-risk?

There’s never time to playtest everything in busy development schedules. You need to make smart choices to decide what is worth playtesting now, vs what can “go on the backlog” (never to be seen again…)

The criteria I usually recommend for ranking by risk are the following three questions:

“Is this a core part of the experience?” – is it something that players have to do, or have to experience, for the game to be a success, or is it a secondary mechanic?

“Are we able to make changes?” – if we found problems, could we react to them, or is it too late?

“Is this risky or untested?” – you might be an experienced team who have worked on similar mechanics or features before. If you’ve done it before, and are confident in your execution, that can be a reason to deprioritise. 
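
To make the ranking concrete, here’s a minimal sketch (in Python) of one way to score each intended experience against those three questions. The priorities, the yes/no scoring, and the scores themselves are placeholders of mine, not a formal method:

    # Score each intended player experience against the three risk questions.
    # 1 = yes, 0 = no. Names and scores below are placeholders.
    candidates = {
        "Players learn the benefits of retreating": {"core": 1, "can_change": 1, "untested": 1},
        "Players feel an excitement peak in the final round": {"core": 1, "can_change": 1, "untested": 0},
        "Players counter discard-heavy decks": {"core": 0, "can_change": 1, "untested": 1},
    }

    # Highest total risk score = playtest first.
    for name, scores in sorted(candidates.items(), key=lambda kv: -sum(kv[1].values())):
        print(sum(scores.values()), name)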

What should I test?

Using this process, you can work out what things are most important to playtest first. Try to end up with a list of 5-10 priorities for your first playtest, and run multiple playtests if necessary. 

Those priorities can be written up as ‘research objectives’ – questions you hope to answer in your playtest. 

These can often be framed around players’ understanding, opinions or ability – some examples include:

  • Does this tutorial teach people when they should retreat?
  • Do players recognise the value of this card ability?
  • Do players understand how to counter decks built around discarding?
  • Do we see an emotional peak during the final round of each battle?

Focus makes successful playtests. By starting with clearly defined objectives for your playtest, you are much more likely to get useful conclusions at the end of the process.  

Segment your audience

We’re assuming for this playtest that you already have a Discord community ready to use. (If not, get my course on finding playtesters for free by signing up to my newsletter.)

I introduced the idea of sample bias above. When using an existing community, the biggest influence on player behaviour and feedback will be whether they have played your game before – so we want to minimise how much that bias skews your results.

As a result, I’d recommend splitting your playtest audience in two, based on whether this is their first time or whether they have playtested your game before. We can then match each group to the kind of objectives those players are best suited to answer.

First-time playtesters

Players who are seeing your game for the first time are *much more valuable* than playtesters who have already played a previous build.

Don’t waste them – my recommendation is to save these players for qualitative research: deep one-to-one sessions probing their behaviour and understanding. This is particularly appropriate for research objectives focused on understanding, such as “Does this tutorial teach people how to retreat?” or “Do players understand what this card does?”

For each of your objectives, come up with one or more tasks to set players – tasks that will test their ability and understanding, and give you raw data about the objective – for example “pick an appropriate selection of cards” or “complete this match”.

To schedule the sessions, use a calendar tool like Calendly to create slots players can book. Then, during their booked session, provide them with a link to the build, screen-share with them over Discord, and get them to play your game while you watch.

While running the session, your role is largely to sit back, be quiet, and observe. Ask bland questions to understand what’s happening in players’ heads, but be very careful of artificially introducing ideas or concepts that players haven’t learned from the game.

Some potential questions to get players talking and expose what’s happening in their heads include:

  • What’s happening currently?
  • Why are you doing that? 
  • What led you to that conclusion?

Then wrap up your session with some questions to get their overall perspective of what they’ve seen:

  • What were your thoughts overall?
  • What stood out as particularly good?
  • What stood out as bad?
  • What was confusing?
  • What was more difficult than you expected?
  • If you could change one thing about what you’ve seen, what would it be?

You might think 1:1 sessions with players sound like a lot of work. The good news is that you don’t have to run many of them to get tremendous value.

For research objectives around understanding or ability, even seeing 5 players in a 1:1 setting will be much more valuable than surveying hundreds of players – any designer or developer who has watched these sessions will agree, and will have endless stories about how much they took away from each one. Learn more about how to run your first moderated playtest.

Re-using former playtesters

For players who have played your game before, you can no longer explore objectives around understanding. But we can still get useful quantitative (counting) data from them – which is appropriate for research objectives about difficulty or balance. 

This quantitative data is most useful for benchmarking and looking out for anomalies – are any missions rated much more difficult than others, or any cards rated much weaker? It is also a way of exploring opinion data at scale – where is the consensus about what players like or think?

Because we’re now looking at scale, we want feedback to be structured. Surveys offer that structure; asking people to share feedback straight into a Discord channel creates groupthink, and is a lot more difficult for you to process. The survey itself can be built in Google Forms for free – or, if you have development time available, you can add the ability to give feedback within the game itself, for example a custom button to send feedback, or in-game survey questions that deploy at relevant moments.
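
As a sketch of that in-game option: Discord webhooks accept a simple JSON payload, so a feedback button could post straight into a private channel your team watches. This is a minimal Python example – the webhook URL, function name and message are placeholders, not a drop-in implementation:

    import requests  # third-party library: pip install requests

    # Placeholder URL - create a real one in Discord via
    # Server Settings > Integrations > Webhooks.
    WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"

    def send_feedback(player_id: str, message: str) -> None:
        """Post one piece of in-game feedback to a private Discord channel."""
        payload = {"content": f"[{player_id}] {message}"}
        response = requests.post(WEBHOOK_URL, json=payload, timeout=5)
        response.raise_for_status()  # surface failures instead of losing feedback

    send_feedback("player-123", "The final battle felt too easy")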

For each of your research objectives, draft one or more questions that explore it for a survey. For example:

For the objective “Do players feel the difficulty increases appropriately in each battle?”, we could regularly ask players to rate the difficulty from too easy to too hard on a scale, and look for discrepancies.

For the objective “Do we see an emotional peak during the final round?”, we could ask players whether they agree or disagree with the statement “The final round was exciting”, and compare the response to other rounds, or gameplay sections.
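
Once responses are in, the analysis for that second example can be as simple as comparing average agreement per round. A minimal sketch in Python, assuming a 1–5 disagree/agree scale and made-up responses:

    from statistics import mean

    # Hypothetical responses to "The final round was exciting"-style statements,
    # on a 1 (strongly disagree) to 5 (strongly agree) scale, grouped by round.
    agreement = {
        "round 1": [3, 2, 4, 3, 3],
        "round 2": [3, 4, 3, 4, 3],
        "final round": [5, 4, 5, 4, 5],
    }

    for round_name, scores in agreement.items():
        print(f"{round_name}: mean {mean(scores):.1f} (n={len(scores)})")
    # If the final round doesn't clearly sit above the earlier rounds,
    # the intended emotional peak probably isn't landing.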

An example of some generic end-of-level survey questions can be seen here, which you are free to take and adapt, but I would strongly recommend creating bespoke questions for each of your research objectives:

Playtest – End of Battle Survey Example

I’ve written more about how to write good survey questions – here’s a full article on it. 

To give multiple players access to the build at the same time, while maintaining access control, Steam Playtest is a good way to distribute your build (learn more about tools here). You can then use your Discord to share the survey link for feedback.

Normally I wouldn’t recommend using playtests for QA: QA is a task that can be done by a developer, whereas developers can’t anticipate the player experience or discover usability issues themselves. However, with these ‘re-used’ playtesters I’d also be open to collecting bug reports – and using community playtests for bug-hunts.

Increase participation in your playtest

The biggest barrier you will face is a lack of interest from people in taking part, leading to few participants and poor-quality, shallow responses.

Here are a few tips for increasing participation in your playtest.

  • Don’t disregard qualitative research. With first-time players, I guarantee that getting a few one-to-one sessions scheduled with players will tell you infinitely more than a giant survey. It feels scary to talk to real players, but it is worthwhile – and it’s easier to get players to commit and follow through than with an anonymous survey.
  • Keep your surveys short. The advantage of prioritising what you need to know from this playtest is that you don’t have to ask everything – you can focus on just a few key questions and still learn things relevant to your development priorities. Shorter surveys get more responses. Fewer than fifteen questions, please!
  • Incentivise players. Offer something for their participation. When running studies for publishers, we usually pay people $50-100 to take part in a one-to-one session, or less for a survey, because our time (and getting the right kind of player) is valuable. That’s out of the budget range for many small teams, but some cheaper alternatives can be a gift card, a competition entry to win money, a credit in the game, a code for another game, or in-game rewards (like in-game currency). 
  • Deadlines. People like to leave things to the last minute. Create deadlines (and move them back after the deadline has passed, if necessary, to create a ‘second bump’) to push people from “I’ll participate one day” to “if I don’t participate today, I’ll miss it”.
  • Give updates. Telling people what happened as a result of their participation in a playtest makes the process feel worthwhile, and makes people feel like they contributed.

Incentivising people, and getting higher-quality engagement and participants, greatly improves the return on the time and effort invested in organising a playtest.

Creating meaning from your playtest data

Playtests generate a lot of data. Depending on your method, you might have objective data, such as:

  • Observations (things you’ve seen players do)
  • Stated understanding (players’ explanations of how they think things work)

and subjective data, such as:

  • Stated opinions (things players have said they like or dislike)
  • Ratings (players have given a rating in response to a question)

Each of these types of data needs to be handled in a different way.

Handle objective player data

Observations are the simplest type of data to handle. You have seen the player do something that is contrary to your design intent – they didn’t spot what they were meant to do, or didn’t learn a crucial mechanic.

Playtests will also generate objective statements from players on their understanding. During the session players are going to explain how they think mechanics work – e.g. “I unlock more cards by increasing my collector level”. Sometimes their understanding is correct – great. Other times their understanding is incorrect. 

For both of these types of usability issue, each observation or statement could represent a problem that impacts many of your players. Take all of the observations that represent problems your players will experience, and for each one perform a simple prioritisation to decide which you have the capacity to act on.
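
One lightweight way to run that prioritisation is to score each issue by severity and by how many playtesters hit it, then work down the list until you run out of capacity. A sketch with placeholder issues – the severity scale is an assumption of mine:

    # Placeholder usability issues from a playtest.
    # severity: 1 = cosmetic, 2 = hampers play, 3 = blocks progress.
    issues = [
        {"issue": "Didn't notice the retreat option", "severity": 3, "players_affected": 4},
        {"issue": "Misread the collector level tooltip", "severity": 2, "players_affected": 2},
        {"issue": "Card text hard to read on small screens", "severity": 1, "players_affected": 5},
    ]

    # Biggest severity x reach first.
    for item in sorted(issues, key=lambda i: -(i["severity"] * i["players_affected"])):
        print(item["severity"] * item["players_affected"], item["issue"])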

Handle subjective player data 

Subjective data requires some extra steps. You should also remember that you are the game designer, and in control of the artistic vision. Make conscious choices about whether to act on subjective opinions, so that you retain control of the vision and don’t fall into ‘design by committee’.

For player opinions, like “this card is too powerful” you want to see where the consensus lies. Performing a thematic analysis will help identify clusters of opinions, which you can then compare to your team’s idea of the intended experience, and decide whether that opinion is the one you intended to create. 
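
If you tag each comment with a theme as you read through the feedback, finding those clusters becomes a simple tally. A minimal sketch, with made-up tags and comments:

    from collections import Counter

    # Hypothetical survey comments, each tagged with a theme during analysis.
    tagged_comments = [
        ("card too powerful", "Nothing counters that card late game"),
        ("card too powerful", "It wins the match on its own"),
        ("pacing", "The middle rounds drag"),
    ]

    # Count comments per theme, largest clusters first.
    theme_counts = Counter(theme for theme, _ in tagged_comments)
    for theme, count in theme_counts.most_common():
        print(f"{theme}: {count} comments")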

Before acting, combine the subjective opinions with observation data, or quantitative evidence from the game itself to understand why those opinions exist, and pull the right levers to change them. 

Rating data is best used for benchmarking, and comparing against other games, other sections of gameplay, or previous versions of builds. Here’s more on analysing quantitative data, such as ratings. 

Want to spend an hour watching me analyse playtest data in a real-world example? Who wouldn’t. Here’s a video of me live-streaming the analysis of some qualitative and quantitative playtest data.

Many small tests > One big test

The top mistake teams make is holding back playtesting and doing ‘one big test’ at milestones late in production.

I understand why this happens – it’s harder to get builds together, or even give testing the headspace it requires, outside of milestone builds. However, it does massively increase the risk that you’ll be stuck with problems that are too late to fix.

Following the process above to design smaller, tightly focused tests around only the top-priority objectives allows many mechanics to be tested and de-risked in isolation, leaving only major balance issues for the big milestone builds – which are then simpler to resolve. Moderated playtest methods let you help players past bugs or unfinished areas, focusing only on the specific objectives that are the team’s current focus.

This unlocks rapid iteration and avoids getting stuck with baked-in problems. 

Where possible, push towards more frequent, smaller tests rather than ‘one big playtest’.

One last thought…

Having a community of players on Discord also unlocks continuous discovery – the opportunity to always learn more about your players, and their purchase and gaming behaviour. 

Why not schedule one interview a week with a player of a competitor game, and ask them a bit about what they play, and why they play it, and how it fits into their day? 

Drip-feeding this knowledge throughout development will make the whole team smarter, inform difficult calls, and help designers and developers make better decisions first time – reducing rework.

 
Any questions about running your playtest on Discord? Drop me an email and I’ll get them answered.
