Webinar recap: AI can scale support. Can it keep it human?

Olivia Doboaca

Table of Contents:

  1. How to use AI in player support without the hype
  2. The feedback translation problem
  3. Scaling support without making everyone miserable
  4. What separates good support from average
  5. FAQs

Greg Posner hosted the first Player Driven workshop last week with Veronica Cherenkova from EasyBrain and Conor McGinley from Twin Harbor Interactive. The focus was on player experience and how support teams are handling the current wave of AI tools while trying to keep things human.

Veronica leads support at a mobile casual games company that scaled from startup to unicorn with over two billion installs, while Conor runs customer support for long-term strategy games where communities need to stick around for weeks or months at a time.

The conversation was refreshingly honest about what's working and what isn't when it comes to automation and scaling support operations. Watch the entire session here:

How to use AI in player support without the hype

When you're handling about 20,000 tickets a month, you already have agents to send replies, and you can read any individual ticket yourself to understand how a player is feeling.

Where AI becomes useful is reading through all those tickets and summarizing what's in them. Teams can review the summary and check if it matches what they're seeing. If the AI says players are complaining about features that don't exist, then something is wrong. When it captures the general vibe correctly, then you have quantifiable data without anyone needing to read thousands of tickets manually.
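To make that concrete, here's a minimal sketch of what that summarization pass can look like. It assumes a CSV export of tickets with a "body" column and uses the OpenAI Python client; the file name, column, model, and prompt are placeholders, not anything the speakers prescribed.

```python
# A sketch of batch ticket summarization, assuming a tickets.csv export with
# a "body" column and the OpenAI Python client. Model name and prompt are
# placeholders.
import csv
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def summarize_tickets(path: str, batch_size: int = 200) -> list[str]:
    """Summarize ticket bodies in batches so nobody reads thousands by hand."""
    with open(path, newline="", encoding="utf-8") as f:
        bodies = [row["body"] for row in csv.DictReader(f)]

    summaries = []
    for start in range(0, len(bodies), batch_size):
        batch = "\n---\n".join(bodies[start:start + batch_size])
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {
                    "role": "system",
                    "content": "Summarize the recurring themes in these "
                               "support tickets. Mention only features that "
                               "appear in the tickets themselves.",
                },
                {"role": "user", "content": batch},
            ],
        )
        summaries.append(response.choices[0].message.content)
    return summaries
```

The review step from the webinar happens on top of this: a human reads each summary and checks it against the queue, and a summary that mentions nonexistent features is the signal that something upstream is wrong.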

AI is useful for boring scalability tasks. You can't easily put it in a presentation and wow everyone. It does the tedious work that nobody wants to do.

The idea of AI as a full replacement is tempting for stakeholders who want to press one button and have support magically handled. That's the wrong mental model, though.

Support has an 80/20 effect. You can automate most requests, but you still spend most of your time on what's left because that's where the edge cases live. That's where emotional issues come up, and that's where product expertise is required.

AI helps with volume. Complexity stays the same.

The problem with applying human KPIs to AI is that AI can close a bunch of tickets very fast, and the metrics will look great. But the quality of those interactions matters. First replies can be overly polite but empty. They can be super confident and lead nowhere. Players end up feeling processed.

Players don't mind interacting with AI. What they mind is feeling like no one read their message, or getting a solution that feels generic. Who wouldn't?

The approach that works is automating routine tasks while protecting relationships. AI is great for drafting, translating, rephrasing, tagging, and summarizing. Humans handle empathy, edge cases, emotions, and understanding product context.
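As a rough illustration of that split, here's a toy routing rule. The tags and keywords are invented for the example and are not a real support taxonomy; the point is only that routine categories go to automation while anything emotional or unfamiliar stays with people.

```python
# A toy illustration of the human/AI split. Tags and keywords are invented
# for the example, not a real support taxonomy.
ROUTINE_TAGS = {"password_reset", "purchase_restore", "how_to"}
ESCALATION_WORDS = {"refund", "angry", "scam", "lost everything"}

def route_ticket(tag: str, body: str) -> str:
    text = body.lower()
    if any(word in text for word in ESCALATION_WORDS):
        return "human"        # emotions and edge cases stay with people
    if tag in ROUTINE_TAGS:
        return "automation"   # drafting, tagging, routine replies
    return "human_review"     # unknown territory: let a person decide

print(route_ticket("password_reset", "Can't log in after the update"))
# -> automation
```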

Players should know when they're dealing with automation. When they do, they follow the process and either get help or reach a human agent eventually.

The real trouble happens when players think they're getting a person and feel betrayed when it turns out to be ChatGPT. That loss of trust is harder to recover from than just being upfront about automation from the start.

The feedback translation problem

Product teams ignore untranslated feedback.

There's a story about a player who messaged an executive on LinkedIn asking to change a specific functionality. The message was so well written that it worked: they changed the entire functionality of the game because of that one player.

The uncomfortable part was that support had raised this issue before; they had the signal, just didn't deliver it in a decision-ready way.

The player explained exactly what was happening, why it messed up the playing experience, and suggested a better way to do it. Support teams need to do the same thing instead of passing raw data.

The problem is translation between departments. Product teams need clear impact, clear reasoning, and a clear ask or question. When you say people hate ads and show a number of complaints, that doesn't mean much. When you say people hate ads because the ad starts at the end of a level and feels like punishment for completing it, then product can work with that information.

AI can help surface this kind of decision-ready data or completely mess it up. AI will happily tell you that complaints about ads went up 20% in the new release. For a business with an ad-based model, that information is useless without context.

There's a recurring nightmare in CS where you're in a meeting and product finally asks for feedback:

  1. You say players don't like a feature.
  2. They ask how many players.
  3. You say about 20%.
  4. They say show me.
  5. You frantically search keywords, trying to pull up numbers, can't get them in time, and everyone is disappointed.

AI summaries help with this by letting you quickly go through recent tickets and present something quantifiable. It's an estimate, but based on experience, you can say this tracks and this is what players are talking about right now.
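A sketch of what "something quantifiable" can mean in that meeting: count tagged tickets and turn the counts into rough percentages. The ticket structure and tag names here are made up for illustration.

```python
# A back-of-the-envelope sketch for the "show me" moment: count how many
# recent tickets carry a theme tag and express it as a rough percentage.
# Ticket structure and tag names are made up for illustration.
from collections import Counter

tickets = [
    {"id": 1, "tags": ["ads", "end_of_level"]},
    {"id": 2, "tags": ["crash"]},
    {"id": 3, "tags": ["ads"]},
    # ...in practice this comes from your helpdesk export
]

tag_counts = Counter(tag for ticket in tickets for tag in ticket["tags"])
total = len(tickets)

for tag, count in tag_counts.most_common(5):
    print(f"{tag}: {count}/{total} tickets (~{count / total:.0%})")
```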

Having systems in place before those meetings happen is key. Scrambling to justify feedback in the moment doesn't work.

Scaling support without making everyone miserable

When teams start scaling, blind spots become exponentially worse. It’s a fact!

With manual support, if an agent makes a mistake, you'll find it during regular quality checks. You can talk to the agent, rectify the mistake, and train better. With automation, you need a player to tell you something went wrong, and that player needs to be able to reach you. If you segment users so that only certain payers, languages, or regions can contact you directly, you might not find out about an issue for 12 hours, or even a week.

Automation can't just change instantly either. Things are connected to other things; you need to close off one area, open another, redirect traffic, and do all of that very quickly if something is burning down.

The best approach is to treat automation like a project. You need to A/B test it. Working with a couple of support managers first helps: pick skeptical, detail-oriented people and run limited automation on low-risk projects. Collect feedback, iterate, and only scale after everything looks good.
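One way that staged rollout can look in code, assuming you can bucket players by a stable hash and control routing per ticket category. The percentages, category names, and kill switch are illustrative assumptions, not a prescription from the panel.

```python
# A sketch of a staged rollout with a kill switch. Rollout percentages and
# category names are placeholders.
import hashlib

AUTOMATION_ROLLOUT = {
    "password_reset": 0.10,  # start small, on a low-risk category
    "billing": 0.0,          # high-risk: humans only for now
}
KILL_SWITCH = False          # flip to True to route everything to humans

def use_automation(player_id: str, category: str) -> bool:
    if KILL_SWITCH:
        return False
    rollout = AUTOMATION_ROLLOUT.get(category, 0.0)
    # Stable bucketing: the same player always lands in the same group,
    # which is what makes before/after comparison meaningful.
    bucket = int(hashlib.sha256(player_id.encode()).hexdigest(), 16) % 100
    return bucket < rollout * 100
```

The kill switch matters because, as noted above, automation can't just change instantly; a single flag that routes everything back to humans is cheap insurance when something is burning down.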

The worst thing for a team is putting double work on them. When they need to fix bad replies, clean up generic tags, or undo something that was supposed to help them, it feels like babysitting a robot. Teams get frustrated when you just present automation to them as a finished thing. When they participate in shaping it, they don't feel like they're being punished by something new they have to clean up after.

When automations first get implemented, a lot of players are vocally against them. They share screenshots saying this is awful, they replaced all the people with bots. What can happen is that other players in the community point out that people are using the bots wrong: the bot asked a question, and they answered no, telling it they didn't need help. So the community essentially pressures complainers into accepting it.

Players complain about bots because they see them as extra steps between them and the answer they want. Testing the bots yourself helps you notice things like how annoying it is to be asked for your name and email. If it's annoying you, it's annoying players.

You shouldn't work with one automation for lots of different stuff either. It becomes too generic. Teams are happier when you work with every category of requests separately.

What separates good support from average

What separates good from average is what always has: whether you take customer support seriously as its own thing or see it as a cost of satisfying legal obligations.

Some companies think: we have support because we have to have support. They send players through AI, and maybe those players stick around, maybe they don't.

Automation will become the new norm in the next few years. Everyone will have it, so it stops being an advantage. The real advantage will be finding the balance between making support faster and more effective while keeping it human. If you adopt AI, you need to invest in quality assurance for your automation. Treat it like a feature release: test it, review it, monitor it.
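At its simplest, quality assurance for automation can be sampling: pull a fixed share of each day's automated replies into a human review queue, the same way you'd spot-check agent work. The sample rate and reply structure below are assumptions for the sketch.

```python
# A minimal QA sampling sketch: review ~5% of each day's automated replies,
# the same way you'd spot-check agent work. Rate and structure are assumptions.
import random

def sample_for_review(automated_replies: list[dict], rate: float = 0.05) -> list[dict]:
    """Pick a random slice of automated replies for human quality review."""
    if not automated_replies:
        return []
    k = max(1, int(len(automated_replies) * rate))
    return random.sample(automated_replies, k)
```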

Pick someone who isn't just into pattern recognition and optimization. Someone who genuinely loves people and enjoys helping them. That person is going to shape how users feel when they interact with your support.

The brands that get customer experience right create customers for life. People overlook this when they think they can easily replace functions with automation.

AI is becoming table stakes. The companies that win will be the ones who use it to enhance human connection instead of replacing it.

FAQs

Where does AI help in player support?

AI works best for summarizing large volumes of tickets and identifying patterns across thousands of player interactions. Teams use it for drafting replies, translating messages, tagging tickets, and analyzing sentiment at scale: the boring, repetitive stuff that nobody wants to spend hours doing manually. AI struggles with edge cases, emotional situations, and requests that need deep product knowledge. Those still need humans.

How do you get product teams to listen to support feedback?

Product teams need clear impact statements and specific asks. Saying players hate a feature and citing a complaint count doesn't help much. Explaining that players hate ads because they trigger at the end of levels and feel like punishment gives product something actionable. AI can help surface patterns, but you need humans to translate raw data into decision-ready information. Having systems to pull this data quickly before meetings helps too.

What mistakes do teams make when implementing automation?

The biggest mistake is rolling out automation to everyone at once without testing. Start with a small group of skeptical, detail-oriented people on low-risk tickets. Collect feedback and iterate before scaling. Another problem is using one generic automation for every type of request; that makes replies too vague. Working with separate automations for each category of request works better. Teams also forget to quality-check automated responses the same way they would check agent work.
