Dmitriy Kasperovich

How We Saved 600 Hours of Support Work with AI in a Ticketing System

This isn’t just another “we taught AI to answer reviews and now everything is magic ✨” story. Nope. This is about building a custom ticketing system for a support team—complete with business rules, multiple LLMs (ChatGPT and Gemini), and real humans doing quality control (because let’s be honest, AI still occasionally hallucinates).

Why feedback matters

User feedback is basically free consulting. Entire industries (gaming, foodtech, SaaS, e-com, delivery—you name it) thrive or crash based on what customers type into a review box: ratings, rants, bug reports, thank-you notes, and sometimes, full-blown essays. Each one is a signal telling you whether your product is a rocket ship 🚀 or a sinking boat 🛶.

And those signals? Yeah, they’re multiplying faster than npm packages. According to HubSpot’s Annual State of Customer Service Report, 75% of support pros saw the biggest tidal wave of inquiries ever in 2025.

When request volume is chill, no problem. But drop a flagship product, or let a bug slip into your checkout flow, and boom—you’ll get more reviews than your support team has caffeine to process.

And here’s the kicker: every customer, happy or mad, expects a reply. Silence costs more than an AWS bill left running over the weekend:

  • 75% of customers will pay more for excellent service (Zendesk Benchmark Data).
  • 43% of customers say one bad support experience = never buying again (Salesforce).
  • Negative vibes put $3.7 trillion of global sales at risk every year (Qualtrics).

So yeah, systematic feedback management isn’t a “nice-to-have,” it’s survival mode.

Ticketing systems and help desks exist to bring order to chaos. They take in requests, turn them into structured tickets, assign them to humans, track status, and spit out analytics.

But businesses still wonder:

  • “Can we find a system that hoovers up reviews from app stores, emails, smoke signals, whatever, and turns them into tickets instantly?”
  • “Do we really need two different platforms for analytics and ticketing, or can someone please mash them together?”
  • “Why the heck do we always have to choose between automation and analytics when obviously we need both?”

For our client, Malpa Games, we built a system that did exactly that: review aggregation + analytics in one place. First release? Already useful. Then came the fun question:

Could AI cut repetitive support work while boosting team productivity?

Spoiler: yup. Big time.

AI and Ticketing Systems: From Review Processing to Product Analytics

Customer support effectiveness is usually judged by three key CX metrics (2025 data drop incoming):

  • CSAT (Customer Satisfaction Score): top priority for 31% of businesses.
  • Customer retention (aka the real money metric): also 31%.
  • Response time (nobody likes waiting): 29% (HubSpot).

In gaming, reviews are basically endless:

  • Some are short “⭐️⭐️⭐️⭐️⭐️ great game lol” posts.
  • Others are multi-paragraph rage dumps.
  • Many are direct “support me please” questions.

Handling this by hand? Absolute time sink. Businesses really care about two things here: speed and consistency. If you’re slow, CSAT tanks. If your answers are inconsistent, retention tanks. Either way, you lose.

Automation is now mandatory. Even players agree: 46% say they’re cool with AI support if it just solves the problem (Salesforce).

So, Malpa Games plugged AI into their shiny ticketing system to solve four real-world challenges:

  1. Detect sentiment (negative/positive/neutral). Prioritize angry users first. Close happy ones fast.
  2. Identify root cause of bad feedback (ads, bugs, crashes). Product teams love this—it’s basically free QA.
  3. Auto-tag reviews for clean analytics. Goodbye manual labeling.
  4. Draft replies (or pull from templates) for standard cases. Humans only step in for tricky tickets that need empathy.

To test this, we wired in ChatGPT and Gemini side by side. The goal: compare classification accuracy and “reply style” in the wild. One model will eventually win the Hunger Games, but for now we’re letting them battle it out.

The full Malpa Games case study has more juicy details.

How Ticket Automation with AI Works

We started small (but impactful). Right now, AI closes:

  • Empty tickets (just a star rating + metadata).
  • Short positive reviews with zero questions.

The model scans, checks rules, and picks a reply from our directory of auto-templates.

Bonus: we store several templates per ticket group and pick randomly—because nothing screams “I’m a bot” like copy-pasting the exact same thank-you note 10,000 times.

Example flow:

  • User leaves: ⭐️⭐️⭐️⭐️⭐️ Everything’s great!
  • AI detects: English, <50 chars, rating=5, no question.
  • System replies (in user’s language), picking one of these:

    • “Thanks for the review! Glad you’re enjoying the game 😊”
    • “Thanks for the high rating! Enjoy!”
    • “Thanks for playing – it really motivates us!”

Ticket closed. No humans harmed in the process. Player gets a quick, friendly reply.

The flow:

  1. Ticket enters system. If it’s just stars, auto-reply + close.
  2. If positive rating + text → AI finds language + context → both models suggest replies.
  3. Support manager can regen/translate → copy → send → close.

Automation Rules

We didn’t let AI run wild. Instead, we gave it strict parenting rules via the auto-reply directory.

Rules control everything:

  • By language → only reply in supported languages.
  • By text length → short = auto, long = human.
  • By rating → positive = auto, negative = human.

You can also assign project-specific rules, because one-size-fits-all is for socks, not ticketing.
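A rule table in that spirit might look like this — the values and the project override are made up for illustration, but the shape (defaults plus per-project overrides) matches what the rules above describe:

```python
# Hypothetical auto-reply directory: defaults, plus per-project overrides.
DEFAULT_RULES = {
    "languages": {"en", "de", "fr"},  # only reply in supported languages
    "max_length": 50,                 # short = auto, long = human
    "min_rating": 4,                  # positive = auto, negative = human
}
PROJECT_RULES = {
    "match3-game": {**DEFAULT_RULES, "max_length": 80},  # project-specific
}

def route(project: str, language: str, text: str, rating: int) -> str:
    """Decide whether a ticket is auto-closed or sent to a human."""
    rules = PROJECT_RULES.get(project, DEFAULT_RULES)
    if (language in rules["languages"]
            and len(text) <= rules["max_length"]
            and rating >= rules["min_rating"]):
        return "auto"
    return "human"

print(route("match3-game", "en", "Love it!", 5))  # auto
```

The point of the dict-merge override is that one project can loosen a single knob (here, text length) without forking the whole rule set.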

Integration Results

Year one stats:

  • 10,000 empty tickets closed.
  • 8,000 short reviews closed.
  • Total = 18,000 tickets out of 47,000 (~38% of the flow).

Each simple ticket normally eats 2 minutes of a human’s life (open → read → tag → reply → send). AI did all that 18,000 times.

That’s 600 hours saved—basically 4 months of one full-time support manager. Not bad for a sidekick that never takes coffee breaks. ☕
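The arithmetic behind that headline number is simple enough to check:

```python
# Back-of-the-envelope check of the savings above.
closed = 10_000 + 8_000        # empty tickets + short reviews
minutes_each = 2               # open -> read -> tag -> reply -> send
hours_saved = closed * minutes_each / 60
share_of_flow = closed / 47_000  # share of the 47,000-ticket flow

print(hours_saved)  # 600.0
```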

Three Lessons from Automating Support with AI

Malpa Games showed us:

  1. Even simple automation matters. Closing empty/short reviews shaved hundreds of hours and stabilized response times.
  2. Classification = product insights. Sentiment + root cause tagging helps product prioritize bugs, not just support.
  3. AI augments, doesn’t replace. Draft replies cut response time, freeing humans to handle complex cases with empathy.

What’s next? Smarter stuff: forecasting workload, auto-routing tickets, dynamically generating answers from KBs or user data. That’s when AI stops being a helper bot and becomes a strategic weapon for scaling support.
