DEV Community

XIAMI4XIA8478239

TestSprite User Research: Real User Discussions About AI Testing Tools

Notes:

  • In this report, “verified accessible” means I confirmed that the discussion thread is publicly accessible at a live public URL.
  • This is not a set of “identity-verified users.” It is a set of real, publicly accessible, verifiable discussions.
  • To meet the requirement, I iterated in three passes:
    1. Broad capture: gathered threads related to AI testing, QA automation, low-code, self-healing, visual testing, and vendor names
    2. Relevance filter: removed obvious marketing posts, dead links, duplicates, and weakly related results
    3. Final pack: kept the 30 most representative real discussions, then added a short quote, sentiment, and insight for each

========================

1. Overall Conclusions

Across these 30 threads, 6 recurring signals stand out:

  1. Users are interested in AI testing, but highly skeptical.
    Common phrases include: hype, overhyped, glorified wrapper, still manual, false positives.

  2. The most credible use cases are concentrated in:

    • natural-language / low-code test creation
    • visual regression
    • locator healing / self-healing
    • test data preparation
    • assisted generation, not fully autonomous end-to-end automation
  3. The most common complaints are:

    • still requires significant manual intervention
    • flaky / brittle problems are not truly solved
    • expensive pricing
    • strong vendor lock-in
    • sales demos look much better than real usage
  4. Enterprise buyers most often get stuck on:

    • unclear ROI
    • opaque pricing
    • security / process boundaries
    • migration cost from the current stack
  5. The traditional alternatives users keep comparing against are still:

    • Playwright
    • Selenium
    • Cypress
    • Robot Framework
  6. For many teams, AI testing is a productivity layer, not a magical replacement for engineering discipline.
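
The “self-healing” claim that recurs through these threads reduces, mechanically, to selector fallback: instead of pinning a test to one brittle locator, the tool keeps a ranked list of candidates and tries the next one when the preferred selector stops matching. A minimal sketch of that idea, using a toy dict as a stand-in for a real page (the helper name and page model here are illustrative, not any vendor’s API):

```python
def find_element(page, candidates):
    """Return (selector_used, element) for the first candidate that matches.

    page: dict mapping selector -> element (toy stand-in for a DOM query)
    candidates: selectors ordered from most to least preferred
    """
    for selector in candidates:
        element = page.get(selector)
        if element is not None:
            if selector != candidates[0]:
                # In a real tool this "heal" would be logged for human review,
                # so the fallback selector can be promoted in the test source.
                print(f"healed: {candidates[0]!r} -> {selector!r}")
            return selector, element
    raise LookupError(f"no candidate matched: {candidates}")

# A UI change removed the old id, but the data-testid fallback still matches.
page = {"[data-testid=submit]": "<button>", "text=Submit": "<button>"}
used, element = find_element(page, ["#submit-btn", "[data-testid=submit]"])
```

This also makes the skepticism in the threads concrete: the fallback only “heals” anything if a good candidate list exists in the first place, which is exactly the kind of engineering discipline users say the marketing glosses over.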

========================

2. 30 Real Discussions (Final Set)

[01]
Title: What AI QA testing tools/services are you actually using in 2025? Share your experiences.
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1l3ny32/what_ai_qa_testing_toolsservices_are_you_actually/
Tools Mentioned: Testim, mabl, Applitools, QA Wolf, Autify, Virtuoso, Eggplant, Functionize, Katalon, Percy, Chromatic
Short Quote: "most were overhyped garbage, but a few were decent."
Sentiment: Mixed -> skeptical
Insight: A classic “try many, keep few” thread. It shows that buyers are willing to trial multiple tools, but very few survive real evaluation.

[02]
Title: Has anyone here actually used AI testing tools?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1hv4wkh/has_anyone_here_actually_used_ai_testing_tools/
Tools Mentioned: Testim, Applitools
Short Quote: "They still need manual intervention sometimes."
Sentiment: Mixed
Insight: Users do recognize value in self-healing and visual detection, but false positives and manual cleanup remain real costs.

[03]
Title: Has anyone actually gotten AI test automation to work or is it all hype?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1p1aptg/has_anyone_actually_gotten_ai_test_automation_to/
Tools Mentioned: Momentic, Testim, Kplr AI (mentioned in comments)
Short Quote: "Self healing? Still hype, zero real value in practice."
Sentiment: Mostly negative
Insight: The strongest signal is not “AI is useless.” It is “do not expect AI to magically solve engineering problems.” Users are more open to AI for narrow workflow slices.

[04]
Title: Is anyone actually successfully using the so-called self-healing AI-assisted testing tools for a first release?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1rf0b1c/is_anyone_actually_successfully_using_the/
Tools Mentioned: Self-healing AI tools (general discussion)
Short Quote: "so-called self-healing AI-assisted testing tools"
Sentiment: Skeptical
Insight: Self-healing is one of the strongest click-driving value props, and also one of the easiest for users to challenge as “marketing ahead of reality.”

[05]
Title: Your thoughts on QA-AI testing tools?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1kqdk78/your_thoughts_on_qaai_testing_tools/
Tools Mentioned: QA Wolf, TOSCA, Rainforest
Short Quote: "QA wolf is a glorified playwright wrapper."
Sentiment: Negative
Insight: Users often reduce products to blunt labels. “Wrapper” is a high-frequency negative framing, which means differentiation must be extremely clear.

[06]
Title: Anyone actually using AI for test automation? What works?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1mna861/anyone_actually_using_ai_for_test_automation_what/
Tools Mentioned: AI testing tools (general)
Short Quote: "it's now in my KPIs."
Sentiment: Concerned / practical
Insight: Many adoption motions are top-down, not grassroots. These buyers care more about where to start than about brand storytelling.

[07]
Title: What AI tools for test automation are you actually using in 2026? (Beyond ChatGPT)
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1qciok7/what_ai_tools_for_test_automation_are_you/
Tools Mentioned: ChatGPT, Playwright, Postman, others in comments
Short Quote: "maintenance is still manual and flakiness persists."
Sentiment: Mixed / pragmatic
Insight: Even after LLMs are added to the workflow, maintenance and flakiness remain primary pain points.

[08]
Title: Anyone used any good AI tools to help with test automation?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/mrh8rb/anyone_used_any_good_ai_tools_to_help_with_test/
Tools Mentioned: AI automation testing tools (general)
Short Quote: "haven't been super impressed by what I've seen."
Sentiment: Negative leaning
Insight: Even older threads carry the same skepticism. The “AI testing overpromises” problem is not new.

[09]
Title: Vibecheck: Are people using AI code editors for Playwright test automation
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1l4o396/vibecheck_are_people_using_ai_code_editors_for/
Tools Mentioned: Copilot, Trae, Windsurf, Cursor, Playwright
Short Quote: "seen success/failure with it"
Sentiment: Exploratory
Insight: Users are no longer only evaluating dedicated AI testing platforms. They are also using general AI coding tools inside testing workflows.

[10]
Title: Have You Used AI-Generated Test Cases? How Was Your Experience?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1jil8pw/have_you_used_aigenerated_test_cases_how_was_your/
Tools Mentioned: Testsigma, TestComplete, Tosca
Short Quote: "it really cuts down my time"
Sentiment: Mixed positive
Insight: Test case generation is one of the few categories that regularly gets positive feedback because it naturally fits an “AI assists, humans review” model.

[11]
Title: What Ai Testing Tools do you use?
Subreddit: r/softwaretesting
URL: https://www.reddit.com/r/softwaretesting/comments/1m00yqo/what_ai_testing_tools_do_you_use/
Tools Mentioned: AI testing tools (general)
Short Quote: "pushing for us to use more Ai tools"
Sentiment: Pressure-driven
Insight: Adoption is often driven by management pressure rather than tester demand.

[12]
Title: How Are You Using AI in Software Testing and Automation?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1kifr7a/how_are_you_using_ai_in_software_testing_and/
Tools Mentioned: AI testing tools (general)
Short Quote: "it's made a big difference."
Sentiment: Positive
Insight: Positive threads are usually about targeted productivity gains, not about AI fully replacing testing work.

[13]
Title: Exploring Self-Healing Playwright Automation with AI
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1o67zw9/exploring_selfhealing_playwright_automation_with/
Tools Mentioned: Playwright, self-healing AI frameworks
Short Quote: "rewriting large portions of our test scripts"
Sentiment: Pain-driven
Insight: Interest in self-healing comes from a real maintenance burden, not just from chasing trends.

[14]
Title: AI testing
Subreddit: r/softwaretesting
URL: https://www.reddit.com/r/softwaretesting/comments/1s6h7lw/ai_testing/
Tools Mentioned: AI agents, test case generation, script automation (general)
Short Quote: "just use AI agents to write test cases"
Sentiment: Curious / skeptical
Insight: Many users still have a fuzzy definition of “AI testing.” Category confusion itself is a market barrier.

[15]
Title: Ai Testing Tool Recommendations for Enterprises
Subreddit: r/softwaretesting
URL: https://www.reddit.com/r/softwaretesting/comments/1oo6y14/ai_testing_tool_recommendations_for_enterprises/
Tools Mentioned: Enterprise AI testing tools (general)
Short Quote: "possibly buy this tool and use it in day to day tasks"
Sentiment: Buyer-evaluation
Insight: Enterprise buyers focus on demo quality, rollout practicality, and day-to-day suitability, not just feature checklists.

[16]
Title: Are QA teams actually seeing real benefits from AI...
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1ni79gr/are_qa_teams_actually_seeing_real_benefits_from/
Tools Mentioned: AI tools for UAT test cases (general)
Short Quote: "Running a trial now using AI tools"
Sentiment: Cautious
Insight: Many teams are still in trial mode. The market is far from mature or default-purchase status.

[17]
Title: Low code ai automation tools
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1jj8q7n/low_code_ai_automation_tools/
Tools Mentioned: KaneAI, mabl, Quality Works AI Test Case Generator
Short Quote: "I'm pleasantly surprised."
Sentiment: Positive but guarded
Insight: Tools like KaneAI can earn positive reactions, but users still frame them as POC-stage tools rather than fully trusted replacements.

[18]
Title: Would you try an automation tool that exactly mimics user interactions on a visual level
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1jtkxem/would_you_try_an_automation_tool_that_exactly/
Tools Mentioned: Visual-level automation concepts
Short Quote: "rather than traditional dom related element identification"
Sentiment: Exploratory
Insight: This points to a real demand: users want less maintenance, not just fancier locators.

[19]
Title: How are folks handling end-to-end testing these days?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1k0sp37/how_are_folks_handling_endtoend_testing_these_days/
Tools Mentioned: E2E testing stacks (general)
Short Quote: "great in theory, flaky in practice"
Sentiment: Negative / realistic
Insight: The long history of flaky end-to-end testing is exactly the problem AI testing vendors keep promising to solve.

[20]
Title: Most promising no-code test automation solution?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1cba1a0/most_promising_nocode_test_automation_solution/
Tools Mentioned: No-code / low-code QA tools
Short Quote: "don't cost an arm and a leg"
Sentiment: Cost-sensitive
Insight: For smaller teams, price and ease of onboarding often matter more than the “AI” label.

[21]
Title: Test Case Management in 2025 Still Feels Broken AF
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1k1na1q/test_case_management_in_2025_still_feels_broken_af/
Tools Mentioned: Kane AI, BrowserStack, LambdaTest test management
Short Quote: "I am not very convinced on AI reliability yet"
Sentiment: Skeptical
Insight: Even users who believe AI will improve often still see it as an unreliable add-on today.

[22]
Title: Selenium tests breaking constantly after every UI change...
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1oklz5j/selenium_tests_breaking_constantly_after_every_ui/
Tools Mentioned: Momentic, Cypress, Playwright, Selenium
Short Quote: "maintenance time has been pretty dramatic."
Sentiment: Positive for Momentic
Insight: When the pain point is concrete, such as UI changes constantly breaking scripts, tools that reduce maintenance time can earn real positive sentiment.

[23]
Title: Are no-code tools for automation good nowdays?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1etjba5/are_nocode_tools_for_automation_good_nowdays/
Tools Mentioned: Testim, TestRigor
Short Quote: "most of them are just fancy and don’t work well."
Sentiment: Mixed / skeptical
Insight: These threads highlight the gap between demo quality and real deployment. Users explicitly warn others not to judge based only on the sales pitch.

[24]
Title: Suggest any AI tools for testing
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1cmallx/suggest_any_ai_tools_for_testing/
Tools Mentioned: Kane AI
Short Quote: "it's been quite helpful."
Sentiment: Positive
Insight: Positive feedback usually clusters around fast setup, easier starting points, and quick generation of baseline work.

[25]
Title: Has anybody heard of momentic.ai?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1nbj9fm/has_anybody_heard_of_momenticai/
Tools Mentioned: Momentic
Short Quote: "We haven't found any reviews. No pricing either."
Sentiment: Cautious / buyer-friction
Insight: For newer vendors, lack of social proof and lack of pricing transparency are adoption barriers on their own.

[26]
Title: Can people give me the reasons why most QA or dev's are using Playwright/Selenium/Cypress over codeless/low code tools like Testim/Mabl/TestSigma etc?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/12cwzm8/can_people_give_me_the_reasons_why_most_qa_or/
Tools Mentioned: Testim, mabl, Testsigma, Playwright, Selenium, Cypress
Short Quote: "There seems to be a lot more users who swear by playwright"
Sentiment: Comparative / skeptical
Insight: A critical comparison signal: AI / low-code tools are competing not only with similar vendors but with mature open-source stacks.

[27]
Title: Anyone using Testsigma for test automation? How's it?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1luog7p/anyone_using_testsigma_for_test_automation_hows_it/
Tools Mentioned: Testsigma
Short Quote: "The no-code approach is actually useful."
Sentiment: Positive
Insight: Testsigma’s value proposition lands best in teams that want fast coverage without heavy coding depth.

[28]
Title: any alternative to QA wolf?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1ojti0q/any_alternative_to_qa_wolf/
Tools Mentioned: QA Wolf
Short Quote: "Hire your own qa engineers."
Sentiment: Negative
Insight: One of the core objections to QA Wolf is category confusion: are you buying a tool, or outsourcing a team?

[29]
Title: Is Testsigma worth?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/1fv5e1c/is_testsigma_worth/
Tools Mentioned: Testsigma
Short Quote: "Faster creation of testcase due to record feature"
Sentiment: Evaluation-mode
Insight: These threads often appear just before a manager pushes procurement. The real question is whether the tool justifies replacing an existing stack.

[30]
Title: Any Visual Testing tool you can recommend?
Subreddit: r/QualityAssurance
URL: https://www.reddit.com/r/QualityAssurance/comments/hed1q1/any_visual_testing_tool_you_can_recommend/
Tools Mentioned: Applitools, Screener.io
Short Quote: "Applitools is the industry leader, but is very expensive."
Sentiment: Mixed
Insight: Visual testing shows one of the clearest patterns in the category: strong product recognition, but meaningful price resistance.

========================

3. Additional High-Value Threads (Optional Reference)

[A]
Title: Why does dev get all the cool AI tools? What about QA?
URL: https://www.reddit.com/r/QualityAssurance/comments/1m9o9gg/why_does_dev_get_all_the_cool_ai_tools_what_about/
Signal: Awareness of AI tooling is spreading, and QA practitioners are actively looking for their own workflow layer.

[B]
Title: Does your organisation write visual tests in functional tests?
URL: https://www.reddit.com/r/QualityAssurance/comments/1guvll3/does_your_organisation_write_visual_tests_in/
Signal: Applitools gets recognition, but some users explicitly complain about slow API calls hurting test performance.

[C]
Title: What is your opinion on low code automation testing tools?
URL: https://www.reddit.com/r/QualityAssurance/comments/1fjlxia/what_is_your_opinion_on_low_code_automation/
Signal: Users compare BrowserStack / Tricentis / low-code products directly against open-source alternatives.

[D]
Title: Which QA tools are actually useful day-to-day?
URL: https://www.reddit.com/r/QualityAssurance/comments/1ok0rbm/which_qa_tools_are_actually_useful_daytoday/
Signal: Users care about what is truly usable every day, not just what sounds compelling in marketing.

[E]
Title: Anyone actually paying for QA Wolf? exploring open...
URL: https://www.reddit.com/r/QualityAssurance/comments/1sga8ad/anyone_actually_paying_for_qa_wolf_exploring_open/
Signal: Clear pricing complaints appear, including mentions of six-figure annual cost.

========================

4. Ten Conclusions TestSprite Can Use Directly

  1. Users hate overpromising.
    Especially around claims like self-healing, autonomous testing, or “AI handles everything.”

  2. The most convincing value proposition is not “more intelligent.”
    It is “less maintenance.”

  3. Users naturally compare AI testing tools against Playwright / Selenium / Cypress.
    If a vendor cannot explain why a team should not just stay on the open-source stack, the sale gets harder.

  4. Pricing and pricing transparency are major blockers.
    Applitools and QA Wolf come up repeatedly in “is it worth it?” discussions.

  5. New tools suffer from a social proof gap.
    No reviews, no pricing, and no case studies directly hurt trial willingness.

  6. The most accepted AI use cases are:

    • test case generation
    • test data prep
    • visual testing
    • locator healing / maintenance assist
  7. The least trusted claims are:

    • fully autonomous generation of stable end-to-end tests
    • zero manual maintenance
    • true end-to-end autonomy
  8. The words “manual intervention,” “false positives,” and “flaky” recur constantly.
    Messaging and product design need to confront those head-on.

  9. Many buying motions are driven by KPIs or management pressure.
    Buyer enablement content should answer: where to start, how to roll it out, and how to quantify ROI.

  10. If TestSprite wants to enter this market effectively, the better angle is not “we also have AI.”
    It is “which maintenance cost do we concretely reduce, and where are we easier to live with than the current alternatives?”

========================

5. Best Keywords for Follow-Up Research

  • self-healing
  • false positives
  • flaky
  • maintenance time
  • overhyped
  • wrapper
  • no-code
  • worth it
  • expensive
  • pricing
  • manual intervention
  • trial / pilot / POC
