DEV Community

Grigoriy Melnikov
Russia’s Human-Like Bots Are Too Advanced - And Harder to Detect Than You Think

I’ve spent 10 years building bots that bypass anti-fraud systems. Now I fight them by building anti-bot detection systems - and most defenses don’t work.

In this article, I’ll break down how human-like bot traffic actually works - and show a simple way to make bots click on hidden links.

In Russia, bot traffic is highly industrialized

Almost every website receives large volumes of “direct” and “referral” visits that are not real users. These visits distort analytics and can negatively impact rankings in Yandex (a Russian search engine like Google).

There is a fundamental difference between bot traffic patterns in Russia and global markets:

  • In Russia, bots are primarily used to manipulate behavioral signals - pushing sites higher in search results without paying for ads.
  • In global markets, bots are optimized for revenue - ad fraud, lead fraud, affiliate abuse. Same tools. Different goals.

Why Russian-style bots matter

Russian bot operators are highly focused on mimicking real user behavior.
And you don’t need to be a developer to build these bots.

In Russia, tools like Browser Automation Studio or ZennoPoster allow users to create bots visually - like building a flowchart:

  • move the mouse to a specific element
  • click elements
  • fill out forms

An example of a bot created visually in ZennoPoster, using a block diagram - no coding required.

This accessibility is one of the main reasons human-like bot traffic has scaled so aggressively.
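One reason these visual bots pass for humans is how they move the mouse: a curved, jittered path instead of an instant jump to the target. As a rough illustration of the idea (the function and parameters below are my own, not taken from BAS or ZennoPoster), a human-like path can be generated with a randomized Bezier curve:

```python
import random

def human_mouse_path(start, end, steps=30):
    """Generate a curved, slightly jittered mouse path from start to end,
    imitating the arc a human hand traces instead of a straight jump."""
    (x0, y0), (x1, y1) = start, end
    # A random control point pulls the path into an arc.
    cx = (x0 + x1) / 2 + random.uniform(-80, 80)
    cy = (y0 + y1) / 2 + random.uniform(-80, 80)
    path = []
    for i in range(steps + 1):
        t = i / steps
        # Quadratic Bezier interpolation plus per-point jitter.
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1
        path.append((x + random.uniform(-1, 1), y + random.uniform(-1, 1)))
    return path

points = human_mouse_path((0, 0), (300, 120))
```

A bot replays these points with small delays between them, which defeats naive "straight-line cursor" checks.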

There is no visualization of how captchas are solved

You can’t see what actually happens on the captcha page during solving.

Most solutions, including Cloudflare, don’t provide visibility into:

  • how the captcha was interacted with
  • what actions were performed on the captcha page
  • whether the behavior looked human

This creates a major blind spot. Here’s where it gets interesting.

Bots clicking hidden links

I’ve built a lot of bots - and analyzed even more. One thing becomes obvious: bot behavior on a website is never perfectly clean.

If you:

  • add invisible (hidden) links to a captcha page
  • record user sessions on that page

you start seeing very clear patterns:

  • bots scrolling over the captcha
  • bots clicking hidden links
  • bots interacting with elements no real user would ever see

This technique is especially effective when auditing paid traffic. It does not matter how the visit was classified upstream: if a visitor clicks a hidden link or scrolls over a captcha, that is a strong signal of bot behavior.
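The honeypot check above can be sketched in a few lines: render links that are invisible to humans, record session events, and flag any session that interacts with them. This is a minimal illustration, assuming a simple event format of my own invention (element IDs and field names are placeholders, not from any particular analytics tool):

```python
# Decoy elements real users can never see: CSS-hidden links on the captcha page.
HIDDEN_ELEMENT_IDS = {"nav-link-1", "nav-link-2", "footer-link-hidden"}

def is_likely_bot(session_events):
    """Flag a session as bot-like if it clicks a hidden element
    or scrolls over the captcha widget itself."""
    for event in session_events:
        if event["type"] == "click" and event["target"] in HIDDEN_ELEMENT_IDS:
            return True
        if event["type"] == "scroll" and event.get("over") == "captcha-widget":
            return True
    return False

# A scripted bot clicks the decoy "menu" link; a human clicks the checkbox.
bot_session = [
    {"type": "move", "target": "captcha-widget"},
    {"type": "click", "target": "nav-link-1"},
]
human_session = [
    {"type": "move", "target": "captcha-widget"},
    {"type": "click", "target": "captcha-checkbox"},
]
```

The key design point is that the check is one-sided: a hidden-link click proves nothing about humans who never trigger it, but it is near-conclusive for the sessions that do.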

Fixed selectors are the real problem with captchas

Most captchas have fixed selectors and a predictable HTML structure.
For tools like BAS or Puppeteer, clicking the “I am not a robot” checkbox is trivial.
But if the captcha page is generated with:

  • dynamic HTML paths
  • randomized CSS classes

then solving it becomes significantly harder.
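Per-load randomization can be sketched like this - a toy illustration of the idea, not how any particular captcha vendor implements it. Stable logical names exist only server-side; the class names shipped to the browser are throwaway:

```python
import secrets

def randomized_classes(logical_names):
    """Map stable logical element names to throwaway CSS class names,
    regenerated on every page load so selectors can't be hard-coded."""
    return {name: "c" + secrets.token_hex(6) for name in logical_names}

load1 = randomized_classes(["checkbox", "frame", "submit"])
load2 = randomized_classes(["checkbox", "frame", "submit"])
# A bot that recorded load1's selectors finds none of them on load2.
```

A bot author is then forced to target elements by position or by visual recognition, both of which are slower and far more error-prone than a fixed selector.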

Cloudflare's captcha has static paths, so it is easy for a human-like bot to click it.

A dynamic captcha is much harder to solve: no text labels, no fixed HTML paths, no fixed CSS classes, no fixed element positions.

Why L7 DDoS bots are easy to filter - and human-like bots are not

In overall traffic, L7 DDoS activity is clearly visible. High-volume attack traffic stands out - you can see it, and you can block it.

At the moment, the three most common types of L7 attacks are:

  • repeated or similar IP ranges
  • downloading the same resource in parallel (for example, fetching a website image in many threads)
  • random URL parameters

All of this can be filtered using a WAF provided by a hosting or infrastructure provider.
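These volumetric patterns are easy to catch precisely because they are loud. A toy request-log filter along the lines a WAF uses (thresholds and field names below are illustrative, not from any real product):

```python
from collections import Counter

def l7_suspects(requests, ip_threshold=100, path_threshold=50):
    """Flag IPs that hammer the server overall, or fetch the same
    resource in parallel - the loud patterns a WAF filters easily."""
    by_ip = Counter(r["ip"] for r in requests)
    by_ip_path = Counter((r["ip"], r["path"]) for r in requests)
    flagged = {ip for ip, n in by_ip.items() if n > ip_threshold}
    flagged |= {ip for (ip, _), n in by_ip_path.items() if n > path_threshold}
    return flagged

# Synthetic log: one attacker downloading the same image in 150 threads,
# plus 20 normal visitors making one request each.
log = [{"ip": "203.0.113.7", "path": "/img/banner.png"} for _ in range(150)]
log += [{"ip": f"198.51.100.{i}", "path": "/"} for i in range(20)]
flagged = l7_suspects(log)
```

Human-like bots never trip thresholds like these, which is exactly why they need a different detection approach.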

The goal of human-like bots is not to take a site down, but to look like real users. Because of that, you can’t detect them using simple signals like IP address, user agent, language, region, screen resolution, or similar parameters.

Detecting this type of traffic requires a completely different approach.

I built my own anti-bot system on this approach

It focuses not on user parameters, but on the software that generates the bot traffic. Each bot is created by a specific tool, and that tool produces a unique snapshot. This snapshot does not depend on browser parameters inside the session.

My snapshot is not a fingerprint. A fingerprint is a set of browser parameters; a snapshot identifies the program that generated the bot.

Tools like BAS or anti-detect browsers like MoreLogin produce their own snapshots that differ from real browsers - and that’s exactly how human-like bots can be detected.

For bot visualization, I use Yandex Metrica - it’s a free and powerful web analytics tool. It’s especially useful when analyzing traffic quality.

You can see how the bot clicks a hidden link in the session recording shown in the video attached to this article. The bot thinks it is on a normal website, but it is actually on a captcha page. It first moves the mouse over the captcha, then moves the cursor to the top of the page - where hidden links are placed to look like a navigation menu.

The bot taps over the captcha.

If you’re curious about how this detection works in practice, I’ve shared more details and examples here: https://t.me/KillBotEng
You can also test it on your own traffic:
👉 https://my.kill-bot.net/

From my experience, most analytics and bot detection systems completely miss this type of traffic.

In the next post, I'll explain how bots are used to manipulate Google search rankings.

Have you ever checked whether bots click elements on your site that real users can’t even see?
