GokuScraper悟空爬虫
Inside OpenAI's Anti-Gray-Market Registration Defenses: "Outsourced" Risk Control and Deterrence via Cost


In the past few years, OpenAI's payment gateway was essentially an ATM for the gray and black markets. From the mass opening of first-month free trials on Japan’s PayPal, to the replay of Apple App Store receipts for "one ticket, multiple charges," and the forced unlocking of trials via Frida hooking on Google Play—every single pathway had actors engaging in large-scale arbitrage. However, by mid-2026, these loopholes were entirely sealed, bringing the era of "zero-dollar purchases" on the payment side to an end.

But the demand did not disappear; the battlefield simply shifted. The main forces of the black market retreated from the payment side to the more foundational registration side: one strategy involves mass-registering free accounts to build a pool, accumulating small gains into a large one; another focuses on exploiting various trial events to shear free Plus quotas.

In response to this shift, OpenAI's risk control strategy completed a quiet paradigm shift. It is no longer obsessed with "identifying whether you are a machine or a real person," but instead focuses on increasing the cost of registration through risk control. It calculates a cold economic equation: when the money, time, and effort you spend registering an account exceed the price for which it can be sold, you will naturally give up.

It is from this precise angle that we launched a routine packet-capture audit of OpenAI's registration process. In the critical initialization request https://ab.chatgpt.com/v1/initialize, we captured a JSON configuration of over 3000 lines. This dynamic instruction set, relying on the Statsig feature management platform, constitutes the command center of OpenAI's global risk control.

Based on our deconstruction of this configuration, we identified the three pillars of this defense system: an email domain blacklist—cutting off resource supply at the source; differentiated verification channels—using WhatsApp to implement a dimensional strike against traditional SMS reception platforms; and multi-dimensional environmental fingerprinting with full behavior recording—leaving the traces of automated registration scripts nowhere to hide. The three are centrally orchestrated by the dynamic rollout engine driven by Statsig, empowering OpenAI with the ability to tighten or relax global strategies with a single click in the backend, without needing to release any code.

This article will break down this anti-black market defense matrix in the registration phase layer by layer.



Chapter One: Cutting It Off at the Source - The Strictest "Email Blacklist" Strategy on the Web

The first line of defense in traditional risk control is typically set at the "verification" stage—CAPTCHA, SMS verification codes, email confirmation links. But OpenAI's approach is much more direct: it moves the defense line forward to the "registration eligibility" itself. Before a user even reaches the verification step, the system has already made a judgment through a simple field—whether the email domain you are using is on the blacklist.


In the JSON returned by the ab.chatgpt.com/v1/initialize interface, under configuration ID 739871931, sits a list named disabled_domains containing 156 email domains. This list is distributed by Statsig, meaning entries can be added or removed in the backend at any time, without a frontend release. It is the first gate of the entire registration process—anyone who hits it never even sees the verification-code page.

These 156 domains were not randomly blacklisted. After breaking them down, we found that OpenAI's strike logic covers four clear dimensions, every cut severing the black market's resource lifeline.
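As a rough illustration of how such a gate might work, the sketch below checks an email's domain against a Statsig-distributed blacklist. The domains shown are a small excerpt from the categories discussed in this article; the function name and logic are our own reconstruction, not OpenAI's actual code.

```python
# Hypothetical reconstruction of the first gate: reject a registration attempt
# when the email's domain appears in the disabled_domains list. In production
# this list would be fetched from the Statsig config, not hardcoded.

DISABLED_DOMAINS = {
    "proton.me", "protonmail.com", "tutanota.com",   # privacy mailboxes
    "qq.com", "163.com", "126.com", "naver.com",     # regional providers
    "mail.ru", "yandex.ru", "wp.pl", "op.pl",
    "openai.com", "mail.openai.com",                 # internal domains
}

def is_domain_blocked(email: str) -> bool:
    """Return True if the email's domain sits on the blacklist."""
    domain = email.rsplit("@", 1)[-1].strip().lower()
    return domain in DISABLED_DOMAINS

# A blocked attempt never even reaches the verification-code page.
print(is_domain_blocked("bot@protonmail.com"))  # True
print(is_domain_blocked("user@example.org"))    # False
```

Because the set lives in a remotely distributed config rather than in code, adding a newly abused domain is a backend edit, not a release.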

Dimension One: A Blanket Ban on Privacy-Encrypted Emails

The most prominent category on the blacklist is email service providers that focus on privacy protection and end-to-end encryption: proton.me, protonmail.com, tutanota.com, etc., are all listed.

These types of emails are the "favorites" of black marketers hoarding accounts. They do not require a phone number to register, support unlimited aliases, and are insensitive to the registration IP, making them naturally suited to batch automation. Black market teams often have tens of thousands of ProtonMail or Tutanota accounts stockpiled, ready to be wired into registration bots. OpenAI's counter is exceptionally simple: since the core selling points of such emails are anonymity and untraceability, it treats them wholesale as untrustworthy. A blanket ban instantly turns the black market's massive hoard into garbage.

Dimension Two: Regional Email Blocking

A large number of local email service providers from specific countries appear on the list, presenting a clear geofencing logic:

Mainland China: qq.com, 163.com, 126.com, etc.

South Korea: naver.com

Japan: yahoo.co.jp

Russia: mail.ru, yandex.ru

Poland: wp.pl, op.pl

This is most likely not a random selection, but rather based on attribution analysis of registration data. OpenAI has clearly calculated the "low-quality registration rate" for email domains globally. When the proportion of spam accounts among registration requests from a country's local email exceeds the tolerance threshold, the entire domain is blacklisted. This means that the path for the black market to detour through niche local emails to bypass risk control is also blocked.

It is worth noting that the ban on qq.com and 163.com has an ancillary effect: it shows that OpenAI has no intention of leaving a loophole for users in mainland China through the traditional email registration path. To some extent, this is also part of forward-looking compliance risk management.

Dimension Three: "Mandatory SSO" for Big Tech Emails

The most surprising entries on the blacklist are gmail.com, hotmail.com, outlook.com, and icloud.com—the major tech emails with the highest global coverage.

This does not mean OpenAI is rejecting Gmail or Outlook users, but it wants you to enter in a different way—you must click the "Continue with Google/Microsoft/Apple" button and go through OAuth authorization login, rather than directly entering an email and custom password into the registration form.


The logic behind this is clear: POSTing an email address directly to a registration endpoint is too easy to simulate; automated scripts can do it in milliseconds. But going through OAuth means OpenAI can obtain structured credit data for this account from Google or Microsoft—how long the account has been registered, whether it’s bound to a phone number, whether its behavior in the Google ecosystem is normal. The cost of simulating a plain email is close to zero, but the cost of simulating an SSO login state from a major tech company with complete credit backing is orders of magnitude higher. This essentially transfers the cost of identity verification to Google and Microsoft.
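A minimal sketch of this routing decision follows. The domain-to-provider mapping reflects the blacklist entries discussed above; the function and return values are hypothetical, meant only to show the split between SSO-only domains and the plain password form.

```python
# Hypothetical sketch of "mandatory SSO": big-tech domains are not rejected
# outright but diverted to OAuth, so Google/Microsoft/Apple vouch for the
# account's history instead of OpenAI trusting a bare POSTed email.

SSO_ONLY_DOMAINS = {
    "gmail.com": "google",
    "hotmail.com": "microsoft",
    "outlook.com": "microsoft",
    "icloud.com": "apple",
}

def registration_route(email: str) -> str:
    """Decide whether a registrant sees the password form or an OAuth button."""
    domain = email.rsplit("@", 1)[-1].lower()
    provider = SSO_ONLY_DOMAINS.get(domain)
    return f"oauth:{provider}" if provider else "password_form"

print(registration_route("alice@gmail.com"))   # oauth:google
print(registration_route("bob@corp.example"))  # password_form
```

The effect is exactly the cost transfer described above: simulating a plain email costs nothing, while simulating a credible Google login state costs orders of magnitude more.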

Dimension Four: Phishing Prevention for Internal Corporate and High-Net-Worth Industries

Some special entries also appear on the blacklist: openai.com, mail.openai.com—these are OpenAI's own corporate email domains. Blacklisting their own emails might seem strange, but it is standard intranet security practice. It prevents internal employees from mixing company emails in the public registration process, and it also eliminates the possibility of attackers impersonating an OpenAI identity to carry out phishing.

In addition, domains of well-known consulting and financial companies such as bcg.com (Boston Consulting Group), bain.com (Bain), citadel.com (Citadel Securities), and moodys.com (Moody's) also appear on the list. Once the emails of these high-value enterprises are maliciously registered, they can be used for social engineering attacks—such as impersonating a BCG consultant to send a phishing request to OpenAI, or using a Citadel domain to register an account for internal or external fraud. Blocking such domains in advance is taking precautions.


From these four major dimensions, it can be seen that OpenAI's email blacklist has gone far beyond the crude traditional practice of "blacklisting a few disposable emails." It is a dynamically updated, multi-dimensional, precisely targeted resource blocking system. Its core philosophy is singular: to make the black market’s resource pool largely invalid right at the registration eligibility gate.

But this is only the first line of defense. Even if the black market acquires clean emails that are not on the blacklist, an even more thorny obstacle awaits them—the "dimensional strike on channels" during the identity verification phase.


Chapter Two: Dimensional Strike in Identity Verification - The Channel Game Between SMS and WhatsApp

The email blacklist is a block at the resource level. But for those black market actors lucky enough to use an email not on the blacklist, they immediately hit the second line of defense—identity verification. In this link, OpenAI did not choose to clash head-on with the endless stream of SMS receiving platforms, but instead used a beautiful "channel replacement," shifting the battlefield to a location of its own choosing.


In the JSON returned by the initialization interface, two arrays are hidden under the configuration ID 2516824722: sms and whatsapp, each containing a list of country/region codes. This seemingly ordinary list actually dictates which verification channel global users must go through during registration. And the allocation logic behind these two lists is the essence of OpenAI's suppression of the black market during the identity verification phase.

2.1 Core Logic: "Outsourcing" Risk Control to Meta

To understand this strategy, one must first look at how the traditional black market passes SMS verification. Virtual SIM card receiving platforms—these were once the core components of the black market's infrastructure. One machine packed with dozens of cheap SIM cards receives verification codes and automatically sends them back via API, with costs as low as a few cents per message. Faced with this infrastructure, simple SMS verification is practically useless.

But OpenAI's choice is: don't give them a chance to send an SMS.

In the configuration, countries/regions covering over 90% of the global population are forcibly assigned to go through the WhatsApp verification channel. The critical point of this decision is that, to have an account that can normally receive WhatsApp verification messages, you must meet the following conditions:

  1. A real phone number (registering for WhatsApp with a virtual number easily triggers a ban).
  2. The number must have successfully registered for WhatsApp and not been flagged by Meta's risk control system.
  3. A stable network environment, as frequently switching IPs to log into WhatsApp is also high-risk behavior.

Each of these three conditions exponentially increases the costs for the black market. Traditional SMS receiving platforms are completely useless against WhatsApp verification, because these virtual numbers have never even registered for WhatsApp. Even if they had, they would be quickly identified and banned by Meta's anti-spam system.
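The channel split under configuration ID 2516824722 reduces to a simple lookup, sketched below. The country list is partial and drawn from the exemptions discussed in this chapter; the function is our own illustration, not captured code.

```python
# Hypothetical sketch of the sms/whatsapp channel split: a dozen high-cost
# markets keep SMS, everyone else is routed through WhatsApp and thereby
# through Meta's account risk control.

SMS_COUNTRIES = {"US", "CA", "JP", "KR", "FR", "TW", "TH",
                 "FK", "NU", "TL", "VU", "SM"}

def verification_channel(country_code: str) -> str:
    """Pick the phone-verification channel for a registrant's region."""
    return "sms" if country_code.upper() in SMS_COUNTRIES else "whatsapp"

print(verification_channel("US"))  # sms — SIM acquisition is expensive here
print(verification_channel("IN"))  # whatsapp — cheap local SIMs are useless
```

The elegance is that the defender's logic is one line; the attacker's counter requires farming WhatsApp accounts at scale.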

Looking deeper, OpenAI's move is equivalent to outsourcing a thorny technical problem—determining whether there is actually a real person behind this phone number—to Meta. WhatsApp's parent company, Meta, has the world's largest social graph and top-tier anti-spam account detection systems. A phone number's historical behavior in the WhatsApp ecosystem, its group activity, its recorded reports, and its registration duration—all these dimensions collectively form a credit evaluation system far more complex than "can it receive a verification code."

For the black market to break through OpenAI's verification, it must first pass Meta's gate. And Meta's anti-spam capabilities are forged through over a decade of time and countless millions of attacks. This move of "borrowing a knife to kill another" is clean and decisive.

2.2 The Cost and Efficiency Game of SMS vs. WhatsApp

Of course, the SMS channel has not been completely shut down. The configuration still retains 12 countries/regions that can use SMS verification, including the US, Canada, Japan, South Korea, France, etc. The logic behind this is also an economic calculation, just with the algorithm reversed.

These exempted regions share a common feature: the cost of acquiring a phone number is extremely high, and telecommunications regulations are strict. In the US and Japan, the real-name authentication threshold for a SIM card itself constitutes a natural barrier. The cost of mass-registering in these markets, for the phone number alone, already exceeds the selling price of a free account. Since it's not viable for them to bot it, there's no need to defend it too tightly.


Conversely, in those countries forced to use WhatsApp—India, Indonesia, Brazil, Nigeria, etc.—international SMS in these regions is not only expensive, but delivery rates are also a concern. If OpenAI opened SMS verification to these regions, it would not only bear high SMS costs but also face the risk of the black market using cheap local SIM cards to mass-register accounts.

By going with the WhatsApp Business API, OpenAI pays Meta a lower fee and is completely untethered from the network quality of local operators. It's stable, cheap, and seamlessly uses Meta's risk control to block the black market—it is a win-win business.

2.3 The Deep Logic of the "Celestial Dragon" (Privileged) Country/Region Exemptions

So what is so special about the 12 countries/regions granted the privilege of using SMS? Breaking them down, they can be divided into three categories:

Category One: Five Eyes Alliance and Developed Economies—US (US), Canada (CA), Japan (JP), South Korea (KR), France (FR). Users in these markets have high value and a strong willingness to pay, and both the level of real-name registration and the acquisition cost of phone numbers are high. The comprehensive cost of botting accounts here far exceeds the returns, so OpenAI can safely give the green light.

Category Two: Geopolitics and Special Traffic Zones—Taiwan, China (TW), Thailand (TH). These two locations are relatively high-quality traffic pools for OpenAI in Asia, with relatively clean payment chains and low credit card fraud rates. The supporting payment risk control capabilities are strong enough, so there is no need for an extra lock at the SMS verification stage.

Category Three: A Few Small Places You'd Spend a While Looking for on a Map—Falkland Islands (FK), Niue (NU), Timor-Leste (TL), Vanuatu (VU), San Marino (SM). At first glance, this seems baffling, but the logic is simple: these places have extremely small populations, and large-scale black market SMS reception resources simply don't exist there. Since nobody is botting there, maintaining the status quo is the lowest-cost option.

From this list, it is clear that OpenAI's allocation of verification channels has no consideration of "principled fairness," only cold cost-benefit calculations. Which regions can use SMS, and which must use WhatsApp—every decision is built on a precise estimation of the abundance of local black market resources and the cost of identity falsification.

From the black market's perspective, this configuration means: unless you have a way to mass-acquire "clean," active WhatsApp accounts, the registration gateway in most parts of the world is closed to you. And the cost of mass-farming WhatsApp accounts is already beyond what traditional SMS platforms can cover.

This is the true meaning of a "dimensional strike." It's not about blocking your road; it's about forcing you onto a battlefield where building the road is too expensive.

But OpenAI does not stop there. Even if the black market handles a clean email not on the blacklist and procures a WhatsApp account that can pass verification, there is a third line of defense waiting in the backend—an "environmental fingerprint and behavior monitoring" system far beyond the imagination of traditional risk control.

Chapter Three: The Inescapable "Digital Portrait" - Full Behavior Recording and Environmental Fingerprints

The email blacklist filtered out the mass-registration resource pool, and WhatsApp verification blocked the pathways of cheap SMS receiving platforms. Up to this point, out of ten accounts the black market could push to the registration stage, maybe one survives. But OpenAI's defense is not over—even if you luckily make it here with a clean email and an active WhatsApp account, a monitoring net you can't even perceive is waiting for you in the backend.

The core of this monitoring net is not traditional CAPTCHAs, but full-dimensional collection and comparison of your "digital portrait." Every minute operational detail that you wouldn't even notice yourself is being recorded, quantified, and scored.


3.1 Human-Machine Behavior Analysis

At the very end of the initialization configuration, there is a field that looks innocuous: session_recording_rate: 1. This is the most spine-chilling value in the entire JSON.

It means that for every user who reaches the registration page, regardless of whether they ultimately register successfully, their current session is being recorded. The recording mentioned here is not screen recording, but a technology called human-machine behavior analysis—the system is collecting your mouse movement trajectory, click frequency, the millisecond interval between two keystrokes, page scrolling speed, and even the time the cursor hovers over a certain input box.

These behavioral data mean nothing on their own, but put together, they piece together a highly recognizable "personality portrait." When a real human fills out a form, the mouse trajectory has micro-jitters and random pauses, the typing rhythm alternates between fast and slow, and there are hesitant cursor movements when switching between different fields. However, a script—whether it's a browser extension or a Playwright-driven automated registration bot—has fixed behavioral characteristics: the mouse trajectory is either absolute geometric straight lines or mechanically generated curves; keystroke intervals are uniform, and the operational rhythm between page transitions is fixed.

Traditional CAPTCHAs can't see these differences. But human-machine behavior analysis can.

This behavioral analysis is not a real-time interception sieve, but a post-event chain of evidence. Your operations are recorded in full and sent to the backend behavioral analysis engine for scrutiny. Even if you pass every verification step on the frontend and successfully acquire an account, being judged a "non-human operation" during behavioral analysis will still trigger a ban later. The black market will never know which mouse movement was "too straight" and got them flagged.
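To make the human-versus-script distinction concrete, here is a toy illustration of two cheap behavioral features: keystroke-interval variance and mouse-path straightness. The feature names, sample data, and thresholds are invented for the demo; a real engine would combine many more signals.

```python
# Toy human-machine behavior features. Humans type with uneven rhythm and move
# the mouse in jittery arcs; scripts emit metronomic keystrokes and ruler-
# straight drags. All numbers below are illustrative.
import math
import statistics

def keystroke_variance(intervals_ms: list[float]) -> float:
    """Population std-dev of inter-key intervals; near zero for scripts."""
    return statistics.pstdev(intervals_ms)

def path_straightness(points: list[tuple[float, float]]) -> float:
    """Ratio of straight-line distance to actual path length (1.0 = a ruler)."""
    path = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    direct = math.dist(points[0], points[-1])
    return direct / path if path else 1.0

human_keys = [180.0, 95.0, 240.0, 130.0, 310.0]   # jittery rhythm
bot_keys   = [100.0, 100.0, 100.0, 100.0, 100.0]  # metronomic
print(keystroke_variance(human_keys) > keystroke_variance(bot_keys))  # True

bot_path = [(0.0, 0.0), (50.0, 50.0), (100.0, 100.0)]  # perfectly collinear
print(round(path_straightness(bot_path), 6))  # 1.0
```

Individually these features are weak; their power comes from aggregation across an entire recorded session, which is why `session_recording_rate: 1` matters so much.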

3.2 Environmental Contradiction Detection: The "Deduction Items" in Risk Control Models

If behavioral analysis looks at "how you operate," environmental fingerprint detection looks at "what you are operating with." In the derived_fields and evaluated_keys fields of the configuration, the system records device profiles far more detailed than typical User-Agents.

Here lies an easily misunderstood yet exceptionally important risk control concept: environmental contradiction detection is not about "determining that you are a bad actor," but about "adding points" or "deducting points" from the current session.

Take geography and language environment as an example: when a request IP is geo-located to the US, but the browser language preference is zh-HK (Hong Kong Chinese), this triggers a "deduction item" in the risk control engine. Of course, there is a perfectly reasonable explanation for this combination—a Hong Kong immigrant living in San Francisco using their accustomed Chinese system is entirely normal. But what the risk control model weighs is not "whether it's possible," but "how probable this combination is in black market samples." In bot mass-registration scenarios, proxy IPs concentrated in US datacenters while the browser language exposes the script developer's Chinese environment is precisely a classic configuration anomaly. So the session isn't directly judged as black market; the contradiction is recorded as a risk weight, waiting to be compounded with other signals.

There are many dimensions like this:

Does the browser version you reported match the real version exposed by the JavaScript underlying navigator object? If the User-Agent was changed via script but underlying properties were left untouched, a hard mismatch appears here, serving as a heavily weighted deduction.

What is your browser window size? 1920x1080 is completely normal, but the default window geometry of the headless browsers favored by mass-registration bots is often anomalous—and a desktop browser fingerprint paired with an obviously mobile viewport is an equally loud warning sign.

How many registration requests has your DeviceId been associated with in a short period? If the same device fingerprint appears in registration sessions across multiple different IPs within minutes, the device reuse logic of the black market is fully exposed. This item's weight is relatively higher because a normal person would almost never use the same device to complete multiple registrations from different IPs in a short timeframe.

Taken individually, every one of these metrics has a reasonable exception. So, to prevent false positives, the system avoids black-and-white single-point interception; instead, each suspicious signal accumulates weight in the backend, and only when the total score crosses a threshold is an action triggered. This makes it very hard for the black market to reverse-engineer exactly which step exposed them, and it also ensures that normal users aren't harmed by an isolated coincidence.
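The additive "deduction item" model described above can be sketched as a weighted sum against a threshold. The signal names echo the dimensions discussed in this section, but every weight and the threshold are invented for illustration.

```python
# Hypothetical additive risk scoring: each environmental contradiction adds
# weight instead of triggering a standalone block. Weights and threshold are
# made up; a real engine would tune them against labeled black-market samples.

RISK_WEIGHTS = {
    "ip_language_mismatch": 15,   # US datacenter IP + zh-HK browser locale
    "ua_navigator_mismatch": 40,  # spoofed User-Agent vs real navigator props
    "headless_viewport": 25,      # suspicious default window geometry
    "device_id_reuse": 45,        # same fingerprint across many IPs, fast
}
BAN_THRESHOLD = 60

def risk_score(signals: set[str]) -> int:
    return sum(RISK_WEIGHTS.get(s, 0) for s in signals)

def verdict(signals: set[str]) -> str:
    # One mismatch alone stays under the threshold, so the Hong Kong
    # immigrant in San Francisco is not falsely banned.
    return "flag_for_ban" if risk_score(signals) >= BAN_THRESHOLD else "allow"

print(verdict({"ip_language_mismatch"}))                     # allow
print(verdict({"ip_language_mismatch", "device_id_reuse"}))  # flag_for_ban
```

Note how the heaviest weights sit on the signals a normal user can almost never produce, like device-fingerprint reuse across IPs.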


At this point, OpenAI's three lines of defense during the registration phase are clearly visible: the email blacklist cuts off the resource supply at the source, WhatsApp verification applies channel suppression during the identity verification phase, and environmental fingerprints and behavior recording weave an omnidirectional monitoring net in the backend. But the strength of these three layers is not set in stone. What truly makes this system come alive, and keeps attackers forever struggling to catch up, is the final trump card—the dynamic gray-release engine driven by Statsig.



Chapter Four: The Dynamic Defense Command Center - "Gray-Release Risk Control" Driven by Statsig

The email blacklists, WhatsApp verification, and environmental fingerprint monitoring described in the previous three chapters already look very tight. But if this system were hardcoded—the domain list hardcoded in the frontend, the verification channel configuration packed into a release—then the black market would only need to spend time reverse-engineering it once to find all the rule boundaries, and then bypass them one by one like solving a puzzle.

OpenAI's true trump card is making these rules alive. They are not static walls, but valves that can be tightened or loosened at will.


The core of this dynamic capability is the Statsig feature management platform. In the JSON returned by ab.chatgpt.com/v1/initialize, besides specific blacklists and configuration items, there are also a large number of boolean switches starting with gate__, mutable parameters carried by dynamic_configs, and functional entry points named with an enable_ prefix. These fields can be modified in real-time in the backend and synchronized to the initialization requests of all global users within milliseconds. What the black market is facing is a defense matrix that can evolve itself.

4.1 From Hardcoding to Valve-Style Adjustments: The Value of feature_gates

In the captured feature_gates list, there is a seemingly ordinary switch: gate__authapi_add_phone_enforce_sms_only_country_codes: false. At present, its value is false, meaning this rule is dormant, and nothing happens.

But once it is set to true, the situation completely changes.

Chapter Two of this article detailed how OpenAI suppresses SMS-receiving platforms by forcing most countries through WhatsApp verification. But if one day, black market teams in a certain region overcome this barrier—say, by relying on large-scale account-farming factories to produce enough cheap, active WhatsApp accounts, causing garbage registrations in that area to rebound—OpenAI doesn't need to rewrite any verification logic at all. It simply locates this switch in the Statsig backend, flips it to true, and all registration requests from that region are immediately forced back onto the SMS verification path. The WhatsApp accounts the black market painfully farmed are instantly rendered useless.

This is the truly terrifying part of this switch: it's not a lock, but a lock core that can be swapped out at any time. The black market spends weeks or even months pouring resources into breaking a verification path, and OpenAI uses a single click to make those investments practically worthless. By the time the black market readjusts its scripts to accommodate SMS verification, OpenAI can flip the switch back to false. The initiative of the attack-and-defense tempo is completely out of the hands of the black market.
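The gate name below is the one actually captured in the config; everything else—the payload key, the evaluation logic, the country list—is our guess at how such a valve might be wired, shown only to make the "flip a switch, change the world" mechanic tangible.

```python
# Sketch of valve-style enforcement: the same code path changes behavior the
# instant a Statsig-style gate flips, with no client release. The gate name is
# real (captured); "sms_only_country_codes" and the logic are hypothetical.

config = {
    "feature_gates": {
        "gate__authapi_add_phone_enforce_sms_only_country_codes": False,
    },
    "sms_only_country_codes": ["IN", "BR"],  # hypothetical payload
}

def phone_channel(country: str) -> str:
    gate = config["feature_gates"][
        "gate__authapi_add_phone_enforce_sms_only_country_codes"
    ]
    if gate and country in config["sms_only_country_codes"]:
        return "sms"  # the farmed WhatsApp accounts no longer help
    return "whatsapp"

print(phone_channel("IN"))  # whatsapp — gate dormant

# A backend operator flips the gate; every new initialize call sees it.
config["feature_gates"]["gate__authapi_add_phone_enforce_sms_only_country_codes"] = True
print(phone_channel("IN"))  # sms — weeks of WhatsApp farming wasted
```

The attacker's adaptation cycle is measured in days; the defender's is one dictionary write.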

4.2 A Cold-Start Contingency: Real-Time Leaked-Credential Checks

Another field captured in the config is even more intriguing: enable_signup_leaked_credential_check: false.

This switch is also currently off. But its very existence is a signal: OpenAI has already integrated the ability to compare against global known data leak databases (commonly known as "social engineering databases") at the code level. Once enabled, the system will check the current email against historical internet data leak events at the first step of registration—if the email account and password have ever been made public in a breach, it will be flagged as high-risk at OpenAI's registration entrance.

When the black market runs large-scale credential-stuffing attacks (known in the trade as "account sweeping"), they frequently use real emails extracted from these leaked databases. The addresses look no different from normal emails, but the person behind them is long gone. Keeping this switch in reserve is like burying a cold-start landmine at the front door—normally it bothers no one, but the moment a wave of such attacks appears in a region, flipping this field lets OpenAI intercept them within the very first second of registration.
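A minimal offline sketch of what such a check could look like follows. A production system would load a breach corpus of millions of entries or query a remote k-anonymity API; here a one-element local set stands in for that database, and all names are ours.

```python
# Offline sketch of a leaked-credential check at signup. The "corpus" below is
# a stand-in: real systems compare against breach databases, typically via
# hashed lookups so the raw password never leaves the client.
import hashlib

# Stand-in breach corpus (SHA-1 digests of known-leaked passwords).
LEAKED_SHA1 = {hashlib.sha1(b"password123").hexdigest()}

def credential_leaked(password: str) -> bool:
    """True if this password's hash appears in the breach corpus."""
    return hashlib.sha1(password.encode()).hexdigest() in LEAKED_SHA1

print(credential_leaked("password123"))      # True — block or force a reset
print(credential_leaked("x9$Long-Unique!"))  # False
```

The point of the gate is precisely that this lookup costs almost nothing once enabled, yet kills a credential-stuffing wave at step one.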

4.3 The Labyrinthization of Registration Paths

In addition to specific defensive rules, the dynamic configuration also contains a series of switches designed to disrupt the flow of automated scripts. Their logic is not to intercept, but to make the bots' preset paths invalid.

enable_dynamic_redirect_for_existing_username_on_signup_screen: true is one of them. When a username is already occupied, normal flow would statically redirect to a specific prompt page. But if this switch is enabled, the system can randomly change the redirect target based on the current session's risk score—sometimes it's the email verification page, sometimes it demands more supplemental info, and sometimes it redirects straight to SSO. Automated scripts rely on fixed URL redirection logic; when the path becomes a labyrinth, the scripts freeze at unpredictable forks.

Partnered with it is the enable_redirect_to_social_for_existing_email series of switches. When the system detects the current email belongs to a high-risk category, it dynamically forces the registration flow from "fill out a form to register" to "please log in with Google/Microsoft." This means the exact same email will be directed to entirely different registration paths depending on its risk assessment. If a black market script only accommodates one path, the other instantly becomes a blind spot.
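The "labyrinth" behavior described in this section can be sketched as a redirect chooser keyed on the session's risk score. The paths, cutoff, and randomization scheme are all invented; only the idea—low-risk sessions get the boring static page, suspicious ones get an unpredictable fork—comes from the config.

```python
# Hypothetical sketch of dynamic redirects for an already-taken username:
# the destination varies with the session's risk score, so a script that
# hardcodes one URL stalls at the fork. Paths and cutoff are invented.
import random

SUSPICIOUS_DETOURS = [
    "/signup/verify-email",
    "/signup/more-info",
    "/signup/continue-with-sso",
]

def existing_username_redirect(risk_score: int) -> str:
    if risk_score < 20:
        return "/signup/username-taken"  # the predictable static page
    # Suspicious sessions get a randomized, harder-to-script detour.
    return random.choice(SUSPICIOUS_DETOURS)

print(existing_username_redirect(5))  # /signup/username-taken
```

For a human the detour is a minor annoyance; for an automation script built around fixed URL transitions, it is an unhandled branch.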

4.4 The Ultimate Advantage of the Attack-Defense Time Difference

The collection of all these dynamic capabilities ultimately translates into a tactical advantage that the black market can hardly overcome: time.

For the black market to study a set of risk control rules, crack the logic, develop adaptive scripts, test, and deploy at scale—this is a cycle measured in days, if not weeks. Yet, for OpenAI to shift its defensive focus—tweaking the value of a switch in the Statsig backend—takes only a few seconds, and this change is synchronized globally via CDN to all initialization requests in milliseconds. The time difference between the two is a weapon in itself.

More importantly, this time difference means OpenAI doesn't need to chase a 100% interception rate. Traditional risk control thinking requires catching every single attacker; missing one is failure. But in the logic of dynamic defense, missing some doesn't matter—because as the comprehensive cost of registration is continuously pushed higher, the black market finds the balance of input and output starting to tilt. If spending 30 to register an account yields a sale price of only 25, nobody is willing to continue this business after just three transactions.

The defensive philosophy exhibited across the entire configuration boils down to this single sentence: Do not chase a 100% interception rate, only aim to dynamically adjust the attacker's ROI. When the cost of registration exceeds the profit of selling the account, the black market will naturally exit the field without requiring you to seal every hole yourself. And what Statsig grants OpenAI is the power to tighten this cost valve at any moment.

Conclusion: Demanding Security from Cost - Lessons for Other Big Model Providers

Having broken down this system, looking back at the entire defense matrix reveals a clear thread.

The email blacklist is not verifying if you're a real human, but invalidating the black market's resource pool at scale. Mandatory WhatsApp verification is not adding an extra lock, but pushing costs upstream towards the account-farming supply chain. Environmental fingerprints and behavioral recordings are not outright blocking registration requests, but real-time scoring every session so that interception decisions are always delayed until sufficient evidence is compounded. The gray-release engine via Statsig allows all these strategies to be combined, toggled, and tightened or loosened based on specific regions or attack patterns at any moment.

From "identifying bots" to "raising registration costs"—this is not just one company's defensive evolution, but a paradigm shift underway across the entire internet risk-control domain. For large-model providers in China, this system offers several transferable lessons:

Firstly, rather than endlessly making additions at single points (more complex CAPTCHAs, trickier slider difficulties), consider shifting the frontline to the resource supply end—the email domain blacklist is a low-cost, high-efficiency paradigm.

Secondly, utilizing existing super-app ecosystems (WeChat, Alipay, etc.) to construct multi-layered verification channels is fundamentally the same logic as OpenAI borrowing Meta's WhatsApp network: outsource identity verification to platforms that have accumulated massive strata of user credit data.

Thirdly, the dynamization of risk control rules should not just be a slogan. Whether you possess infrastructure like Statsig, which allows strategies to take global effect in seconds, dictates who holds the initiative in the tug-of-war between attack and defense.

The future of black market offense and defense will no longer hinge on who can build thicker walls, but on which party can more finely manipulate the attacker's economic ledger. When an invisible cost tag is placed behind every gray market registration account, the defender has already won beyond the scope of rules.

Appendix: Blacklist of 156 email domains attached at the end of the article.


Salute

Thank you all for your support! If you found this interesting, don't hold back—like and share it.

If you want to see my articles as soon as they drop, don't forget to star ⭐ so you don't lose track later.

That's all for today. Win or lose, life is bold—see you next time!
