Genra

Posted on • Originally published at genra.ai

iQIYI's AI Actor Database Sparks Outrage in China: Is This the Future of Entertainment?

On the morning of April 20, 2026, iQIYI -- China's largest streaming platform and the closest equivalent to Netflix in the Chinese market -- held a press event that was supposed to showcase the future of entertainment. CEO Gong Yu took the stage and unveiled what he called the "AI Celebrity Database," a collection of over 100 actors who had allegedly authorized the use of their likenesses, voices, and biometric data for AI-generated film and television productions.

The announcement was paired with the launch of Nadou Pro, iQIYI's upgraded AI production tool, positioned as a platform where AI filmmakers could quickly connect with actors willing to license their image for digital productions. The message was clear: iQIYI was building the infrastructure for a future where AI-generated entertainment content starring real actors' digital replicas would become mainstream.

By that afternoon, everything had gone sideways.

Multiple Chinese actors took to social media to publicly deny they had signed up for the database. Fan communities erupted. The hashtag "爱奇艺疯了" (iQIYI went nuts) rocketed to the #1 trending topic on Weibo, China's equivalent of Twitter/X, with hundreds of millions of views. What was meant to be a triumphant product launch became one of the most significant public backlashes against AI in China's entertainment industry to date.

This is the story of what happened, why it happened, and what it means for the global AI video industry. It's a story that touches on technology, labor rights, corporate overreach, cultural values, and the fundamental question of who owns a person's likeness in an age where that likeness can be replicated at the push of a button.

What iQIYI Actually Announced

To understand the backlash, you need to understand what iQIYI put on the table. The announcement had three core components.

The AI Celebrity Database

iQIYI presented a database of over 100 actors who had purportedly agreed to let their likenesses be used in AI-generated productions. This wasn't a vague concept -- the company described a structured system where an actor's facial features, voice patterns, and physical mannerisms would be digitized and made available to production teams using iQIYI's AI tools. The implication was that a filmmaker could select an actor from the database and generate scenes featuring that actor's digital replica without the actor needing to be physically present on set.

Nadou Pro

Nadou Pro is the upgraded version of iQIYI's existing Nadou AI production platform. The tool was positioned as an end-to-end AI filmmaking suite that could handle scripting, scene generation, character animation, voice synthesis, and post-production. The AI Celebrity Database was presented as a key feature of Nadou Pro: rather than generating generic AI characters, filmmakers could work with digital versions of recognizable, established actors.

The Vision Statement

CEO Gong Yu framed the announcement within a broader thesis about the future of entertainment production. He suggested that AI-generated content would eventually become the dominant mode of film and television production, and that traditional human-performed content could one day be considered "intangible cultural heritage" -- a phrase typically reserved for traditional crafts and art forms that are being preserved because they're no longer part of mainstream practice.

That comment, more than anything else in the presentation, would come back to haunt him.

The Market Context

It's worth noting the business pressures behind the announcement. iQIYI, which went public on NASDAQ in 2018, has faced persistent challenges with profitability. The Chinese streaming market is intensely competitive, with Tencent Video and Youku (backed by Alibaba) fighting for the same subscribers and the same content. Content costs have been rising while user growth has slowed. In this environment, AI-generated content isn't just a technological novelty -- it's a potential lifeline for a business model that has struggled to make the economics of original content production work at scale.

That financial pressure helps explain why iQIYI moved aggressively on the AI Celebrity Database. The company wasn't just showcasing technology -- it was signaling to investors and the market that it had a plan to dramatically reduce content production costs while maintaining the star power that draws subscribers. The problem was that this plan was built on a consent foundation that, by all evidence, was far shakier than the stage presentation suggested.

The Backlash: "iQIYI Went Nuts"

The reaction was swift, public, and devastating for iQIYI's messaging.

Actors Deny Involvement

Within hours of the announcement, multiple Chinese actors and their management teams posted statements on Weibo denying that they had authorized the use of their likenesses. Some stated they had never been contacted. Others said they had participated in preliminary discussions but had not signed any agreements authorizing the kind of broad AI usage iQIYI described. The gap between what iQIYI claimed on stage and what actors said behind the scenes was immediate and public.

The denials weren't quiet press statements. They were angry social media posts from actors and managers who felt their names had been used without proper authorization to lend credibility to a product launch.

The timing made things worse. By announcing the database at a high-profile press event without first publicly confirming individual actor participation, iQIYI put performers in a reactive position. Instead of actors announcing their own participation on their own terms, they were forced to scramble and issue denials to their own fan bases. The power dynamic was inverted: a platform was claiming ownership of actors' cooperation before those actors had agreed to cooperate.

Fan Communities Mobilize

Chinese fan communities -- which are highly organized, digitally savvy, and fiercely protective of their favorite actors -- treated the announcement as a direct threat. The idea that a streaming platform could generate content using an actor's likeness without that actor's active, ongoing participation struck at the core of what fans value: the human performance, the craft, the personality that makes a particular actor irreplaceable.

Fan groups coordinated hashtag campaigns, compiled evidence of actors' denials, and pressured iQIYI's corporate social media accounts. The hashtag #爱奇艺疯了# (iQIYI went nuts) accumulated hundreds of millions of views within the first 24 hours.

The "Intangible Cultural Heritage" Comment

Gong Yu's remark about human-made entertainment potentially becoming "intangible cultural heritage" acted as an accelerant. In the Chinese cultural context, designating something as intangible cultural heritage is an acknowledgment that it's a relic of the past -- something to be preserved in a museum, not something with a living future. Applying that framing to human acting, directing, and filmmaking felt dismissive and arrogant to an industry already anxious about AI displacement.

Critics pointed out the irony: a company that built its business on the work of human actors and directors was now suggesting those same people might become historical curiosities. Entertainment industry commentators called it tone-deaf. Some called it worse.

The comment also inadvertently undermined iQIYI's own clarification. If the AI Celebrity Database is truly just a connection platform that respects actor agency, why is the CEO publicly musing about a future where human performance is a museum piece? The disconnect between the damage control narrative ("this is about collaboration") and the CEO's vision statement ("human art is becoming heritage") was difficult to reconcile.

Industry Reaction

The China Performing Arts Association and the Beijing Actors' Association both weighed in within days, issuing statements emphasizing that performers' likeness rights are protected under Chinese civil law and that any use of an actor's image, voice, or biometric data for AI generation requires explicit, informed consent. Several prominent directors publicly criticized the announcement, with some calling for industry-wide standards on AI usage in entertainment production.

iQIYI's Damage Control

Facing a full-scale public relations crisis, iQIYI moved to contain the damage.

The "Misunderstanding" Framing

iQIYI's official response characterized the backlash as a "misunderstanding" of what was actually announced. The company insisted that the AI Celebrity Database was not a system for generating content using actors' likenesses without their involvement, but rather a matchmaking platform designed to connect AI creators with actors who might be interested in licensing their image for specific projects.

SVP Liu Wenfeng's Clarification

Senior Vice President Liu Wenfeng issued a more detailed statement clarifying the company's position. Key points included:

  • No current licensing: iQIYI is not currently licensing actor likenesses for AI-generated content without actor involvement in specific projects.
  • Connection platform: Nadou Pro is designed to "enable AI creators and actors to more quickly establish connections," not to bypass actors entirely.
  • Actor control: Actors retain full control over how their image is used and must approve each specific use case.
  • Opt-in model: Participation in the database is voluntary and actors can withdraw at any time.

The Gap Between Announcement and Clarification

The Timing Problem

iQIYI's clarification came quickly, but in the age of social media, "quickly" still means after the narrative has already been set. By the time Liu Wenfeng's statement was published, millions of Weibo users had already read actors' denials, formed their opinions, and reshared the "iQIYI went nuts" hashtag. The initial framing -- "iQIYI is using actors without their permission" -- became the dominant story regardless of the subsequent clarification.

Industry observers noted a significant gap between the tone of the original announcement and the subsequent clarification. The stage presentation emphasized AI-generated content at scale, with the celebrity database as a key differentiator. The damage control emphasized human oversight, actor consent, and a modest matchmaking function. The question many asked: which version represents iQIYI's actual roadmap?

This kind of gap -- between what a company says during a product launch and what it says during crisis management -- is becoming a recurring pattern in the AI industry. Companies announce ambitious AI capabilities to impress investors and media, then walk back the implications when the public reacts to what those capabilities actually mean for real people.

Lessons from the PR Fallout

The iQIYI situation offers a case study in how not to launch an AI product that affects real people's rights and livelihoods. Several communication failures compounded the problem:

  • Announcing before securing: Public claims about 100+ actors' participation should not have been made until every single one of those actors had confirmed, in writing, their understanding of and agreement to the specific terms being presented on stage.
  • Overreaching language: The "intangible cultural heritage" comment signaled a vision where human performers are obsolete. Even if the technology eventually enables that, saying it out loud at a product launch alienates the very people the platform depends on today.
  • Insufficient stakeholder preparation: Actors and their teams should have been briefed before the public announcement, given a chance to review the messaging, and aligned on how the database would be described.
  • Reactive rather than proactive clarification: iQIYI's damage control came after the backlash was already trending nationally. A preemptive FAQ or detailed documentation released alongside the announcement could have addressed concerns before they became a crisis.

The Bigger Question: AI vs. Human Actors

The iQIYI controversy didn't happen in a vacuum. It's the latest flashpoint in a global conversation about AI's role in entertainment that has been building for years.

The SAG-AFTRA Strike Set the Stage

In 2023, the Screen Actors Guild -- American Federation of Television and Radio Artists (SAG-AFTRA) went on strike for 118 days. While compensation and streaming residuals were major issues, AI was the existential one. Actors were concerned that studios would scan their likenesses during a single day of work and then use AI to generate performances indefinitely without further compensation or consent.

The resulting agreement included protections requiring informed consent for AI use of an actor's digital replica, with specific provisions for how likenesses could and couldn't be used. It was the first major labor agreement in any industry to address AI-generated digital replicas head-on.

The Technology Has Caught Up

The concerns that were largely theoretical during the SAG-AFTRA negotiations in 2023 are fully practical in 2026. AI video generation tools can now produce realistic human likenesses, convincing voice synthesis, and coherent scene-length performances. The cost of generating a digital performance has dropped from millions of dollars in VFX budgets to a fraction of that using AI tools.

Consider the progression. In 2023, generating a convincing 10-second clip of a recognizable person required significant technical expertise and computing resources. By mid-2025, consumer-grade tools could produce passable face-swaps and voice clones. In 2026, state-of-the-art AI video systems can generate full-body performances with accurate facial expressions, lip-synced dialogue, and natural body language from a relatively small training dataset of reference footage.

The iQIYI announcement wasn't shocking because the technology is implausible -- it was shocking because the technology is entirely plausible and the consent framework was visibly absent.

Economic Pressures Are Real

Production costs in the entertainment industry have been rising steadily. A single episode of a major streaming series can cost $10-30 million. AI-generated content promises dramatic cost reductions: no actor scheduling conflicts, no location shoots, no overtime, no reshoots. For a streaming platform like iQIYI, which has struggled to sustain profitability for most of its history, the economic incentive to replace human labor with AI is enormous.

This is the tension at the heart of the controversy. The technology works. The economics favor it. But the ethical and legal frameworks haven't caught up.

The Content Volume Problem

There's another dimension that rarely gets discussed: the sheer volume of content that streaming platforms need. iQIYI, like Netflix, Amazon, and every other major streamer, faces relentless pressure to produce more original content to retain subscribers. In 2025 alone, iQIYI released over 200 original series and films. Each one requires actors, crews, sets, and months of production time.

AI-generated content promises to dramatically increase production velocity. A digital replica doesn't get tired, doesn't have scheduling conflicts, doesn't age between seasons, and can be "cast" in multiple productions simultaneously. For a platform burning through content to feed an algorithm, the appeal is obvious. But "appealing to the platform" and "acceptable to the people whose likenesses are being used" are two very different things.

Fan Culture as a Check on Corporate Power

One aspect of the iQIYI situation that Western observers may underestimate is the role of fan culture in Chinese entertainment. Chinese fan communities (known as "饭圈" or "fan circles") are extraordinarily organized. They coordinate purchasing campaigns, manage public image strategies for their favorite stars, and mobilize rapidly against perceived threats. When iQIYI announced the AI Celebrity Database, fan communities didn't just express displeasure -- they organized. They compiled and cross-referenced actor statements, identified inconsistencies in iQIYI's claims, coordinated hashtag campaigns, and pressured brands associated with affected actors to issue clarifying statements.

In this case, fan culture functioned as an accountability mechanism that no regulator or union had yet provided. It was fans, not lawyers or government officials, who forced iQIYI's rapid retreat.

This dynamic is worth watching as AI-generated entertainment becomes more prevalent globally. In markets where performer unions are weaker or regulatory enforcement is slower, fan communities may be the most effective early-warning system against corporate overreach. The iQIYI case demonstrates that in the social media age, public sentiment can move faster than legal processes -- and can impose reputational costs that are just as consequential as regulatory penalties.

Where the Lines Are Being Drawn: Global AI Likeness Regulation

Governments around the world are scrambling to establish rules for AI-generated digital replicas. Here's where things stand as of April 2026.

United States
  • Framework: White House National AI Policy Framework (March 2026)
  • Status: Framework published; legislation pending
  • Key provisions: Recommends federal protections for AI-generated digital replicas. Calls for explicit consent requirements and compensation frameworks for use of a person's likeness by AI systems. Individual states (California, New York, Tennessee) have existing or pending digital replica laws.

European Union
  • Framework: EU AI Act -- transparency requirements
  • Status: Taking effect August 2026
  • Key provisions: Requires clear labeling of AI-generated content. High-risk AI systems (which may include digital replica generation) are subject to conformity assessments. GDPR provisions on biometric data processing apply to face/voice capture for AI training.

China
  • Framework: Civil Code + Deep Synthesis Regulations (2023) + Generative AI Measures (2023)
  • Status: In effect
  • Key provisions: The Civil Code protects portrait rights (Article 1019) and voice rights. Deep synthesis rules require consent for generating identifiable individuals. Generative AI measures require content labeling and prohibit generating content that infringes on others' likeness rights.

India
  • Framework: IT Rules 2026
  • Status: In effect
  • Key provisions: Requires labeling of AI-generated content. Platforms must remove AI-generated content that impersonates real individuals upon complaint. Personality rights are recognized under common law and are being codified in the digital context.

South Korea
  • Framework: AI Basic Act (2025) + Content Industry Promotion Act amendments
  • Status: In effect / partially in effect
  • Key provisions: Requires disclosure of AI-generated content in entertainment. Performers' digital likeness rights are explicitly protected. Consent is required for AI training on an individual's voice, face, or mannerisms.

Japan
  • Framework: AI Guidelines + Copyright Law review (ongoing)
  • Status: Guidelines published; legislation under review
  • Key provisions: The current copyright framework doesn't explicitly cover AI-generated likenesses. Guidelines recommend consent for commercial use of identifiable individuals. Active legislative discussions on performer digital rights are underway.

The Pattern Across Jurisdictions

Despite different legal traditions and regulatory approaches, a clear consensus is forming around three principles:

  1. Consent is non-negotiable. Every major regulatory framework either requires or recommends explicit, informed consent before an individual's likeness can be used to generate AI content. The days of scraping public images and generating digital replicas without permission are numbered.
  2. Transparency is mandatory. AI-generated content featuring real or realistic human likenesses must be labeled as such. Audiences have a right to know when they're watching a digital replica rather than a human performance.
  3. Enforcement is lagging. Most frameworks are either newly enacted, partially implemented, or still at the recommendation stage. The technology is moving faster than the law. Companies that push boundaries -- as iQIYI did -- are essentially testing where the enforcement line actually is.

China's Existing Legal Framework

Notably, China already has laws that should have prevented the kind of confusion iQIYI created. Article 1019 of China's Civil Code explicitly protects portrait rights, prohibiting the use of a person's likeness without consent. The 2023 Deep Synthesis Provisions require consent for generating content depicting identifiable individuals. The 2023 Generative AI Measures add further requirements around content labeling and rights protection.

The legal framework exists. What's missing is the industry practice. iQIYI's announcement exposed the gap between what the law says and how companies are actually behaving when they see a competitive advantage in AI.

Cross-Border Complications

The global nature of streaming adds another layer of complexity. A production created using an AI-generated likeness in China could be distributed to audiences in the EU, US, India, and South Korea -- each with different regulatory requirements. A likeness that's legally usable in one jurisdiction may violate laws in another. Streaming platforms that operate internationally, as most major ones do, face a compliance patchwork that makes any "move fast and figure it out later" approach extremely risky.

This cross-border dimension is one reason why industry-wide standards matter more than unilateral corporate policies. An AI likeness framework that only works in one country isn't a solution -- it's a liability in every other market where the platform operates.

What This Means for AI Video Creators

Whether you're an independent filmmaker experimenting with AI tools, a content creator building a YouTube channel, or a production company exploring AI-augmented workflows, the iQIYI controversy carries practical lessons.

Consent Is the Foundation

Using someone's likeness without explicit authorization is becoming legally risky everywhere. This applies not just to celebrities but to any identifiable individual. If your AI-generated video features a recognizable person -- their face, their voice, their distinctive mannerisms -- you need documented consent. "They probably won't notice" or "it's just a short clip" are not legal strategies.

The Distinction Between Original Creation and Replication

There's an important distinction between two types of AI video creation:

  • Original creation: Generating new characters, scenes, and stories that don't replicate any real person's likeness. This is the safest and most legally straightforward use of AI video tools.
  • Likeness replication: Using AI to generate content featuring a real person's appearance or voice. This requires consent frameworks, licensing agreements, and compliance with applicable regulations.

The iQIYI controversy was entirely about the second category. The company wanted to build a marketplace for likeness replication but failed to secure the consent infrastructure before making the announcement. That's the cautionary tale.

Platform Policies Are Tightening

Beyond government regulation, platforms themselves are implementing stricter policies on AI-generated content featuring real people. YouTube, TikTok, Instagram, and major Chinese platforms including Douyin and Bilibili have all introduced or expanded rules around AI-generated likeness content in 2025-2026. Violating these policies can result in content removal, demonetization, or account suspension.

The Opportunity Is in Original Content

Here's the constructive takeaway: the explosion of AI video tools creates enormous opportunities for creators who focus on original content. AI-generated characters, worlds, and narratives that don't depend on replicating real people's likenesses face none of the consent, licensing, or regulatory complications. The creative space is wide open for original AI-generated storytelling.

Practical Checklist for AI Video Creators

If you're creating AI video content today, here are the questions to ask before publishing (the sketch after this list shows one way to encode them as an automated check):

  1. Does your content depict any identifiable real person? If yes, do you have explicit written consent for the specific use case?
  2. Does your AI tool's training data include real people's likenesses? Understand what your tools were trained on and the licensing implications.
  3. Where will your content be distributed? Check the AI content policies for each platform and the regulations in each geographic market.
  4. Is your content clearly labeled as AI-generated? Transparency labeling is becoming mandatory in most jurisdictions and is already required by most major platforms.
  5. Do you have documentation of your creative process? In case of disputes, being able to demonstrate that your content is original -- or that you had proper authorization -- protects you legally.
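
None of these checks require exotic tooling. As a thought experiment, here is a minimal sketch of the checklist encoded as a pre-publish gate in Python. Everything in it is hypothetical -- the ContentRecord fields, names, and checks are illustrative, not a real platform API or legal advice.

```python
from dataclasses import dataclass, field

@dataclass
class ContentRecord:
    """Hypothetical metadata for one AI-generated video (illustrative only)."""
    depicts_real_people: bool                                   # checklist item 1
    consent_docs: list[str] = field(default_factory=list)      # signed consent refs
    training_data_reviewed: bool = False                        # checklist item 2
    target_platforms: list[str] = field(default_factory=list)  # checklist item 3
    policies_checked: list[str] = field(default_factory=list)  # platforms reviewed
    ai_label_applied: bool = False                              # checklist item 4
    process_docs: list[str] = field(default_factory=list)      # checklist item 5

def pre_publish_issues(rec: ContentRecord) -> list[str]:
    """Return unresolved checklist items; an empty list means clear to publish."""
    issues = []
    if rec.depicts_real_people and not rec.consent_docs:
        issues.append("Identifiable person depicted without documented consent.")
    if not rec.training_data_reviewed:
        issues.append("Tool training-data licensing not reviewed.")
    unchecked = set(rec.target_platforms) - set(rec.policies_checked)
    if unchecked:
        issues.append(f"AI-content policies not checked for: {sorted(unchecked)}")
    if not rec.ai_label_applied:
        issues.append("Content is not labeled as AI-generated.")
    if not rec.process_docs:
        issues.append("No documentation of the creative process or authorization.")
    return issues
```

A team could run a check like this as a publishing gate and block release while the returned list is non-empty, turning the checklist from a memory exercise into an enforced workflow step.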

The Industry Needs Frameworks, Not Unilateral Announcements

One of the central criticisms of iQIYI's approach was that it was unilateral. A single platform decided to announce an AI actor database without first building industry consensus on how such a system should work.

What a Responsible Framework Looks Like

Based on emerging best practices from SAG-AFTRA agreements, EU regulatory guidance, and industry proposals, a responsible AI-actor collaboration framework would include the following (a minimal data-model sketch follows the list):

  • Granular consent: Actors approve each specific use of their likeness, not a blanket authorization. Consent for a 30-second commercial is different from consent for a feature-length film.
  • Compensation structures: Clear payment models for AI use of an actor's likeness, potentially including per-project fees, royalties, or ongoing licensing payments.
  • Creative approval: Actors have the right to review and approve how their digital replica is used, including the content, context, and brand associations of any AI-generated performance.
  • Revocation rights: Actors can withdraw consent and require removal of their likeness from the database and any generated content.
  • Transparency to audiences: AI-generated performances are clearly labeled so audiences know when they're watching a digital replica.
  • Data security: Biometric data (face scans, voice prints, motion capture data) is stored securely with clear policies on access, retention, and deletion.
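
To make the granular-consent and revocation bullets concrete, here is one way those principles could be modeled in code. This is a hypothetical sketch, not a published standard: the types and fields are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum, auto

class UseStatus(Enum):
    PROPOSED = auto()   # creator requested; actor has not yet decided
    APPROVED = auto()   # actor approved this specific use
    REVOKED = auto()    # actor withdrew consent

@dataclass
class LikenessGrant:
    """One grant covers one specific use -- never a blanket authorization."""
    actor_id: str
    production_id: str       # the single project this grant applies to
    permitted_use: str       # e.g. "30-second commercial", not "any production"
    expires: date            # grants are time-bound, not perpetual
    compensation_terms: str  # per-project fee, royalty, or licensing model
    status: UseStatus = UseStatus.PROPOSED

    def is_usable(self, today: date) -> bool:
        # A replica may be generated only while the grant is approved and unexpired.
        return self.status is UseStatus.APPROVED and today <= self.expires
```

The key design choice is that consent lives at the level of a single production and use case: revoking a grant or letting it expire automatically invalidates future generation, without renegotiating a blanket contract.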

Who Should Build These Frameworks

The answer is not individual streaming platforms acting alone. Effective frameworks need to be developed collaboratively by:

  • Performers' unions and guilds
  • Production companies and studios
  • Streaming platforms
  • AI technology providers
  • Regulators and legal experts

SAG-AFTRA's 2023 agreement is one model. South Korea's approach of embedding performer digital rights into existing content industry law is another. What doesn't work is a single company making announcements that affect thousands of performers without their input.

The Consent Infrastructure Gap

One practical challenge that often gets overlooked in these discussions is the absence of technical infrastructure for managing AI likeness consent at scale. Even if every stakeholder agrees on principles, the industry currently lacks standardized systems for:

  • Consent verification: How does a production team verify that a specific actor has consented to a specific use of their likeness? Paper contracts don't scale in an environment where AI can generate hundreds of productions per year.
  • Usage tracking: How does an actor know where and how their digital replica is being used? Without monitoring systems, consent is theoretical even when granted.
  • Revocation enforcement: If an actor revokes consent, how is that revocation propagated across all platforms and productions? Content already generated and distributed can't be easily recalled.
  • Compensation tracking: If an actor is owed royalties for AI use of their likeness, how are those uses counted and payments calculated across multiple platforms and territories?

Building this infrastructure is a non-trivial engineering and governance challenge. It's also a business opportunity: the companies that build reliable consent management platforms for AI-generated entertainment will play a critical role in the industry's future. Think of it as the equivalent of content licensing infrastructure that emerged for music streaming -- ASCAP, BMI, and similar organizations didn't exist before they were needed, but once the technology demanded them, they became essential plumbing for the entire industry.

The AI entertainment industry needs its equivalent: systems that make consent verifiable, usage trackable, compensation automatic, and revocation enforceable. Without this infrastructure, every AI actor database -- not just iQIYI's -- will face the same fundamental trust deficit that turned a product launch into a crisis.
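
To make those four gaps concrete, here is a minimal sketch of the registry layer such plumbing implies, reusing the hypothetical LikenessGrant and UseStatus types from the framework sketch above. The class and method names are invented for illustration; a real system would need authentication, audit trails, and cross-platform federation on top of this.

```python
from datetime import date, datetime

class ConsentRegistry:
    """Illustrative central registry addressing the four gaps listed above."""

    def __init__(self):
        self.grants = {}     # grant_id -> LikenessGrant (from the sketch above)
        self.usage_log = []  # (grant_id, platform, timestamp)

    def verify(self, grant_id: str, production_id: str) -> bool:
        """Consent verification: is this exact use authorized right now?"""
        g = self.grants.get(grant_id)
        return (g is not None
                and g.production_id == production_id
                and g.is_usable(date.today()))

    def record_use(self, grant_id: str, platform: str) -> None:
        """Usage tracking: log every generation event so the actor can audit it."""
        self.usage_log.append((grant_id, platform, datetime.now()))

    def revoke(self, grant_id: str) -> None:
        """Revocation: future verify() calls fail immediately. Recalling content
        already generated and distributed remains the unsolved hard part."""
        self.grants[grant_id].status = UseStatus.REVOKED

    def royalties_owed(self, grant_id: str, rate_per_use: float) -> float:
        """Compensation tracking: count logged uses and apply the agreed rate."""
        return rate_per_use * sum(1 for gid, _, _ in self.usage_log if gid == grant_id)
```

Note that platforms must call verify() before each generation rather than caching the result; that is what makes revocation enforceable for future uses, and it is also why content already in distribution is the genuinely hard case.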

Historical Context: Technology vs. Performers

The tension between new technology and performer rights is not new. Understanding the historical pattern provides perspective on where the current AI debate is heading.

Sound Film (1920s-1930s)

The transition from silent film to "talkies" displaced an entire generation of actors whose talents didn't translate to the new medium. Studios held the power and performers had little recourse. It took decades for labor organizing to establish basic protections.

Television (1950s)

When television emerged, film studios initially saw it as a threat. Actors who appeared on TV were sometimes blacklisted from film work. Eventually, new compensation structures and union agreements brought order to the relationship between the two mediums.

Digital Effects (1990s-2000s)

The rise of CGI raised early questions about digital performers. When Fred Astaire's likeness was digitally composited into Dirt Devil vacuum commercials in 1997, it sparked debates about posthumous digital rights that continue to this day. The 2016 recreation of Peter Cushing's likeness in "Rogue One" brought these questions to mainstream attention.

Deepfakes (2017-Present)

The emergence of deepfake technology made face-swapping accessible to anyone with a computer. This democratization of likeness manipulation -- initially used primarily for non-consensual purposes -- accelerated the push for digital replica legislation worldwide.

AI Voice Cloning Controversies (2024-2025)

Before AI video likenesses became the flashpoint, AI voice cloning sparked its own wave of controversies. Multiple voice actors discovered their voices had been used to train AI systems without consent. Scarlett Johansson's public dispute with OpenAI over a voice that sounded similar to hers brought the issue to mainstream attention. These voice cloning cases established important legal and ethical precedents that directly inform the current debate over full visual likeness replication.

The Pattern

Every major media technology shift follows a similar arc: new technology emerges, industry actors (in both senses of the word) scramble for advantage, abuses occur, public backlash builds, and eventually regulatory and contractual frameworks establish new norms. AI-generated digital replicas are currently in the "scramble and backlash" phase. The frameworks are coming, but they aren't fully here yet.

The difference this time is speed. Previous technology transitions played out over decades. Sound film displaced silent film over roughly 10 years. Television took 20 years to reshape the film industry's business model. AI is compressing that timeline dramatically. The technology that seemed experimental in 2023 is production-ready in 2026. That compression means the window for establishing responsible frameworks is shorter than it was for any previous media transition.

What History Tells Us Will Happen

If past patterns hold, the current period of controversy and backlash will lead to three outcomes:

  1. New labor agreements: Performers' unions worldwide will negotiate AI-specific protections, following SAG-AFTRA's lead. China's performing arts associations are already signaling movement in this direction.
  2. Regulatory codification: The principles currently expressed as recommendations and guidelines will become binding law. The EU is furthest along; others will follow.
  3. Industry standardization: Technical standards for consent management, likeness verification, and AI content labeling will emerge, likely through a combination of industry consortia and regulatory mandate.

The question is not whether these frameworks will be established, but how much damage will occur before they are. The iQIYI controversy is a data point suggesting that the damage window is closing faster than some companies anticipated.

Genra's Perspective

At Genra, we've been watching the iQIYI situation closely because it touches on questions fundamental to our industry.

Our approach to AI video has always focused on original content creation -- generating new visuals, characters, voices, and stories rather than replicating real people's likenesses without consent. We believe that's both the ethical path and the commercially sustainable one. The iQIYI controversy demonstrates why: building a business on other people's likenesses without rock-solid consent frameworks creates existential legal and reputational risk.

The future of AI video is not about replacing human creators or using their likenesses as raw material. It's about giving creators -- whether they're independent filmmakers, marketing teams, or entertainment studios -- tools to bring their original visions to life faster and more affordably. That's a future worth building toward.

What to Watch Next

The iQIYI controversy is far from over, and its ripple effects will shape the AI entertainment landscape for years. Here are the developments to monitor in the coming months.

Regulatory Response in China

The Cyberspace Administration of China (CAC) and the Ministry of Culture and Tourism are expected to weigh in. Given China's track record of swift regulatory action in the technology sector -- from gaming restrictions to algorithmic recommendation rules -- it would not be surprising to see new guidance specifically addressing AI use of performer likenesses in entertainment production. Any such guidance would likely set precedents that influence broader Asian markets.

Industry Association Standards

The China Performing Arts Association's initial statement was a signal, not a conclusion. Industry associations in China, South Korea, Japan, and India are likely developing position papers and proposed standards for AI-actor collaboration. These standards, while not legally binding, often form the basis for subsequent regulation and establish the norms that responsible companies follow voluntarily.

Other Platforms' Responses

iQIYI's competitors -- Tencent Video, Youku, and Bilibili in China, plus Netflix, Amazon, and Disney+ globally -- are all watching closely. Each has its own AI entertainment ambitions. How they position themselves in response to the iQIYI backlash will signal whether the industry learns from this episode or repeats the same mistakes with better PR.

Technology Development

AI video generation technology will continue advancing regardless of the controversy. The question is whether that advancement happens within a consent framework or outside of one. Companies developing AI video tools face a choice: build consent management into the technology from the ground up, or treat it as an afterthought that gets bolted on after the backlash arrives.

Public Sentiment

The Weibo backlash against iQIYI reflects a broader public unease with AI's encroachment on human creative work. This sentiment isn't limited to China. Surveys across major markets consistently show that while consumers are interested in AI-generated content, they have strong negative reactions to AI being used to replace human performers without consent. Companies that ignore this sentiment risk the kind of reputational damage that iQIYI is now managing.

The lesson is clear: in the AI entertainment space, moving fast and breaking things will break your brand before it breaks through the market. The next 12-18 months will determine whether the industry self-corrects or requires external force to establish responsible norms. The iQIYI controversy has made the stakes unmistakably clear.

Key Takeaways

  • iQIYI's April 20, 2026 announcement of an AI Celebrity Database claiming 100+ actors' authorization triggered immediate public backlash when multiple actors denied involvement, making "iQIYI went nuts" the #1 trending topic on Weibo.
  • The company's subsequent clarification reframed the database as a "connection platform" rather than a likeness licensing system, but the gap between the original announcement and the damage control raised questions about the company's actual intentions.
  • CEO Gong Yu's suggestion that human-made entertainment could become "intangible cultural heritage" was widely criticized as dismissive of human creative work and tone-deaf to industry anxieties about AI displacement.
  • Global regulation is converging on three principles: explicit consent for AI use of likenesses, mandatory transparency labeling, and clear compensation frameworks. The US, EU, China, India, South Korea, and Japan are all moving in this direction, though at different speeds.
  • China already has legal protections for portrait and voice rights under its Civil Code and Deep Synthesis Regulations. The iQIYI controversy exposed the gap between existing law and actual industry practice.
  • For AI video creators, the safest and most sustainable approach is original content creation -- generating new characters and stories rather than replicating real people's likenesses. Likeness replication requires robust consent frameworks that most of the industry hasn't built yet.
  • The entertainment industry needs collaborative frameworks developed by performers, studios, platforms, technology providers, and regulators together -- not unilateral announcements by individual companies.
  • The technical infrastructure for consent management at scale -- including verification, usage tracking, revocation enforcement, and compensation calculation -- does not yet exist. Building it is both a necessity and a significant business opportunity.
  • Historical precedent from sound film, television, CGI, and deepfakes suggests that the current "scramble and backlash" phase will lead to new labor agreements, regulatory codification, and industry standardization. The question is how much damage occurs before those frameworks are in place.
  • Fan communities played a critical accountability role in the iQIYI case, functioning as an enforcement mechanism before regulators or unions could act. Public sentiment against unauthorized AI likeness use is strong and growing across all major markets.

The iQIYI AI Celebrity Database controversy will be remembered as a turning point -- the moment when the AI entertainment industry learned, publicly and painfully, that technology capability without consent infrastructure is a liability, not an asset. The companies and creators that internalize that lesson now will be best positioned for the regulatory and cultural landscape that's rapidly taking shape.

Frequently Asked Questions

What is iQIYI's AI Celebrity Database?

iQIYI announced on April 20, 2026 what it called an "AI Celebrity Database" as part of its Nadou Pro AI production platform. The company claimed over 100 actors had authorized the use of their likenesses, voices, and biometric data for AI-generated film and television productions. After backlash from actors who denied involvement, iQIYI clarified that the database was intended as a connection platform between AI creators and actors, not a system for generating content without actor participation in specific projects.

Why did actors deny being part of iQIYI's AI database?

Multiple Chinese actors and their management teams publicly stated they had not authorized the broad AI usage that iQIYI described on stage. Some said they were never contacted. Others indicated they had participated in preliminary discussions but had not signed agreements for the kind of comprehensive AI likeness licensing that iQIYI's announcement implied. The discrepancy between the company's public claims and actors' actual participation was the primary trigger for the backlash.

Is it legal to use an actor's likeness for AI-generated content in China?

China's Civil Code (Article 1019) protects portrait rights and prohibits the use of a person's likeness without consent. The 2023 Deep Synthesis Provisions specifically require consent for generating content depicting identifiable individuals. The 2023 Generative AI Measures add requirements for content labeling and rights protection. Using an actor's likeness for AI-generated content without explicit, informed consent violates existing Chinese law.

How does the iQIYI controversy compare to the SAG-AFTRA strike?

The 2023 SAG-AFTRA strike in Hollywood addressed many of the same underlying issues: actor consent for AI use of their likenesses, compensation for digital replica performances, and protections against being replaced by AI-generated versions of themselves. The SAG-AFTRA agreement established contractual protections within the US entertainment industry. The iQIYI controversy shows that the same tensions exist in China's entertainment industry, but without equivalent labor agreements in place.

What regulations protect performers from unauthorized AI likeness use?

Protections vary by jurisdiction. The US White House published a National AI Policy Framework in March 2026 recommending federal digital replica protections, while states like California, New York, and Tennessee have existing or pending laws. The EU AI Act's transparency requirements take effect in August 2026. China has Civil Code portrait rights protections plus deep synthesis and generative AI regulations. India's IT Rules 2026 require AI content labeling. South Korea's AI Basic Act explicitly protects performers' digital likeness rights. Japan is currently reviewing its copyright and performer rights frameworks.

What did iQIYI's CEO mean by "intangible cultural heritage"?

CEO Gong Yu suggested that human-made entertainment content could eventually be considered "intangible cultural heritage," a term typically used in China (and internationally via UNESCO) for traditional cultural practices that are preserved because they're no longer part of mainstream contemporary life. Applied to human acting and filmmaking, the comment implied that traditional human performances might become a relic of the past as AI-generated content becomes dominant. The remark was widely criticized as dismissive and disrespectful to performers and creative professionals.

Can AI video creators safely use AI tools without risking likeness violations?

Yes, by focusing on original content creation. AI video tools that generate new characters, scenes, and narratives without replicating any real person's likeness avoid the consent, licensing, and regulatory complications entirely. When a project does require a real person's likeness, creators should obtain explicit written consent, comply with applicable local regulations, and maintain clear documentation of authorization. The simplest legal and ethical path is to create original content rather than replicate existing people.

What happens next for AI actor databases and digital replica licensing?

The industry is moving toward structured, consent-based frameworks. Expect to see more formal agreements between performers' organizations and production platforms, clearer regulatory enforcement of existing likeness protection laws, and the emergence of third-party verification services that certify actor consent for AI usage. The iQIYI controversy will likely accelerate these developments in China, much as the SAG-AFTRA strike accelerated them in the United States. The companies that build genuine consent infrastructure first will have a significant competitive advantage as regulations tighten globally.
