<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: LunarDrift</title>
    <description>The latest articles on DEV Community by LunarDrift (@susiewang).</description>
    <link>https://dev.to/susiewang</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2034078%2Fc6dd3daa-e04d-4969-9d33-a245555b455f.png</url>
      <title>DEV Community: LunarDrift</title>
      <link>https://dev.to/susiewang</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/susiewang"/>
    <language>en</language>
    <item>
      <title>How I Added AI Auto-Reply to a Free Website Chat Widget in OpenClaw</title>
      <dc:creator>LunarDrift</dc:creator>
      <pubDate>Thu, 19 Mar 2026 07:16:34 +0000</pubDate>
      <link>https://dev.to/tencent_rtc/how-i-added-ai-auto-reply-to-a-free-website-chat-widget-without-writing-a-single-line-of-code-h55</link>
      <guid>https://dev.to/tencent_rtc/how-i-added-ai-auto-reply-to-a-free-website-chat-widget-without-writing-a-single-line-of-code-h55</guid>
      <description>&lt;p&gt;I'm a non-technical product person. Our team built Knocket, a 100% free website contact widget. It works great for live chat, but it had no AI features. I wanted to add smart auto-reply — so I used &lt;a href="https://openclaw.ai/" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; and vibe coding to build it myself. This is the full story: what I did, what broke, and what I shipped.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Knocket?
&lt;/h2&gt;

&lt;p&gt;Knocket is a free-forever website contact widget that combines live chat, offline forms, social links, and meeting scheduling in one embed. You paste one line of code before your &lt;code&gt;&amp;lt;/body&amp;gt;&lt;/code&gt; tag and your site gets a branded chat bubble in the bottom-right corner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83kqqvk5sc9zcho0p8o2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83kqqvk5sc9zcho0p8o2.png" alt=" " width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;1-minute deploy&lt;/strong&gt; — copy one &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; tag, paste it into your HTML, done&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Live chat&lt;/strong&gt; — visitors message you on your site, you reply from the Inbox dashboard&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Offline forms&lt;/strong&gt; — automatically collect visitor contact info when you're away&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Social link aggregation&lt;/strong&gt; — WhatsApp, Instagram, Telegram in one tap&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero-code customization&lt;/strong&gt; — colors, icons, greetings, all configurable from the dashboard&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's built on Tencent Cloud IM infrastructure. Our engineering team built the console, SDK, and frontend widget from scratch. And it's completely free — no $39/month Intercom bill, no $20/month LiveChat subscription.&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://trtc.io/solutions/knocket" rel="noopener noreferrer"&gt;Try Knocket for free&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Was Missing? I Wanted AI Auto-Reply
&lt;/h2&gt;

&lt;p&gt;After launch, customers started embedding Knocket on their websites and using the Inbox to handle visitor messages. But there was a problem: &lt;strong&gt;Knocket has no AI features.&lt;/strong&gt; It's a pure communication tool — when a customer sends a message, you have to be watching the dashboard to reply.&lt;/p&gt;

&lt;p&gt;Even with working hours configured, no one sits in front of the Inbox console all day.&lt;/p&gt;

&lt;p&gt;As a non-technical product person, I thought: &lt;strong&gt;can I add an AI customer service agent to my own product?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Mature customer service platforms offer this — for $30–100/month. Building it as a proper feature would take significant engineering time. But what if I could use &lt;a href="https://openclaw.ai/" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; to control the browser and do it for me?&lt;/p&gt;

&lt;p&gt;The goal:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Customer sends a message → AI detects it → auto-replies (or notifies me first so I can decide)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So I started building. Here's what actually happened.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 1: Browser Automation — Quick and Dirty (v1)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Idea
&lt;/h3&gt;

&lt;p&gt;I used &lt;a href="https://openclaw.ai/" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt;, an AI coding assistant with a browser automation skill that could control Chrome via the command line — executing JavaScript, filling forms, and clicking buttons.&lt;/p&gt;

&lt;p&gt;The initial approach was straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Every 60 seconds → read the Inbox conversation list
→ Detect new message → AI generates reply
→ Fill the textarea → click Send
→ Notify me via Telegram
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AI assistant wrote a ~300-line shell script. It used &lt;code&gt;browser eval&lt;/code&gt; to read page data and &lt;code&gt;browser fill&lt;/code&gt; to type the reply.&lt;/p&gt;

&lt;h3&gt;
  
  
  It Worked! ...For About 5 Minutes
&lt;/h3&gt;

&lt;p&gt;The AI could read messages and generate replies. I was excited for approximately 5 minutes.&lt;/p&gt;

&lt;p&gt;Then everything broke.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctdzvlhf192ouxjt9mbv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctdzvlhf192ouxjt9mbv.png" alt=" " width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The browser automation's fill and click commands weren't compatible with Knocket Inbox's React single-page app. &lt;strong&gt;The worst incident: I meant to reply to 1 customer, but the same reply got sent to all 3 active conversations.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On top of that, every browser operation stole window focus. I couldn't use my browser for anything else while the script was running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson learned:&lt;/strong&gt; CLI-wrapped browser automation isn't precise enough for complex SPAs. You can't control the timing of each step.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 2: Raw CDP + Background Execution (v2)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Complete Architecture Rewrite
&lt;/h3&gt;

&lt;p&gt;I threw out the browser automation approach entirely. The new plan: use OpenClaw to connect directly to Chrome over the Chrome DevTools Protocol (CDP) using Python's websockets library.&lt;/p&gt;

&lt;p&gt;Why CDP was better:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Runs completely in the background&lt;/strong&gt; — no popups, no focus stealing, no interference&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Every step returns a precise JSON response&lt;/strong&gt; — you know if it succeeded or failed&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Full control over reconnection, retries, and timeouts&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
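&lt;p&gt;The connection itself is simple once Chrome is started with remote debugging enabled. Here's a minimal sketch of the idea in Python; the endpoint, tab id, and helper names are illustrative, not the script's actual code:&lt;/p&gt;

```python
# Minimal CDP sketch, assuming Chrome was launched with
# --remote-debugging-port=9222. Every command is JSON with an id,
# and every response comes back keyed by that id.
import json

def cdp_command(method, params=None, msg_id=1):
    """Build one CDP message as a JSON string."""
    return json.dumps({"id": msg_id, "method": method, "params": params or {}})

# Example: evaluate JavaScript in the page to read data back.
msg = cdp_command("Runtime.evaluate",
                  {"expression": "document.title", "returnByValue": True})

# Sending it needs an open DevTools websocket, e.g. with the
# websockets package (not run here; TAB_ID comes from
# http://localhost:9222/json):
#
#   import asyncio, websockets
#   async def run():
#       async with websockets.connect("ws://localhost:9222/devtools/page/TAB_ID") as ws:
#           await ws.send(msg)
#           print(json.loads(await ws.recv()))
#   asyncio.run(run())
```

&lt;p&gt;Because each response carries the matching &lt;code&gt;id&lt;/code&gt;, the script always knows which step succeeded or failed.&lt;/p&gt;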

&lt;p&gt;The new flow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Customer sends message → script detects it in background
→ AI auto-replies → CDP sends it silently → Telegram notifies you
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0siz6uno8wl4z3yy97c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0siz6uno8wl4z3yy97c.png" alt=" " width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Stable, But I Wanted More
&lt;/h3&gt;

&lt;p&gt;No more accidental cross-sends. Silent background operation. But I realized pure auto-reply wasn't enough.&lt;/p&gt;

&lt;p&gt;For important customers or complex questions, &lt;strong&gt;I still wanted to review before responding.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A one-way webhook (notify me what happened) wasn't enough. I needed two-way communication: the bot tells me what's happening, I tell the bot how to respond.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson learned:&lt;/strong&gt; One-way notifications work for monitoring, not for decision-making. "Human-first, AI-fallback" requires a bidirectional bot.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 3: Telegram Bot + Human-in-the-Loop (v3)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why Telegram?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Bot API is completely free and full-featured&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bidirectional:&lt;/strong&gt; bot notifies me, I reply to the bot, bot forwards to the customer&lt;/li&gt;
&lt;li&gt;Works on mobile — respond from anywhere&lt;/li&gt;
&lt;li&gt;No admin approval needed — unlike enterprise messaging platforms&lt;/li&gt;
&lt;/ul&gt;
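&lt;p&gt;The Bot API is plain HTTPS, which is what makes the bidirectional loop easy. Here's a hedged sketch of the two calls involved (the payload shape and helper names are illustrative; the token and chat id are placeholders):&lt;/p&gt;

```python
# Telegram Bot API sketch using only the standard library. sendMessage
# pushes the notification; getUpdates polls for my reply. The token and
# chat_id here are placeholders, not real credentials.
import json
import urllib.request

API = "https://api.telegram.org/bot{token}/{method}"

def build_notify(chat_id, customer, text):
    """Payload for the 'customer said X' notification."""
    return {"chat_id": chat_id,
            "text": f"💬 {customer}: {text}\nReply here to answer them."}

def call(token, method, payload):
    """POST one Bot API call (needs network, so not run here)."""
    req = urllib.request.Request(
        API.format(token=token, method=method),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_notify(12345, "Alice", "Do you ship to Canada?")
# call(token, "sendMessage", payload)       # notify me on my phone
# call(token, "getUpdates", {"offset": 0})  # poll for my answer
```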

&lt;h3&gt;
  
  
  The v3 Flow
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Customer sends message
  → Script detects it in background
  → Telegram notifies you: what the customer said + conversation context
  → You reply on your phone (e.g., "Tell them we ship in 3 days")
    → Script sends your reply to the customer → Telegram confirms "✅ Sent"
  → 5 minutes with no reply from you?
    → AI auto-replies as fallback → Telegram tells you "🤖 AI auto-replied"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Human-in-the-loop.&lt;/strong&gt; Important conversations get my attention. Everything else gets handled by AI.&lt;/p&gt;
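&lt;p&gt;The routing rule at the heart of v3 fits in a few lines. A toy version, where the 5-minute window matches the flow above and everything else is illustrative:&lt;/p&gt;

```python
# "Human-first, AI-fallback" in miniature: a human reply always wins;
# after the window expires with no reply, the AI takes over.
HUMAN_WINDOW_SECS = 5 * 60

def choose_responder(seconds_since_notify, human_replied):
    """Return who should answer this message right now."""
    if human_replied:
        return "human"
    if seconds_since_notify >= HUMAN_WINDOW_SECS:
        return "ai"
    return "wait"
```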

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqz7hkrn4loorzmjudim.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqz7hkrn4loorzmjudim.png" alt=" " width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  And Then... It Broke Again at the Last Step
&lt;/h3&gt;

&lt;p&gt;Telegram notifications worked perfectly. AI reply generation worked perfectly. The failure point: &lt;strong&gt;actually sending the reply into Knocket Inbox.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It looks like three simple steps: switch to the right conversation → type the reply → click Send. In practice, I hit every possible wall.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Most Painful Part: CDP vs React — A Horror Story
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Problem 1: Background Tab Can't Focus
&lt;/h3&gt;

&lt;p&gt;Chrome has security restrictions on background tabs. JavaScript &lt;code&gt;el.focus()&lt;/code&gt; silently fails — the textarea never gets focus, so you can't type into it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Use the protocol-level CDP &lt;code&gt;DOM.focus&lt;/code&gt; command to bypass the JS security sandbox.&lt;/p&gt;
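&lt;p&gt;In CDP terms the fix is a three-command sequence, sketched below. The selector is a guess at what the real script targets, and a real client threads the &lt;code&gt;nodeId&lt;/code&gt; from each response into the next call:&lt;/p&gt;

```python
# Protocol-level focus: DOM.getDocument, then DOM.querySelector, then
# DOM.focus. Unlike JS el.focus(), DOM.focus works in a background tab.
def focus_sequence(selector="textarea"):
    """The three CDP calls; nodeId 0 is a placeholder that a real
    client replaces with the nodeId from the previous response."""
    return [
        {"id": 1, "method": "DOM.getDocument", "params": {}},
        {"id": 2, "method": "DOM.querySelector",
         "params": {"nodeId": 0, "selector": selector}},
        {"id": 3, "method": "DOM.focus", "params": {"nodeId": 0}},
    ]
```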

&lt;h3&gt;
  
  
  Problem 2: React Controlled Components Ignore CDP Input
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Input.insertText&lt;/code&gt; writes text that's visible on screen, but JavaScript &lt;code&gt;el.value&lt;/code&gt; returns empty. React's internal &lt;code&gt;_valueTracker&lt;/code&gt; mechanism doesn't detect CDP-level input.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Stop relying on &lt;code&gt;.value&lt;/code&gt; for verification. If focus succeeded and &lt;code&gt;insertText&lt;/code&gt; didn't error, treat it as successful.&lt;/p&gt;
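&lt;p&gt;In other words: send the text once and trust the protocol, not the DOM. A sketch of that logic, with illustrative helper names:&lt;/p&gt;

```python
# Optimistic write: one Input.insertText command, with success judged by
# the CDP response alone, never by reading el.value back (React's
# _valueTracker hides the text from JS reads).
def insert_text_command(text):
    return {"id": 1, "method": "Input.insertText", "params": {"text": text}}

def write_reply(send, text):
    """send() performs one CDP call and returns its JSON response."""
    resp = send(insert_text_command(text))
    return "error" not in resp  # no protocol error: assume the write landed
```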

&lt;h3&gt;
  
  
  Problem 3: Text Duplicated Multiple Times
&lt;/h3&gt;

&lt;p&gt;I built 4 different input methods as fallbacks. Method 1 actually worked, but since &lt;code&gt;.value&lt;/code&gt; returned empty, the script thought it failed and ran methods 2, 3, and 4 — each writing the text again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Use only one method. No fallbacks. The write probably succeeded — you just can't read it back.&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem 4: Send Button Permanently Disabled
&lt;/h3&gt;

&lt;p&gt;React didn't know the textarea had content, so the Send button stayed disabled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Press Enter instead. If the textarea has focus, Enter sends the message.&lt;/p&gt;
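&lt;p&gt;The Enter press is two CDP events rather than a button click. A sketch; the exact parameters the real script uses may differ:&lt;/p&gt;

```python
# Enter-to-send: a keyDown/keyUp pair via Input.dispatchKeyEvent, so the
# app's own key handler submits the message even though the Send button
# still believes the textarea is empty.
def enter_key_events():
    base = {"key": "Enter", "code": "Enter",
            "windowsVirtualKeyCode": 13, "nativeVirtualKeyCode": 13}
    return [
        {"method": "Input.dispatchKeyEvent",
         "params": {**base, "type": "keyDown", "text": "\r"}},
        {"method": "Input.dispatchKeyEvent",
         "params": {**base, "type": "keyUp"}},
    ]
```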

&lt;h3&gt;
  
  
  Problem 5: Conversation Index Shifted
&lt;/h3&gt;

&lt;p&gt;During the 5-minute human reply window, other conversations got added or removed. The original list index no longer pointed to the right conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Look up conversations by name instead of index. Re-search before every send.&lt;/p&gt;
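&lt;p&gt;The name lookup is the one fix that's plain logic rather than CDP. A sketch, with an illustrative conversation shape:&lt;/p&gt;

```python
# Resolve the conversation by visitor name at send time instead of
# trusting a list index captured five minutes ago.
def find_conversation(conversations, name):
    """Return the first conversation matching the visitor name, else None."""
    for convo in conversations:
        if convo.get("name") == name:
            return convo
    return None

convos = [{"name": "Alice", "idx": 0}, {"name": "Bob", "idx": 1}]
# Even if Bob's position shifts between polls, the name still finds him.
```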

&lt;h3&gt;
  
  
  How Did I Solve All This?
&lt;/h3&gt;

&lt;p&gt;Honestly — by talking to OpenClaw, round after round. My workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run the script → it breaks&lt;/li&gt;
&lt;li&gt;Copy the logs and describe what happened: "Text duplicated twice, Send button never clicked"&lt;/li&gt;
&lt;li&gt;OpenClaw analyzes → rewrites the code → restart&lt;/li&gt;
&lt;li&gt;Run again → maybe breaks again → repeat&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The core skill for non-technical vibe coding isn't reading code. It's describing problems accurately.&lt;/strong&gt; "The text duplicated twice and the send button stayed disabled" is 100x more useful than "it doesn't work."&lt;/p&gt;

&lt;p&gt;After about 15 iterations, the entire pipeline finally worked end to end.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopdce0mmnmq3qtkexwu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopdce0mmnmq3qtkexwu6.png" alt=" " width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Shipped: A Reusable AI Agent Skill
&lt;/h2&gt;

&lt;p&gt;After surviving all the bugs, I packaged the solution into a reusable skill — &lt;strong&gt;knocket-inbox-agent&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's in the Package
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;knocket-inbox-agent/
├── SKILL.md                  # Skill description (AI reads this to help you set up)
├── scripts/
│   ├── auto_reply.py         # Mode A: full auto-reply + Telegram notification
│   ├── telegram_human.py     # Mode B: Telegram human-first (recommended)
│   └── setup.sh              # One-click initialization
├── assets/
│   └── config.template.toml  # Configuration template
└── references/
    ├── usage-guide.md         # Beginner-friendly setup guide
    └── customization.md       # AI reply style customization
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Two Modes
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Mode A: Full Auto&lt;/th&gt;
&lt;th&gt;Mode B: Human-First (Recommended)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Flow&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Customer message → AI replies immediately → Telegram notifies you&lt;/td&gt;
&lt;td&gt;Customer message → Telegram notifies you → you reply or don't → AI fallback&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best for&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High volume, standardized inquiries&lt;/td&gt;
&lt;td&gt;Important customers, situations needing human judgment&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  How to Use It
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Install the knocket-inbox-agent skill from &lt;a href="https://github.com/fangxinmoon/knocket-inbox-agent-EN" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Tell OpenClaw: "I want to set up auto-reply for Knocket Inbox"&lt;/li&gt;
&lt;li&gt;OpenClaw reads the skill and walks you through: pick a mode, configure environment, start monitoring&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;You don't need to write any code.&lt;/strong&gt; OpenClaw handles everything based on the skill instructions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  On Vibe Coding
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;It's not magic.&lt;/strong&gt; Don't expect "describe a feature and get perfect code." It's more like having a super-patient collaborator — OpenClaw lets you describe what's wrong, and it fixes it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Non-technical people can absolutely do this.&lt;/strong&gt; I don't understand CDP protocol or React internals. But I can describe problems clearly and identify key information in error logs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging is inevitable.&lt;/strong&gt; But with OpenClaw's help, the cost of debugging drops from "stuck for days" to "iterate for a few rounds."&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  On Adding AI to Your Own Product
&lt;/h3&gt;

&lt;p&gt;Knocket is a pure communication tool with zero AI features. But with this skill added, it becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;You're online&lt;/strong&gt; → you make the calls, AI assists&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You're asleep&lt;/strong&gt; → AI handles everything, Telegram notifies you on your phone&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No extra server needed.&lt;/strong&gt; No paid SaaS subscription.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One laptop + Chrome + one Python script.&lt;/strong&gt; That's the entire stack.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you've built your own product or tool, ask yourself: &lt;strong&gt;what feature is it missing? Can vibe coding help you add it?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>openclaw</category>
      <category>ai</category>
      <category>vibecoding</category>
      <category>programming</category>
    </item>
    <item>
      <title>5 Real-Time Tools &amp; Projects Redefining Developer Workflows in 2025</title>
      <dc:creator>LunarDrift</dc:creator>
      <pubDate>Tue, 19 Aug 2025 13:41:02 +0000</pubDate>
      <link>https://dev.to/tencent_rtc/5-real-time-tools-projects-redefining-developer-workflows-in-2025-56no</link>
      <guid>https://dev.to/tencent_rtc/5-real-time-tools-projects-redefining-developer-workflows-in-2025-56no</guid>
      <description>&lt;p&gt;Tired of juggling Jira tickets, Slack threads, and Zoom links? We all are.&lt;/p&gt;

&lt;p&gt;Studies show that a single context switch can cost a developer up to 15 minutes of focus. Imagine how much more efficient we could be if explaining a complex UI bug meant a 5-minute live pair programming session instead of a long thread of text and screenshots.&lt;/p&gt;

&lt;p&gt;That's why developers are increasingly shifting towards &lt;strong&gt;synchronous, real-time collaboration&lt;/strong&gt;, embedding communication directly into their workflows.&lt;/p&gt;

&lt;p&gt;Here are 5 top-tier tools and open-source projects that will completely change how you think about teamwork.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;The Evolution of Collaboration: From Asynchronous to Synchronous&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In the past, we relied on async tools like Jira and Trello, which are great for tracking tasks. But when it comes to complex problems and brainstorming, the inefficiencies and delays of async communication become a bottleneck.&lt;/p&gt;

&lt;p&gt;The future of collaboration requires a balance between the structure of async and the immediacy of sync, defined by these key traits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Contextual Integration:&lt;/strong&gt; Collaboration features should be embedded in your dev environment, not as separate apps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low-Latency Interaction:&lt;/strong&gt; Syncing audio, video, cursors, and whiteboards must be seamless and instant, as if you're in the same room.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developer-First Design:&lt;/strong&gt; Tools must respect developer habits and minimize interruptions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security &amp;amp; Reliability:&lt;/strong&gt; Code and conversations must be secure during transit.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;5 Tools &amp;amp; Projects Leading the Change&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;1. &lt;a href="https://visualstudio.microsoft.com/services/live-share/" rel="noopener noreferrer"&gt;VS Code Live Share&lt;/a&gt; - The IDE-Integrated Pair Programming Standard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Almost every developer has used or heard of Live Share. It's the perfect example of "contextual integration." Without leaving your familiar VS Code environment, you can share your code, terminal, and even debugging sessions with your team.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why it matters:&lt;/strong&gt; Live Share has proven the immense value of real-time collaborative coding. It has set the gold standard for any team looking to improve their code review and remote pair programming efficiency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivt7uvpt0f47a1pcbcyf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivt7uvpt0f47a1pcbcyf.png" alt=" " width="727" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. &lt;a href="https://replit.com/" rel="noopener noreferrer"&gt;Replit&lt;/a&gt; - The Multiplayer Cloud Development Environment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Replit moved the entire development environment to the cloud and natively supports a "multiplayer" mode. Team members can write, run, and debug code in the same project simultaneously, just like in Google Docs. This is incredibly valuable for rapid prototyping and team learning.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why it matters:&lt;/strong&gt; Replit represents the future of development—cloud-based environments with native collaboration. It eliminates discrepancies in local setups and ensures the entire team starts on the same page.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzldnwr89hcy4hp3on0m6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzldnwr89hcy4hp3on0m6.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. &lt;a href="https://www.tldraw.com/" rel="noopener noreferrer"&gt;tldraw&lt;/a&gt; - The Minimalist &amp;amp; Powerful Open-Source Whiteboard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Complex architectural designs and UI/UX brainstorming sessions need a good whiteboard. tldraw is a popular open-source project that offers an infinite canvas for real-time, multiplayer drawing, diagramming, and interaction. It can also be easily embedded into any web app.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why it matters:&lt;/strong&gt; tldraw proves that real-time collaboration extends beyond just code. Visual communication is just as critical, and its open-source nature makes it a go-to choice for integrating an interactive whiteboard into internal tools or ed-tech platforms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. &lt;a href="https://livekit.io/" rel="noopener noreferrer"&gt;LiveKit&lt;/a&gt; - Open-Source RTC Infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For teams that want to control their RTC (Real-Time Communication) stack from the ground up, LiveKit provides a complete, open-source infrastructure. It allows you to self-host the service and offers flexible APIs for building highly customized real-time applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why it matters:&lt;/strong&gt; With a very active open-source community, LiveKit is an excellent resource for learning how RTC technology works. It empowers developers to dive deep into the mechanics of real-time communication.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. &lt;a href="https://trtc.io/" rel="noopener noreferrer"&gt;Tencent RTC&lt;/a&gt; - The Real-Time Engine for Next-Gen Collaboration Tools&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ever wonder what technology powers the silky-smooth real-time sync in tools like VS Code Live Share or Figma? The answer is RTC. &lt;strong&gt;Tencent Real-Time Communication (RTC)&lt;/strong&gt; is a leader in this field. It isn't a standalone app but a powerful engine that enables developers to build their own real-time collaboration tools.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Key Features:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Platform SDKs:&lt;/strong&gt; Provides SDKs for Web, iOS, Android, Desktop, and Electron, allowing you to easily integrate real-time audio and video into any application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ultra-Low Latency:&lt;/strong&gt; Achieves an end-to-end latency of under 300ms globally, thanks to Tencent's worldwide edge nodes, making interactions feel truly in-person.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Packet Loss Resistance:&lt;/strong&gt; Maintains smooth audio and video communication even with up to 70% packet loss, gracefully handling poor network conditions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rich Functionality:&lt;/strong&gt; Goes beyond audio/video to offer real-time data synchronization, interactive whiteboards, and cloud recording to support complex collaborative scenarios.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Many top-tier internal DevOps platforms and developer tools choose Tencent RTC to provide their users with stable, high-quality real-time features, allowing them to focus entirely on their core business logic.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;How to Build an Effective Real-Time Collaboration Culture&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Great tools are just the beginning; you also need the right strategies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Set Clear Boundaries:&lt;/strong&gt; Define when to use async (e.g., daily updates) versus when to jump into a sync session (e.g., urgent bugs, architectural reviews) to avoid "real-time fatigue."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embrace Pair Programming:&lt;/strong&gt; Make it a regular practice, especially for complex modules or onboarding new members. It drastically improves code quality and knowledge sharing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Record and Review:&lt;/strong&gt; Use cloud recording to capture important technical discussions or debugging sessions, turning them into a valuable knowledge base for the team.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6dusa2e2j1ubxrvimhp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6dusa2e2j1ubxrvimhp.png" alt=" " width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;How Tencent RTC Can Empower Your Workflow&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Integrating Tencent RTC's capabilities into your workflow means you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Eliminate App Switching:&lt;/strong&gt; Launch HD audio/video calls directly from your CI/CD platform, project management tool, or internal documentation without jumping to a third-party app.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build Custom Tools:&lt;/strong&gt; Create a private, self-hosted pair programming or code review tool that perfectly fits your team's habits and keeps your code and conversations secure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improve Incident Response:&lt;/strong&gt; When a production incident occurs, stakeholders can instantly join a "war room" embedded directly in your monitoring dashboard, syncing information in real-time and drastically reducing Mean Time to Resolution (MTTR).&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Teams that adopt Tencent RTC typically report a &lt;strong&gt;50% reduction in context switching&lt;/strong&gt; and a &lt;strong&gt;35% decrease in the time it takes to solve complex problems.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Conclusion: The Future of Collaboration is Seamless Integration&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In 2025, the core of developer productivity is no longer just about optimizing individual coding speed but about elevating the entire team's collaborative efficiency.&lt;/p&gt;

&lt;p&gt;The most successful teams will be those that seamlessly integrate real-time collaboration into their culture and toolchain—whether by adopting finished tools like VS Code Live Share or by using a powerful engine like &lt;strong&gt;Tencent RTC&lt;/strong&gt; to build their own unique, customized platforms.&lt;/p&gt;

&lt;p&gt;Ready to transform your development workflow? Explore how &lt;strong&gt;&lt;a href="https://trtc.io/contact" rel="noopener noreferrer"&gt;Tencent RTC&lt;/a&gt;&lt;/strong&gt; can bring a smooth, in-person collaborative experience to your team, minimizing communication overhead and giving you more time to create. Your path to enhanced productivity starts here.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>devops</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Build a Conversational AI Fandom App</title>
      <dc:creator>LunarDrift</dc:creator>
      <pubDate>Mon, 02 Jun 2025 02:29:40 +0000</pubDate>
      <link>https://dev.to/susiewang/build-a-conversational-ai-fandom-app-465e</link>
      <guid>https://dev.to/susiewang/build-a-conversational-ai-fandom-app-465e</guid>
      <description>&lt;p&gt;Today’s fans demand more than just passive viewing. They seek deep immersion, real-time connection, and the opportunity to interact with their favorite idols in everyday life. Conversational AI opens the door to these possibilities, offering 24/7 interactive virtual personas, personalized companion modes, and seamless integration with physical merchandise. This advanced technology brings artists, influencers, and IP characters to life in a dynamic digital ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Evolution of Fan Platforms
&lt;/h2&gt;

&lt;p&gt;Traditional fan platforms relied primarily on text chats and simple subscription models, often lacking the depth of interaction and emotional connection fans now crave. As a result, many fans found little reason to return or engage on a deeper level.&lt;/p&gt;

&lt;h3&gt;
  
  
  Traditional Fan Engagement
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Limited Interactivity: Artists and influencers have limited time for live streaming or direct fan contact.&lt;/li&gt;
&lt;li&gt;Low User Stickiness: Minimal emotional linkage leads to fewer repeat visits.&lt;/li&gt;
&lt;li&gt;Basic Monetization: Simple subscription models miss out on the full potential of fan enthusiasm.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AI-Powered Fan Engagement
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;24/7 Interaction: Conversational AI empowers fans with always-on chat, voice, and video engagement.&lt;/li&gt;
&lt;li&gt;Immersive AI Merchandise: AI-integrated toys, devices, and other real-world products create deeper emotional experiences.&lt;/li&gt;
&lt;li&gt;Diverse Revenue Streams: Offer premium AI companions, IP licensing, limited-edition collectibles, and virtual goods.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuz8q4al7m2l4bt16ol0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuz8q4al7m2l4bt16ol0.png" alt="Image description" width="636" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Round-the-Clock Connection: Real-Time Virtual Personas
&lt;/h2&gt;

&lt;p&gt;Conversational AI enables fans to interact with virtual iterations of celebrities, influencers, or IP-based characters at any time. Fans can choose various “companion modes”—from “friend” to “mentor”—to get personalized content and emotional support. This fosters fan loyalty and continually boosts the value and visibility of the IP or celebrity brand.&lt;/p&gt;

&lt;h2&gt;
  
  
  Licensing IP for Conversational AI
&lt;/h2&gt;

&lt;p&gt;Entertainment companies can license the voice, appearance, and personality of a star or influencer to create AI assistants or tutorial services. Whether it’s language learning or motivational tidbits, fans can transform their devotion into daily functionality. Imagine your favorite singer coaching you in a new language or sharing words of encouragement—bridging fandom and real-world value.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fusing Digital Emotion with Physical Connection
&lt;/h2&gt;

&lt;p&gt;While virtual personas in the digital realm are compelling, integrating them into physical products makes the experience all the more tangible. Through AI toys, AI-enabled headphones, speakers, and wearable tech, fans can enjoy the feeling of real companionship with virtual personalities in everyday situations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI Keychains/Figurines: Preloaded with a celebrity’s voice, these AI items offer comforting, conversational support.&lt;/li&gt;
&lt;li&gt;AI Headphones &amp;amp; Speakers: With real-time translation or voice prompts, fans can “chat” with idols in any language.&lt;/li&gt;
&lt;li&gt;Wearable Devices: Smart bracelets or glasses deliver personalized notifications, event updates, and daily reminders from virtual personas.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97sbctpxs9kh2yb5vgh8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97sbctpxs9kh2yb5vgh8.png" alt="Image description" width="640" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future of AI Fan Engagement
&lt;/h2&gt;

&lt;p&gt;A glimpse into the next generation of fan interaction comes from BLACKPINK’s Jisoo and her AI companion technology:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deeper Connections: Jisoo’s AI interacts with fans in real time, tailoring conversations and emotional support.&lt;/li&gt;
&lt;li&gt;Seamless Integration: The AI memorizes fan preferences and schedules, offering everything from wake-up calls in Jisoo’s voice to evening livestream reminders.&lt;/li&gt;
&lt;li&gt;AI Merchandise: Toys and wearables synchronize with the virtual idol, providing haptic feedback and tracking her latest music releases, event news, and even TV appearances—ensuring fans never miss a beat.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Expanding Revenue Models through Conversational AI
&lt;/h2&gt;

&lt;p&gt;Conversational AI opens up multi-layered revenue avenues beyond simple subscription plans:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Paid AI Modes: Tiered virtual companions, from casual “friend” to premium “mentor.”&lt;/li&gt;
&lt;li&gt;IP Licensing: Deploy star voices and likenesses across games, apps, and other products.&lt;/li&gt;
&lt;li&gt;Digital-Physical Bundling: Combine in-app interaction with AI-driven merchandise to deliver extra value.&lt;/li&gt;
&lt;li&gt;White-Label Solutions: Record labels, agencies, and independent creators can quickly launch their own AI platforms.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Fandom as a Service: A Multi-Billion Dollar Opportunity
&lt;/h2&gt;

&lt;p&gt;With a global fan economy worth billions of dollars and still growing, a robust real-time communication and conversational AI infrastructure is crucial to thriving in this space.&lt;/p&gt;

&lt;p&gt;Enter Tencent RTC—a trusted technology partner that specializes in ultra-low-latency communication and streamlined development processes. From AI-powered celebrity avatars to beloved IP characters, Tencent RTC enables deep fan engagement through immersive interactions.&lt;/p&gt;

&lt;p&gt;For businesses looking to integrate these AI personas into hardware devices, Tencent RTC’s AIConversationKit offers a comprehensive software-hardware solution, effortlessly embedding conversational AI into toys, merchandise, and beyond. Guided by Tencent RTC’s innovative approach, fans evolve from distant observers into active participants, while artists and brands reap the rewards of stronger loyalty, enriched storytelling, and diverse monetization opportunities.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Build an iOS Conversational App with a UI
&lt;/h2&gt;

&lt;p&gt;Now, let’s take a closer look at how to integrate the AIConversationKit within just 20 minutes. By following this guide, you’ll navigate essential setup steps, culminating in a fully functional conversational AI project complete with a polished user interface—all in a short timeframe. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Environment Preparation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;iOS 13 or later is required.&lt;/p&gt;

&lt;p&gt;Xcode 13 or later.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Activating Service&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before initiating a conversation with AIConversationKit, you need to activate the conversational AI service for it in the console. For the specific steps, please refer to &lt;a href="https://trtc.io/document/69002?product=rtcengine&amp;amp;menulabel=serverfeaturesapis#" rel="noopener noreferrer"&gt;Activate Service&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: Download the AIConversationKit Component&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Go to &lt;a href="https://github.com/Tencent-RTC/AIConversationKit" rel="noopener noreferrer"&gt;Github&lt;/a&gt; to download the zip file. After decompressing it, you will see the &lt;strong&gt;AIConversationKit&lt;/strong&gt; directory, the &lt;strong&gt;AIConversationKit.podspec&lt;/strong&gt; file, and the &lt;strong&gt;Resource&lt;/strong&gt; directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49xjet4qlqmixcgrm69h.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49xjet4qlqmixcgrm69h.PNG" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: Configuration&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Copy the &lt;strong&gt;AIConversationKit&lt;/strong&gt; directory, the &lt;strong&gt;AIConversationKit.podspec&lt;/strong&gt; file, and the &lt;strong&gt;Resource&lt;/strong&gt; directory decompressed in &lt;a href="https://trtc.io/document/69005?product=rtcengine&amp;amp;menulabel=serverfeaturesapis#step2" rel="noopener noreferrer"&gt;step two&lt;/a&gt; above into your project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhr5bhgolh3r27zsrldbi.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhr5bhgolh3r27zsrldbi.PNG" alt="Image description" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, in the &lt;code&gt;Podfile&lt;/code&gt; of your project, add the following dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;   &lt;span class="n"&gt;pod&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="nc"&gt;AIConversationKit&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="o"&gt;,:&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="nc"&gt;AIConversationKit&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;podspec&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the configuration is complete, execute &lt;code&gt;pod install&lt;/code&gt; to install the dependencies.&lt;/p&gt;

&lt;p&gt;Use Xcode to open the &lt;code&gt;.xcworkspace&lt;/code&gt; and configure the certificate information:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwyje6r2fx4ng5gzo370e.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwyje6r2fx4ng5gzo370e.PNG" alt="Image description" width="800" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 4: Sign In&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Add the following code to your project. It logs the component in by calling the relevant APIs in &lt;code&gt;TUILogin&lt;/code&gt;. &lt;strong&gt;This step is critical&lt;/strong&gt;: &lt;code&gt;AIConversationKit&lt;/code&gt; features work properly only after a successful login, so carefully check that the relevant parameters are configured correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;import&lt;/span&gt; &lt;span class="kt"&gt;TUICore&lt;/span&gt;
&lt;span class="kd"&gt;import&lt;/span&gt; &lt;span class="kt"&gt;AIConversationKit&lt;/span&gt;

&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;application&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="nv"&gt;application&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;UIApplication&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;didFinishLaunchingWithOptions&lt;/span&gt; &lt;span class="nv"&gt;launchOptions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;UIApplication&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="kt"&gt;LaunchOptionsKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;]?)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="kt"&gt;Bool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;sdkAppId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1600000001&lt;/span&gt;  &lt;span class="c1"&gt;// Please replace with your sdkAppId&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;userId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"people"&lt;/span&gt;      &lt;span class="c1"&gt;// Please replace with your UserID&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;secretKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"xxx"&lt;/span&gt;      &lt;span class="c1"&gt;// Please replace with your sdkSecretKey&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;userSig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;GenerateTestUserSig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;genTestUserSig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;sdkAppId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;sdkAppId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;identifier&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;secrectkey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;secretKey&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="kt"&gt;TUILogin&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;login&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;Int32&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sdkAppId&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nv"&gt;userID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;userSig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;userSig&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"login success"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nv"&gt;fail&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"login failed, code: &lt;/span&gt;&lt;span class="se"&gt;\(&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="s"&gt;, error: &lt;/span&gt;&lt;span class="se"&gt;\(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="p"&gt;??&lt;/span&gt; &lt;span class="s"&gt;"nil"&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Step 5: Start Your First Conversational AI Project&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Once &lt;a href="https://trtc.io/document/54843#07002a30-aca6-4e6e-991f-0d8b65e315a6" rel="noopener noreferrer"&gt;TUILogin.login&lt;/a&gt; succeeds, use the following code to start a conversational AI session.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;startParams&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;StartAIConversationParams&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;sdkAppId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1600000001&lt;/span&gt;  &lt;span class="c1"&gt;// 1,Replace your sdkAppId&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;secretKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"xxx"&lt;/span&gt;      &lt;span class="c1"&gt;// 2,Replace your sdkSecretKey&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;aiRobotId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"robot_&lt;/span&gt;&lt;span class="se"&gt;\(&lt;/span&gt;&lt;span class="kt"&gt;TUILogin&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getUserID&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;??&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;aiRobotSig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;GenerateTestUserSig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;genTestUserSig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;sdkAppId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;sdkAppId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;identifier&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;aiRobotId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;secrectkey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;secretKey&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;startParams&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;agentConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;AIConversationDefine&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="kt"&gt;AgentConfig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generateDefaultConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;aiRobotId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;aiRobotId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;aiRobotSig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;aiRobotSig&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;startParams&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;secretId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"xxx"&lt;/span&gt;           &lt;span class="c1"&gt;// 3,Replace your secretId&lt;/span&gt;
&lt;span class="n"&gt;startParams&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;secretKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"xxx"&lt;/span&gt;          &lt;span class="c1"&gt;// 4,Replace your secretKey&lt;/span&gt;

&lt;span class="c1"&gt;// 5,Replace your llmConfig&lt;/span&gt;
&lt;span class="n"&gt;startParams&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;llmConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"{&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;LLMType&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;openai&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;Model&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;hunyuan-turbo-latest&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;SystemPrompt&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;You are a personal assistant&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;APIUrl&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;https://hunyuan.cloud.tencent.com/openai/v1/chat/completions&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;APIKey&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;xxxx&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;History&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:5,&lt;/span&gt;&lt;span 
class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;Streaming&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:true}"&lt;/span&gt;
&lt;span class="c1"&gt;// 6,Replace your ttsConfig&lt;/span&gt;
&lt;span class="n"&gt;startParams&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ttsConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"{&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;TTSType&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;tencent&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;AppId&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;xxx&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;SecretId&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;xxx&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;SecretKey&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;xxx&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;VoiceType&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;502001&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;Speed&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:1.25,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;Volume&lt;/span&gt;&lt;span 
class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:5,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;PrimaryLanguage&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:1,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;FastVoiceType&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"\"&lt;/span&gt;&lt;span class="s"&gt;}"&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;aiViewController&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;AIConversationViewController&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;aiParams&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;startParams&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;navigationController&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pushViewController&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;aiViewController&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;animated&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
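
&lt;p&gt;For readability, the escaped &lt;code&gt;llmConfig&lt;/code&gt; and &lt;code&gt;ttsConfig&lt;/code&gt; strings in the snippet above decode to JSON like the following, shown here under their parameter names (the &lt;code&gt;xxx&lt;/code&gt; values are placeholders for your own credentials):&lt;/p&gt;

```json
{
  "llmConfig": {
    "LLMType": "openai",
    "Model": "hunyuan-turbo-latest",
    "SystemPrompt": "You are a personal assistant",
    "APIUrl": "https://hunyuan.cloud.tencent.com/openai/v1/chat/completions",
    "APIKey": "xxxx",
    "History": 5,
    "Streaming": true
  },
  "ttsConfig": {
    "TTSType": "tencent",
    "AppId": "xxx",
    "SecretId": "xxx",
    "SecretKey": "xxx",
    "VoiceType": "502001",
    "Speed": 1.25,
    "Volume": 5,
    "PrimaryLanguage": 1,
    "FastVoiceType": ""
  }
}
```

&lt;p&gt;Each of the two objects is passed to the SDK as a single JSON string, which is why the quotes are escaped in the Swift code.&lt;/p&gt;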



&lt;p&gt;For &lt;code&gt;sdkAppId&lt;/code&gt; and &lt;code&gt;sdkSecretKey&lt;/code&gt;, use the data obtained in &lt;a href="https://trtc.io/document/69005?product=rtcengine&amp;amp;menulabel=serverfeaturesapis#07002a30-aca6-4e6e-991f-0d8b65e315a6" rel="noopener noreferrer"&gt;step four&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;On the application details page, select &lt;strong&gt;RTC-Engine &amp;gt; Conversational AI&lt;/strong&gt;, then refer to &lt;a href="https://trtc.io/document/68137?product=rtcengine&amp;amp;menulabel=serverfeaturesapis" rel="noopener noreferrer"&gt;No-Code Quick Integration Of Conversational AI Feature&lt;/a&gt; to configure the conversational AI service parameters, including the basic configuration, STT, LLM, and TTS. Click &lt;strong&gt;Quick Integration&lt;/strong&gt; in the lower-right corner, switch to iOS, and obtain the SecretId, SecretKey, and Config parameters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zwezbbt3si2qher1udw.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zwezbbt3si2qher1udw.PNG" alt="Image description" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the SecretId and SecretKey of TencentCloud API to &lt;code&gt;startParams.secretId&lt;/code&gt; and &lt;code&gt;startParams.secretKey&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Copy the Config information into a JSON parsing tool, such as &lt;a href="https://www.json.cn/" rel="noopener noreferrer"&gt;JsonUtil&lt;/a&gt;. Copy the string value of LLMConfig to &lt;code&gt;startParams.llmConfig&lt;/code&gt;, and the string value of TTSConfig to &lt;code&gt;startParams.ttsConfig&lt;/code&gt;.&lt;/p&gt;
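
&lt;p&gt;If you prefer a script to an online parsing tool, a few lines of Python do the same extraction. This is a minimal sketch: &lt;code&gt;raw_config&lt;/code&gt; below is a shortened, hypothetical stand-in for the Config text copied from the console, which has the same two keys but longer values.&lt;/p&gt;

```python
import json

# Hypothetical stand-in for the Config text copied from the console;
# the real export contains the same "LLMConfig" and "TTSConfig" keys,
# whose values are themselves JSON strings.
raw_config = '{"LLMConfig": "{\\"LLMType\\": \\"openai\\"}", "TTSConfig": "{\\"TTSType\\": \\"tencent\\"}"}'

config = json.loads(raw_config)

# These two string values are what goes into startParams.llmConfig
# and startParams.ttsConfig, unchanged.
llm_config = config["LLMConfig"]
tts_config = config["TTSConfig"]

print(llm_config)  # {"LLMType": "openai"}
print(tts_config)  # {"TTSType": "tencent"}
```

&lt;p&gt;Note that the extracted values stay as strings; the SDK parses them itself, so there is no need to re-serialize them.&lt;/p&gt;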

&lt;p&gt;This hands-on approach helps you efficiently bring AI companions and virtual personas to life on iOS devices, taking your fan engagement strategy to the next level.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Through AI-driven personalized communication and physical product tie-ins, creators and brands can build tighter, longer-lasting relationships with their audiences. In contrast to older, text-based fan platforms, the new era of fan engagement thrives on emotional resonance, real-time responsiveness, and immersive experiences.&lt;/p&gt;

&lt;p&gt;My name is Susie. I am a writer and Media Service Product Manager. I work with startups across the globe to build real-time communication solutions using SDKs and APIs.&lt;/p&gt;

&lt;p&gt;If you would like to explore AI-driven personalized communication, feel free to &lt;a href="https://sc-rp.tencentcloud.com:8106/t/RA" rel="noopener noreferrer"&gt;&lt;strong&gt;contact us&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to build a Conversational AI APP in Faith Tech</title>
      <dc:creator>LunarDrift</dc:creator>
      <pubDate>Mon, 02 Jun 2025 02:22:18 +0000</pubDate>
      <link>https://dev.to/susiewang/how-to-build-a-conversational-ai-app-in-faith-tech-683</link>
      <guid>https://dev.to/susiewang/how-to-build-a-conversational-ai-app-in-faith-tech-683</guid>
      <description>&lt;p&gt;As technology continues to advance, more faith communities are embracing Conversational AI to deepen connections, enrich worship services, and enhance overall community cohesion. &lt;a href="https://exponential.org/7-insights-on-how-churches-are-adopting-ai-in-2024/" rel="noopener noreferrer"&gt;&lt;strong&gt;Recent statistics&lt;/strong&gt;&lt;/a&gt; reveal that in 2023, around 37% of churches occasionally utilized AI tools. By 2024, this figure is projected to surpass 66%, indicating a rapid growth in the fusion of technology and faith.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fhxvokosv1uj8azt3fa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fhxvokosv1uj8azt3fa.png" alt="Image description" width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How Conversational AI Elevates the Faith Experience
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://trtc.io/solutions/conversational-ai" rel="noopener noreferrer"&gt;&lt;strong&gt;Conversational AI&lt;/strong&gt;&lt;/a&gt; excels at interpreting sacred texts, guiding prayers, and delivering culturally diverse and uplifting messages—all of which provide emotional and spiritual support. This leads to a more meaningful &lt;a href="https://trtc.io/solutions/faith-spiritual-wellness" rel="noopener noreferrer"&gt;&lt;strong&gt;faith journey&lt;/strong&gt;&lt;/a&gt; that caters to individual needs and preferences. Below are several practical examples showcasing the potential of voice-based Conversational AI in the faith-tech space.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. AI Astrologer: Convenient Astrology Consultations
&lt;/h3&gt;

&lt;p&gt;A multilingual “virtual astrologer” can analyze planetary positions in real time, offering personalized horoscopes and insights. When users pose questions, the system can instantly provide astrological predictions, responding in multiple languages through voice interaction. This immediate and adaptable approach not only reduces communication barriers but also makes astrology guidance more accessible to a wider audience.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Virtual Mentor: Round-the-Clock Spiritual Guidance
&lt;/h3&gt;

&lt;p&gt;A &lt;a href="https://trtc.io/solutions/conversational-ai" rel="noopener noreferrer"&gt;&lt;strong&gt;Conversational AI-powered&lt;/strong&gt;&lt;/a&gt; virtual mentor can offer scriptural references and spiritual counsel on demand. Using Natural Language Processing (NLP) and voice support in multiple languages, users simply ask questions, and the mentor suggests relevant passages or context-based interpretations. This online model helps individuals maintain spiritual discipline in their busy schedules and provides readily available support without geographic limitations.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Intelligent Guardian: Accessible Wisdom at Home
&lt;/h3&gt;

&lt;p&gt;An intelligent guardian is a specialized hardware device integrated with &lt;a href="https://trtc.io/solutions/conversational-ai" rel="noopener noreferrer"&gt;&lt;strong&gt;Conversational AI&lt;/strong&gt;&lt;/a&gt; that delivers daily readings, answers faith-related questions, and provides spiritual encouragement. Through cloud-based learning and real-time translation, it automatically adjusts to personal preferences, offering an experience closely aligned with each user’s needs. For those seeking regular religious practice or a moment of reflection, this blend of technology and tradition can serve as a helpful resource.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. AI Avatars for Faith-Based Shopping: Adding Cultural Context to Purchases
&lt;/h3&gt;

&lt;p&gt;An AI avatar can recommend faith-oriented items—such as sacred books, prayer beads, or other devotional products—while sharing relevant spiritual insights. By supporting voice interactions in multiple languages, the avatar educates a global community of believers on each product’s cultural or religious significance. When integrated with live-streaming events or online gatherings, these avatars further encourage interaction between religious leaders, fellow believers, and online shoppers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a Web-Based Real-Time Conversation App with the Tencent RTC SDK
&lt;/h2&gt;

&lt;p&gt;Below is an overview of how you can implement a no-UI, real-time chat application in a web environment using the &lt;a href="https://trtc.io/document/68337?product=rtcengine&amp;amp;menulabel=serverfeaturesapis" rel="noopener noreferrer"&gt;&lt;strong&gt;Tencent RTC SDK&lt;/strong&gt;&lt;/a&gt;. This approach focuses on maximizing audio quality while minimizing interface complexities, helping you create a streamlined solution for voice-based AI interactions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wuha0p718nag2dumkiz.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wuha0p718nag2dumkiz.PNG" alt="Image description" width="800" height="765"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://trtc.io/document/69002?product=rtcengine&amp;amp;menulabel=serverfeaturesapis" rel="noopener noreferrer"&gt;Activate the Service&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;I. Integrating the RTC Engine SDK&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Importing the RTC Engine SDK and Entering the Room&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://trtc.io/document/59649?platform=web&amp;amp;product=rtcengine&amp;amp;menulabel=sdk" rel="noopener noreferrer"&gt;Integration Guide for Web &amp;amp; H5 Without UI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Publishing the Audio Stream&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use the &lt;a href="https://web.sdk.qcloud.com/trtc/webrtc/v5/doc/zh-cn/TRTC.html#startLocalAudio" rel="noopener noreferrer"&gt;trtc.startLocalAudio()&lt;/a&gt; method to enable the microphone and publish the audio stream to the room.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="n"&gt;await&lt;/span&gt; &lt;span class="n"&gt;trtc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;startLocalAudio&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;II. Initiating an AI Conversation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Starting an AI Conversation: StartAIConversation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Call the &lt;a href="https://trtc.io/document/64963?product=rtcengine&amp;amp;menulabel=serverfeaturesapis" rel="noopener noreferrer"&gt;StartAIConversation&lt;/a&gt; API in the backend to add the chatbot to the room and initiate an AI conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Descriptions of the currently supported &lt;code&gt;STTConfig&lt;/code&gt;, &lt;code&gt;LLMConfig&lt;/code&gt;, and &lt;code&gt;TTSConfig&lt;/code&gt; configurations:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://trtc.io/document/69592?product=rtcengine&amp;amp;menulabel=serverfeaturesapis" rel="noopener noreferrer"&gt;Speech-to-Text(STTConfig)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://trtc.io/document/68338?product=rtcengine&amp;amp;menulabel=serverfeaturesapis#" rel="noopener noreferrer"&gt;Large Language Model Configuration(LLMConfig)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://trtc.io/document/68340?product=rtcengine&amp;amp;menulabel=serverfeaturesapis#" rel="noopener noreferrer"&gt;Text-To-Speech Configuration(TTSConfig)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before calling the &lt;a href="https://trtc.io/document/64963?product=rtcengine&amp;amp;menulabel=serverfeaturesapis" rel="noopener noreferrer"&gt;StartAIConversation&lt;/a&gt; API for the first time, it is recommended that you verify the &lt;code&gt;LLMConfig&lt;/code&gt; and &lt;code&gt;TTSConfig&lt;/code&gt; parameters with the following tool:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/notedit/trtc-ai-api-check" rel="noopener noreferrer"&gt;Conversational AI Parameter Verification&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;III. Receiving the Data of AI Conversation Subtitles and Chatbot Status&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You can use the &lt;a href="https://trtc.io/document/47866" rel="noopener noreferrer"&gt;Receive Custom Messages&lt;/a&gt; capability of the RTC Engine SDK to listen for callbacks in the client and receive data such as real-time subtitles and chatbot status. &lt;strong&gt;The cmdID value is fixed at 1&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Receiving Real-Time Subtitles&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Message format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s"&gt;"type"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// 10000 indicates the subtitles are real-time subtitles.&lt;/span&gt;
  &lt;span class="s"&gt;"sender"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"user_a"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// User ID of the sender (speaker).&lt;/span&gt;
  &lt;span class="s"&gt;"receiver"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;[],&lt;/span&gt; &lt;span class="c1"&gt;// List of receivers' user IDs. The message is actually broadcasted in the room.&lt;/span&gt;
  &lt;span class="s"&gt;"payload"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
     &lt;span class="s"&gt;"text"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Text recognized by speech recognition.&lt;/span&gt;
     &lt;span class="s"&gt;"translation_text"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Translated text.&lt;/span&gt;
     &lt;span class="s"&gt;"start_time"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="s"&gt;"00:00:01"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Start time of a sentence.&lt;/span&gt;
     &lt;span class="s"&gt;"end_time"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="s"&gt;"00:00:02"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// End time of a sentence.&lt;/span&gt;
     &lt;span class="s"&gt;"roundid"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"xxxxx"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Unique ID of a conversation round.&lt;/span&gt;
     &lt;span class="s"&gt;"end"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="c1"&gt;// If the value is true, the sentence is a complete one.&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Receiving the Chatbot Status Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Message format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s"&gt;"type"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10001&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Chatbot status.&lt;/span&gt;
  &lt;span class="s"&gt;"sender"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"user_a"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// User ID of the sender, which is the chatbot ID in this case.&lt;/span&gt;
  &lt;span class="s"&gt;"receiver"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;[],&lt;/span&gt; &lt;span class="c1"&gt;// List of receivers' user IDs. The message is actually broadcasted in the room.&lt;/span&gt;
  &lt;span class="s"&gt;"payload"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;"roundid"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"xxx"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Unique ID of a conversation round.&lt;/span&gt;
    &lt;span class="s"&gt;"timestamp"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;"state"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;      &lt;span class="c1"&gt;// 1: listening; 2: thinking; 3: speaking; 4: interrupted.&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Example Code&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="n"&gt;trtcClient&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Tencent&lt;/span&gt; &lt;span class="no"&gt;RTC&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;EVENT&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;CUSTOM_MESSAGE&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;let&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TextDecoder&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;decode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;let&lt;/span&gt; &lt;span class="n"&gt;jsonData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;JSON&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;parse&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;log&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="err"&gt;`&lt;/span&gt;&lt;span class="n"&gt;receive&lt;/span&gt; &lt;span class="n"&gt;custom&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;userId&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="nl"&gt;cmdId:&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;cmdId&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="nl"&gt;seq:&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;seq&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="nl"&gt;data:&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="err"&gt;`&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jsonData&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;type&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;jsonData&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;payload&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;end&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Subtitle intermediate state&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jsonData&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;type&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;jsonData&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;payload&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;end&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
       &lt;span class="c1"&gt;// That is all for this sentence. &lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
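&lt;p&gt;Putting the two message formats together, a small helper can translate the numeric chatbot states into readable labels. This is a minimal sketch based on the state codes documented in the message format above:&lt;/p&gt;

```javascript
// Interpret the chatbot-status message (type 10001) described above.
// The numeric state codes follow the comments in the message format:
// 1: listening; 2: thinking; 3: speaking; 4: interrupted.
const BOT_STATES = { 1: "listening", 2: "thinking", 3: "speaking", 4: "interrupted" };

function describeBotState(msg) {
  if (msg.type !== 10001) return null; // Not a chatbot-status message.
  return {
    roundId: msg.payload.roundid,                  // Conversation round ID.
    state: BOT_STATES[msg.payload.state] || "unknown"
  };
}
```

&lt;p&gt;Calling this inside the &lt;code&gt;CUSTOM_MESSAGE&lt;/code&gt; listener lets the UI show, for example, a "thinking" indicator while the chatbot prepares its reply.&lt;/p&gt;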



&lt;h3&gt;
  
  
  &lt;strong&gt;IV. Sending Custom Messages&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Custom messages are sent from the client via the RTC Engine SDK; the &lt;strong&gt;cmdID is fixed at 2&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;See &lt;a href="https://trtc.io/document/47866" rel="noopener noreferrer"&gt;Sending and Receiving Messages&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can skip the ASR (STT) step by sending custom text messages, communicating with the AI service directly through text.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;  &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s"&gt;"type"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20000&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Send a custom text message in the client.&lt;/span&gt;
  &lt;span class="s"&gt;"sender"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"user_a"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// User ID of the sender. The server will check if this user ID is valid.&lt;/span&gt;
  &lt;span class="s"&gt;"receiver"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"user_bot"&lt;/span&gt;&lt;span class="o"&gt;],&lt;/span&gt; &lt;span class="c1"&gt;// List of receivers' user IDs. You need to enter the chatbot user ID only. The server will check if this user ID is valid.&lt;/span&gt;
  &lt;span class="s"&gt;"payload"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"uuid"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Message ID for troubleshooting. You can use the UUID.&lt;/span&gt;
    &lt;span class="s"&gt;"message"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"xxx"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Message content.&lt;/span&gt;
    &lt;span class="s"&gt;"timestamp"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;123&lt;/span&gt; &lt;span class="c1"&gt;// Timestamp for troubleshooting.&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To interrupt the chatbot while it is speaking, send an interruption signal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s"&gt;"type"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20001&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Send an interruption signal in the client.&lt;/span&gt;
  &lt;span class="s"&gt;"sender"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"user_a"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// User ID of the sender. The server will check if this user ID is valid.&lt;/span&gt;
  &lt;span class="s"&gt;"receiver"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"user_bot"&lt;/span&gt;&lt;span class="o"&gt;],&lt;/span&gt; &lt;span class="c1"&gt;// List of receivers' user IDs. You need to enter the chatbot user ID only. The server will check if this user ID is valid.&lt;/span&gt;
  &lt;span class="s"&gt;"payload"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"uuid"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Message ID for troubleshooting. You can use the UUID.&lt;/span&gt;
    &lt;span class="s"&gt;"timestamp"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;123&lt;/span&gt; &lt;span class="c1"&gt;// Timestamp for troubleshooting.&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Example Code&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s"&gt;"type"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20001&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
  &lt;span class="s"&gt;"sender"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"user_a"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
  &lt;span class="s"&gt;"receiver"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"user_bot"&lt;/span&gt;&lt;span class="o"&gt;],&lt;/span&gt;
  &lt;span class="s"&gt;"payload"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"uuid"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;"timestamp"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;123&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;};&lt;/span&gt;

&lt;span class="n"&gt;trtc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sendCustomMessage&lt;/span&gt;&lt;span class="o"&gt;({&lt;/span&gt;
  &lt;span class="nl"&gt;cmdId:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
  &lt;span class="nl"&gt;data:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TextEncoder&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;encode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="no"&gt;JSON&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;stringify&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;)).&lt;/span&gt;&lt;span class="na"&gt;buffer&lt;/span&gt;
&lt;span class="o"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;V. Stopping the AI Conversation and Exiting the RTC Engine Room&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Stop the AI conversation task in the server.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Call the &lt;a href="https://trtc.io/document/65296?product=rtcengine&amp;amp;menulabel=serverfeaturesapis" rel="noopener noreferrer"&gt;StopAIConversation&lt;/a&gt; API through the backend and terminate this conversation.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Exit the RTC Engine room in the client. For details, see:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://trtc.io/document/59649?platform=web&amp;amp;product=rtcengine&amp;amp;menulabel=sdk" rel="noopener noreferrer"&gt;Integration Guide for Web &amp;amp; H5 Without UI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjwwd369wxwfnhg81e4b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjwwd369wxwfnhg81e4b.png" alt="Image description" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://trtc.io/solutions/conversational-ai" rel="noopener noreferrer"&gt;&lt;strong&gt;Conversational AI&lt;/strong&gt;&lt;/a&gt; is opening up new possibilities for faith-based communities, making spiritual practices more inclusive, personalized, and engaging. As more congregations adopt these innovative solutions, the future of religious and spiritual life will likely witness even greater breakthroughs in cultural diversity, one-on-one personalization, and interactive experiences.&lt;/p&gt;

&lt;p&gt;To learn more about integrating Tencent RTC’s voice AI features into your application or hardware, explore our Conversational AI Engine.&lt;/p&gt;

&lt;p&gt;My name is Susie. I am a writer and Media Service Product Manager. I work with startups across the globe to build real-time communication solutions using SDKs and APIs.&lt;/p&gt;

&lt;p&gt;If you want to discover the new opportunities conversational AI can bring to faith and technology, welcome to &lt;a href="https://sc-rp.tencentcloud.com:8106/t/RA" rel="noopener noreferrer"&gt;&lt;strong&gt;contact us&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Innovating Music Social Experiences with Real-Time Beauty Effects</title>
      <dc:creator>LunarDrift</dc:creator>
      <pubDate>Sat, 24 May 2025 13:51:05 +0000</pubDate>
      <link>https://dev.to/susiewang/innovating-music-social-experiences-with-real-time-beauty-effects-4n37</link>
      <guid>https://dev.to/susiewang/innovating-music-social-experiences-with-real-time-beauty-effects-4n37</guid>
      <description>&lt;p&gt;Many people have their eyes set on short video platforms, but StarMaker—a music social app—has quietly risen to global prominence. Featuring “K-singing + social,” StarMaker incorporates voice rooms, live streams, and casual games. Since its launch in 2016, the platform has rapidly amassed over 310 million users worldwide, thanks to its high-quality audio/video experience, varied interaction features, and a comprehensive music library. It now consistently ranks among the top music/audio apps in more than 130 countries and regions.&lt;/p&gt;

&lt;p&gt;However, with explosive user growth comes new challenges in performance and user experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  CHALLENGE
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Optimizing Low-End Device Compatibility &amp;amp; Beauty Performance
&lt;/h3&gt;

&lt;p&gt;StarMaker’s user base is mainly in regions like the Middle East, Southeast Asia, and South America, where affordable Android phones are widespread. In some cases, customers still use five-year-old devices with CPUs of only 1.8–2.4GHz and OS versions below Android 11. Running online karaoke, live streaming, and beauty effects on these underpowered phones often leads to lag, overheating, or frequent disconnects.&lt;/p&gt;

&lt;p&gt;Ensuring satisfying &lt;a href="https://trtc.io/solutions/face-filters" rel="noopener noreferrer"&gt;&lt;strong&gt;beauty-effects&lt;/strong&gt;&lt;/a&gt; performance on low-spec handsets became a pressing issue for StarMaker. Initially, the team attempted an in-house solution, but adapting to countless device models and optimizing each filter (face slimming, eye enlargement, stickers, etc.) proved extremely labor-intensive.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. From Hardware Compatibility to Localized Beauty Styles
&lt;/h3&gt;

&lt;p&gt;As the StarMaker user demographic became more diverse, the app needed to address variations in skin tones, facial structures, and cultural aesthetics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Face recognition and effects fitting:&lt;/strong&gt; By training and testing algorithms on multiple ethnic facial features, the SDK is able to deliver more precise and natural beauty outcomes.&lt;/li&gt;
&lt;li&gt;A rich library of stickers and effect materials that reflect global cultural events and everyday life scenes. Regular updates ensure users can always find trending, creative options aligned with their personal style.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flm93wavdfoggjudvvh1v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flm93wavdfoggjudvvh1v.png" alt="Image description" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CORE STRATEGY: Maintaining Beauty Quality While Maximizing Performance
&lt;/h2&gt;

&lt;p&gt;After evaluating resource allocation and product demands, StarMaker decided to integrate the &lt;a href="https://trtc.io/products/beauty" rel="noopener noreferrer"&gt;&lt;strong&gt;Tencent RTC Beauty AR SDK&lt;/strong&gt;&lt;/a&gt;. Within just two weeks, they completed in-depth optimization on both performance and visual effects, enabling devices over a decade old to maintain a stable 30 fps while applying beauty filters.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Streamlined Rendering Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://trtc.io/products/beauty" rel="noopener noreferrer"&gt;&lt;strong&gt;Tencent RTC SDK&lt;/strong&gt;&lt;/a&gt; examined basic beautification, makeup, and dynamic effects, merging redundant rendering processes and minimizing extra rendering nodes to accelerate overall performance.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Lightweight Makeup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Traditional “full makeup” was broken into modular components, each using minimal configuration files for combination. This reduces runtime overhead, while supporting flexible layering of other special effects, resulting in lower CPU and memory usage.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Dynamic Sticker Loading and Cache Optimization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By improving cache hit rates and refining loading logic, the SDK cuts down on frequent parsing, enhances performance for large animated overlays, and expedites sticker loading and playback.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Aside from general improvements, Tencent RTC pursued deeper optimizations for low-end devices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Different AI models are chosen according to scene requirements—lightweight models for basic beautification and high-precision models for more intensive makeup effects.&lt;/li&gt;
&lt;li&gt;A custom fast-skin-smoothing algorithm plus flexible face-mesh configurations further reduce CPU and memory usage, automatically toggling or disabling certain features when not needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks to this multi-layered, targeted optimization, the SDK’s per-frame processing time on low-end phones dropped by up to 50%. Whether broadcasting at 1080p, 720p, or 480p, users can enjoy smoother &lt;a href="https://trtc.io/solutions/face-filters" rel="noopener noreferrer"&gt;&lt;strong&gt;beauty effects&lt;/strong&gt;&lt;/a&gt; at approximately 30 fps.&lt;/p&gt;
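&lt;p&gt;To put the 30 fps target in concrete terms: each frame has a budget of 1000/30 ≈ 33 ms, so halving per-frame processing time directly converts dropped frames into headroom. A minimal sketch (the absolute timings below are illustrative assumptions, not figures from the article):&lt;/p&gt;

```java
class FrameBudget {
    // Each frame at a given frame rate must finish within 1000/fps milliseconds.
    static double frameBudgetMs(int fps) {
        return 1000.0 / fps;
    }

    public static void main(String[] args) {
        double budget = frameBudgetMs(30); // ~33.3 ms per frame at 30 fps
        // Hypothetical numbers: if beauty processing took 40 ms/frame before
        // optimization, a 50% cut brings it to 20 ms, leaving ~13 ms of the
        // 33 ms budget for capture, encode, and render.
        double before = 40.0;
        double after = before * 0.5;
        System.out.printf("budget=%.1f ms, before=%.1f ms, after=%.1f ms%n",
                budget, before, after);
    }
}
```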

&lt;h2&gt;
  
  
  How to build a real-time beautification app on the web
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://trtc.io/document/68963?platform=web&amp;amp;product=beautyar" rel="noopener noreferrer"&gt;&lt;strong&gt;Beauty AR Web&lt;/strong&gt;&lt;/a&gt; allows you to implement AR beautification, filters, makeup, stickers, and other features on websites. &lt;a href="https://trtc.io/document/68963?platform=web&amp;amp;product=beautyar" rel="noopener noreferrer"&gt;&lt;strong&gt;This guide&lt;/strong&gt;&lt;/a&gt; describes how to quickly run a web application that supports real-time beautification locally.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1. Create a license&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Applying for a trial license&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure you have created a web license as instructed in &lt;a href="https://trtc.io/document/60218?platform=web&amp;amp;product=beautyar#" rel="noopener noreferrer"&gt;Applying for a trial license&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Fill in the project name and enter the local development address as the domain (for example, &lt;code&gt;test.webar.com&lt;/code&gt;). Click Confirm to complete the creation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Getting the App ID, license key, and token&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After creation, the test project appears in the project list, where you can get the &lt;strong&gt;&lt;code&gt;License Token&lt;/code&gt;&lt;/strong&gt; for Beauty AR Web, the &lt;strong&gt;&lt;code&gt;License key&lt;/code&gt;&lt;/strong&gt; for the test project, and the &lt;strong&gt;&lt;code&gt;App ID&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hk34muwnufvy7w3wx7z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hk34muwnufvy7w3wx7z.png" alt="Image description" width="800" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2. Run locally&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Clone the &lt;a href="https://github.com/tencentcloud-webar/web-demo-en/tree/master/quick-start" rel="noopener noreferrer"&gt;sample project&lt;/a&gt; code.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt; &lt;span class="n"&gt;git&lt;/span&gt; &lt;span class="n"&gt;clone&lt;/span&gt; &lt;span class="nl"&gt;https:&lt;/span&gt;&lt;span class="c1"&gt;//github.com/tencentcloud-webar/web-demo-en.git&lt;/span&gt;
 &lt;span class="n"&gt;cd&lt;/span&gt; &lt;span class="n"&gt;quick&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;start&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Replace the specified configuration items in &lt;code&gt;index.js&lt;/code&gt; with the license key, token, and &lt;code&gt;APPID&lt;/code&gt; obtained in &lt;a href="https://trtc.io/document/68963?platform=web&amp;amp;product=beautyar#.E6.AD.A5.E9.AA.A41.EF.BC.9A.E5.88.9B.E5.BB.BA-license.5B.5D(id.3Astep1)" rel="noopener noreferrer"&gt;step 1&lt;/a&gt;, as shown below:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="cm"&gt;/** ----- Authentication configuration ----- */&lt;/span&gt;

&lt;span class="cm"&gt;/**
 * Tencent Cloud account's APPID
 * 
 * You can also view your APPID in the [Account Center](https://console.tencentcloud.com/developer).
 */&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="no"&gt;APPID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="err"&gt;''&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Set it to your Tencent Cloud account APPID.&lt;/span&gt;

&lt;span class="cm"&gt;/**
 * Web LicenseKey
 * 
 * obtained from Before You Start
 */&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="no"&gt;LICENSE_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="err"&gt;''&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Set it to your license key.&lt;/span&gt;

&lt;span class="cm"&gt;/**
 * The token used to calculate the signature.
 * 
 * Note: This method is only suitable for debugging. In a production environment, you should store the token and calculate the signature on your server. The front end can get the signature by calling an API. For details, see
 * https://trtc.io/document/50099?platform=web&amp;amp;product=beautyar#e4ce3483-41f7-4391-83ae-f4b61e221ea0
 */&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="n"&gt;token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="err"&gt;''&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Set it to your token.&lt;/span&gt;

&lt;span class="cm"&gt;/** ----------------------- */&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Run the project in the local development environment.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Run the following commands successively in the project directory and access &lt;code&gt;localhost:8090&lt;/code&gt; in the browser to try out the capabilities of Beauty AR Web.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nc"&gt;Install&lt;/span&gt; &lt;span class="n"&gt;dependencies&lt;/span&gt;
&lt;span class="n"&gt;npm&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; 

&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nc"&gt;Compile&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;run&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;code&lt;/span&gt;
&lt;span class="n"&gt;npm&lt;/span&gt; &lt;span class="n"&gt;run&lt;/span&gt; &lt;span class="n"&gt;dev&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After following the steps above, you can try out the SDK's filters and effects in a desktop browser. You can use the built-in materials to try out various makeup filters and effects as instructed in &lt;a href="https://trtc.io/document/68777?platform=web&amp;amp;product=beautyar" rel="noopener noreferrer"&gt;&lt;strong&gt;Start Integration&lt;/strong&gt;&lt;/a&gt;, or explore more capabilities of Beauty AR Web such as custom stickers, makeup, and filters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;From tackling performance on entry-level hardware to aligning aesthetics across different cultures, StarMaker’s journey in music-based social networking is powered by a constant drive to refine its user experience. Its success underscores that, by genuinely focusing on user needs and perfecting product features, music and social interaction can blend seamlessly—delivering a new note of “sing-and-chat” leisure to audiences worldwide.&lt;/p&gt;

&lt;p&gt;My name is Susie. I am a writer and Media Service Product Manager. I work with startups across the globe to build real-time communication solutions using SDKs and APIs.&lt;/p&gt;

&lt;p&gt;If you want to build face filters or other special effects into your app, feel free to &lt;a href="https://sc-rp.tencentcloud.com:8106/t/RA" rel="noopener noreferrer"&gt;&lt;strong&gt;contact us&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>AI Pets: Harnessing Conversational AI for Emotional Companionship</title>
      <dc:creator>LunarDrift</dc:creator>
      <pubDate>Sat, 24 May 2025 10:30:47 +0000</pubDate>
      <link>https://dev.to/susiewang/ai-pets-harnessing-conversational-ai-for-emotional-companionship-2mb8</link>
      <guid>https://dev.to/susiewang/ai-pets-harnessing-conversational-ai-for-emotional-companionship-2mb8</guid>
      <description>&lt;p&gt;In the post-pandemic era, more and more people have been paying attention to the trend of “companion technology,” with AI pets emerging as one of the highlights. These cute and intelligent “robot pets” are gradually finding their way into people’s lives, offering emotional solace to busy urban dwellers, individuals living alone, and even the elderly.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. The Rise of AI Pets: From Loneliness to Companionship
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) Emotional Needs After the Pandemic
&lt;/h3&gt;

&lt;p&gt;During COVID-19, many countries implemented social distancing measures, drastically reducing face-to-face interactions. Meanwhile, the widespread adoption of remote work left many feeling “socially deprived” and “lonely.” AI pets have since become a highly appealing companion solution: they offer an interactive experience akin to real pets, can remain on standby 24/7, and help fulfill people’s needs for emotional connections and instant companionship.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8hmfb7ps3zb36s81k3p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8hmfb7ps3zb36s81k3p.jpg" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Typical Examples: Moflin and LOVOT
&lt;/h3&gt;

&lt;p&gt;In Japan, Ms. Haruka Uto, who lives with three robot pets (two Moflins and one LOVOT), provides an illustrative example. Moflin starts out like a baby—fragile and naive—but soon learns to “move boldly” and exhibit clear emotional tendencies. By recognizing the owner’s voice, these robots can respond accordingly. Another robot called LOVOT, resembling a small “penguin” on wheels with round, attentive eyes, can roam freely around the home, sing, and even show “signs of jealousy” in its interactions.&lt;/p&gt;

&lt;h3&gt;
  
  
  3) Diverse Forms and Significant Market Potential
&lt;/h3&gt;

&lt;p&gt;As robotics hardware and AI algorithms advance, the “social robots” market has expanded rapidly. According to IMARC Group, the global social robots market was already approaching &lt;a href="https://www.imarcgroup.com/social-robots-market" rel="noopener noreferrer"&gt;&lt;strong&gt;US$7 billion in 2023&lt;/strong&gt;&lt;/a&gt; and could reach US$57 billion by 2032.&lt;/p&gt;
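&lt;p&gt;As a quick sanity check on those projections, growing from US$7 billion in 2023 to US$57 billion in 2032 (nine years) implies a compound annual growth rate of roughly 26%:&lt;/p&gt;

```java
class MarketGrowth {
    // Implied compound annual growth rate between two values over n years.
    static double cagr(double start, double end, int years) {
        return Math.pow(end / start, 1.0 / years) - 1.0;
    }

    public static void main(String[] args) {
        double rate = cagr(7.0, 57.0, 9); // IMARC figures: 2023 -> 2032
        System.out.printf("Implied CAGR: %.1f%%%n", rate * 100); // ~26.2%
    }
}
```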

&lt;p&gt;These devices come in many shapes and sizes, from &lt;a href="https://us.softbankrobotics.com/pepper" rel="noopener noreferrer"&gt;&lt;strong&gt;the bionic robot Pepper&lt;/strong&gt;&lt;/a&gt; to the desktop robot Emo, and even cutesy companions like Moflin and LOVOT. By relying on high-performance microprocessors—and the ability to recognize voices and facial expressions—they can “read” human emotions and interact in real-time, safely, and efficiently.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Emotional Intelligence and Technical Challenges of AI Pets
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) Emotional Recognition and Expression
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Sensors and Algorithms:&lt;/strong&gt; Cameras and microphones capture voice tones and facial expressions to identify emotional states. Coupled with big data and machine learning algorithms, the system then generates appropriate “emotional feedback.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Battery Life and Computing Power:&lt;/strong&gt; Robot pets are typically small devices and must balance battery efficiency with computational demands. Large-scale use of LLMs (large language models) for in-depth conversation remains limited at this stage, so finding the right balance between functionality and portability is key.&lt;/p&gt;
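&lt;p&gt;The sense, classify, respond loop described above can be sketched as a tiny rule table. This is purely a hypothetical illustration: real pets replace the table with on-device ML models, but the loop shape is the same.&lt;/p&gt;

```java
import java.util.Map;

class EmotionResponder {
    // Hypothetical mapping from an emotion label (as produced by voice and
    // facial-expression analysis) to a feedback action the pet performs.
    static final Map<String, String> RESPONSES = Map.of(
            "happy", "wag tail and chirp",
            "sad", "nuzzle closer and purr softly",
            "angry", "back away and lower ears");

    static String respond(String detectedEmotion) {
        // Unknown emotions fall back to a neutral idle animation.
        return RESPONSES.getOrDefault(detectedEmotion, "play idle animation");
    }

    public static void main(String[] args) {
        System.out.println(respond("sad"));     // nuzzle closer and purr softly
        System.out.println(respond("curious")); // play idle animation
    }
}
```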

&lt;h3&gt;
  
  
  2) Broader Target Groups: Elderly, Office Workers, and Students
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Elderly:&lt;/strong&gt; In Japan, robots are already being used in nursing homes and communities to alleviate loneliness and offer companionship and emotional support.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Office Workers:&lt;/strong&gt; The pressure-filled routine of commuting between work and home leaves many professionals craving a companion for real-time interaction and care.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Students:&lt;/strong&gt; In dorms or rental accommodations, AI pets can help ease academic and social pressures by providing comfort and emotional backing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ma4gy3uvriep2c6xnsr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ma4gy3uvriep2c6xnsr.png" alt="Image description" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Future Prospects and Other Possibilities for AI Pet Services
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) Industry Trends: Shifting From “Interactive” to “Heartwarming”
&lt;/h3&gt;

&lt;p&gt;Today’s AI pets are far beyond being mere mechanical devices. They leverage advanced AI solutions for emotional decision-making and memory consolidation, even displaying “personalized” reactions to their owners. Many in-development robots incorporate tactile warmth or empathic design to enhance the authenticity of human-machine interactions.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Extending to Multiple Scenarios: Elderly Care, Rehabilitation, Healthcare, and Education
&lt;/h3&gt;

&lt;p&gt;Beyond companionship in daily life, AI pets will increasingly expand into professional sectors such as elderly care, rehabilitation, psychological counseling, and educational support. By integrating robust dialogue engines or remote collaboration features, AI pets can help healthcare workers or teachers improve individual case tracking and interactive efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Tencent RTC’s Conversational AI: Letting AI Pets “Talk” and Show Warmth
&lt;/h2&gt;

&lt;p&gt;At the heart of AI pet interaction lies real-time, stable, low-latency voice and facial recognition capabilities, combined with seamless emotional dialogue. To achieve this, the Tencent RTC solution—integrated with Conversational AI—offers key benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High Concurrency and Low Latency:&lt;/strong&gt; A global network of nodes paired with intelligent routing ensures real-time communication and interaction among pet robots and users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intelligent Speech Recognition and Emotion Analysis:&lt;/strong&gt; By combining speech AI and LLM models with expandable emotional computation, AI pets gain richer “listen–evaluate–respond” capabilities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intelligent Dialog Management:&lt;/strong&gt; Multi-level intent detection and dialog logic management allow AI pets to engage in more natural, continuous interactions—never “too intrusive” and always there when users need them most.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open APIs and Scenario Customization:&lt;/strong&gt; The Tencent RTC platform supports various integration methods and can be highly customized, allowing developers to seamlessly implement AI capabilities into robot pets to suit different hardware and application needs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. How to Integrate the AIConversationKit Component Effortlessly
&lt;/h2&gt;

&lt;p&gt;Now I will guide you on how to integrate the &lt;code&gt;AIConversationKit&lt;/code&gt; component in a short time. Following &lt;a href="https://trtc.io/document/69004?product=rtcengine&amp;amp;menulabel=serverfeaturesapis" rel="noopener noreferrer"&gt;&lt;strong&gt;this guide&lt;/strong&gt;&lt;/a&gt;, you will complete the following key steps within 20 minutes and implement a conversational AI project with a complete UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5j65ottaprnx1uc98az.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5j65ottaprnx1uc98az.png" alt="Image description" width="800" height="1734"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Activating Service&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before initiating a conversation with AIConversationKit, go to the console and activate the conversational AI service. For specific steps, see &lt;a href="https://trtc.io/document/69002?product=rtcengine&amp;amp;menulabel=serverfeaturesapis#" rel="noopener noreferrer"&gt;Activate Service&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: Download the AIConversationKit Component&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Clone the code from &lt;a href="https://github.com/Tencent-RTC/AIConversationKit" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, then copy the &lt;code&gt;aiconversationkit&lt;/code&gt; subdirectory under the &lt;code&gt;Android&lt;/code&gt; directory to the directory at the same level as &lt;code&gt;app&lt;/code&gt; in your current project, as shown in the figure below:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyq0g3wgqmzkunvlntbjs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyq0g3wgqmzkunvlntbjs.png" alt="Image description" width="402" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Configuration&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Find the &lt;code&gt;settings.gradle&lt;/code&gt; (or &lt;code&gt;settings.gradle.kts&lt;/code&gt;) file in the project root directory and add the following code to import the &lt;code&gt;aiconversationkit&lt;/code&gt; component into your current project.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="n"&gt;include&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;aiconversationkit&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Find the &lt;code&gt;build.gradle&lt;/code&gt; (or &lt;code&gt;build.gradle.kts&lt;/code&gt;) file under the &lt;code&gt;app&lt;/code&gt; directory and add the following code so that the &lt;code&gt;app&lt;/code&gt; module depends on the newly added &lt;code&gt;aiconversationkit&lt;/code&gt; component.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="n"&gt;api&lt;/span&gt; &lt;span class="nf"&gt;project&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;aiconversationkit&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since the SDK uses Java's reflection features, some of its classes must be excluded from obfuscation. Add the following code to your &lt;code&gt;proguard-rules.pro&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="p"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="err"&gt;com.tencent.** { *; }
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Find the &lt;code&gt;AndroidManifest.xml&lt;/code&gt; file under the &lt;code&gt;app&lt;/code&gt; directory and add &lt;code&gt;tools:replace="android:allowBackup"&lt;/code&gt; to the &lt;code&gt;application&lt;/code&gt; node. This overrides the component's settings with your own.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;application&lt;/span&gt;
    &lt;span class="n"&gt;android&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;".DemoApplication"&lt;/span&gt;
    &lt;span class="n"&gt;android&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;allowBackup&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"false"&lt;/span&gt;
    &lt;span class="n"&gt;android&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;icon&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"@drawable/app_ic_launcher"&lt;/span&gt;
    &lt;span class="n"&gt;android&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;label&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"@string/app_name"&lt;/span&gt;
    &lt;span class="n"&gt;android&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;largeHeap&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"true"&lt;/span&gt;
    &lt;span class="n"&gt;android&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;theme&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"@style/AppTheme"&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"android:allowBackup"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Step 4: Sign In&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Add the following code to your project to log in to the component through the relevant &lt;code&gt;TUILogin&lt;/code&gt; APIs. &lt;strong&gt;This step is critical&lt;/strong&gt;: the features of &lt;code&gt;AIConversationKit&lt;/code&gt; work only after a successful login, so carefully check that the relevant parameters are configured correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;userId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"denny"&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Please replace with your UserID&lt;/span&gt;
&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;sdkAppId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1400000001&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Please replace with the sdkAppId obtained in step 1&lt;/span&gt;
&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;sdkSecretKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"xxxx"&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Please replace with the sdkSecretKey obtained in step 1&lt;/span&gt;
&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;userSig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GenerateTestUserSig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;genTestUserSig&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sdkAppId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sdkSecretKey&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="nc"&gt;TUILogin&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;login&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sdkAppId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;userSig&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TUICallback&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nd"&gt;@Override&lt;/span&gt; &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;onSuccess&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt;

    &lt;span class="nd"&gt;@Override&lt;/span&gt; &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;onError&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;errorCode&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;errorMessage&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt;
&lt;span class="o"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
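&lt;p&gt;The &lt;code&gt;GenerateTestUserSig&lt;/code&gt; helper above embeds &lt;code&gt;sdkSecretKey&lt;/code&gt; in the client, which is only suitable for debugging. In production, the UserSig should come from your own server. Below is a minimal, hypothetical sketch of fetching it over HTTP; the &lt;code&gt;/usersig&lt;/code&gt; endpoint and plain-text response are assumptions of this sketch, not part of the SDK:&lt;/p&gt;

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

class UserSigClient {
    // Fetch a UserSig from your own auth server so sdkSecretKey never ships
    // in the app. The "/usersig" path and plain-text response body are
    // assumptions for this sketch; adapt them to your backend's API.
    static String fetchUserSig(String baseUrl, String userId) throws IOException {
        String query = URLEncoder.encode(userId, StandardCharsets.UTF_8);
        HttpURLConnection conn =
                (HttpURLConnection) new URL(baseUrl + "/usersig?userId=" + query).openConnection();
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8).trim();
        }
    }
}
```

&lt;p&gt;On Android you would call this off the main thread and pass the result to &lt;code&gt;TUILogin.login&lt;/code&gt; in place of the locally generated signature.&lt;/p&gt;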



&lt;h3&gt;
  
  
  &lt;strong&gt;Step 5: Start Your First AI Conversation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;After &lt;a href="https://trtc.io/document/69004?product=rtcengine&amp;amp;menulabel=serverfeaturesapis#07002a30-aca6-4e6e-991f-0d8b65e315a6" rel="noopener noreferrer"&gt;TUILogin.login&lt;/a&gt; succeeds, use the following code to start the conversational AI service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nc"&gt;AIConversationDefine&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;StartAIConversationParams&lt;/span&gt; &lt;span class="n"&gt;startParams&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AIConversationDefine&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;StartAIConversationParams&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;sdkAppId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1400000001&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;      &lt;span class="c1"&gt;// 1,Replace your sdkAppId&lt;/span&gt;
&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;sdkSecretKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"xxxx"&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;   &lt;span class="c1"&gt;// 2,Replace your sdkSecretKey&lt;/span&gt;
&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;aiRobotId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"robot_"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nc"&gt;TUILogin&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getUserId&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;aiRobotSig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GenerateTestUserSig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;genTestUserSig&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sdkAppId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;aiRobotId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sdkSecretKey&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;startParams&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;agentConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AIConversationDefine&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;AgentConfig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;generateDefaultConfig&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;aiRobotId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;aiRobotSig&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;startParams&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;secretId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"xxx"&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;// 3,Replace your secretId&lt;/span&gt;
&lt;span class="n"&gt;startParams&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;secretKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"xxx"&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// 4,Replace your secretKey&lt;/span&gt;

&lt;span class="c1"&gt;// 5,Replace your  llmConfig&lt;/span&gt;
&lt;span class="n"&gt;startParams&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;llmConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"{\"LLMType\":\"openai\",\"Model\":\"hunyuan-turbo-latest\",\"SystemPrompt\":\"You are a private assistant\",\"APIUrl\":\"https:xxx\",\"APIKey\":\"xxx\",\"History\":5,\"Streaming\":true}"&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; 
&lt;span class="c1"&gt;// 6,Replace your  ttsConfig&lt;/span&gt;
&lt;span class="n"&gt;startParams&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ttsConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"{\"TTSType\":\"tencent\",\"AppId\":\"xxx\",\"SecretId\":\"xxx\",\"SecretKey\":\"xxx\",\"VoiceType\":\"502001\",\"Speed\":1.25,\"Volume\":5,\"PrimaryLanguage\":1,\"FastVoiceType\":\"\"}"&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="nc"&gt;Intent&lt;/span&gt; &lt;span class="n"&gt;intent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Intent&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;AIConversationActivity&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;intent&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;putExtra&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="no"&gt;KEY_START_AI_CONVERSATION&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;startParams&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;startActivity&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;intent&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Use the data obtained in &lt;a href="https://trtc.io/document/69004?product=rtcengine&amp;amp;menulabel=serverfeaturesapis#07002a30-aca6-4e6e-991f-0d8b65e315a6" rel="noopener noreferrer"&gt;step four&lt;/a&gt; for sdkAppId and sdkSecretKey.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the application details page, select &lt;strong&gt;RTC-Engine &amp;gt; Conversational AI&lt;/strong&gt;. Following &lt;a href="https://trtc.io/document/68137?product=rtcengine&amp;amp;menulabel=serverfeaturesapis" rel="noopener noreferrer"&gt;No-Code Quick Integration Of Conversational AI Feature&lt;/a&gt;, configure the conversational AI service parameters, including the basic configuration, STT, LLM, and TTS. Then click &lt;strong&gt;Quick Integration&lt;/strong&gt; in the lower-right corner, switch to Android, and obtain the SecretId, SecretKey, and Config parameters.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7erlqq5qolccwh2065d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7erlqq5qolccwh2065d.png" alt="Image description" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the SecretId and SecretKey of TencentCloud API to &lt;code&gt;startParams.secretId&lt;/code&gt; and &lt;code&gt;startParams.secretKey&lt;/code&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Copy the Config info to a JSON parsing tool, such as &lt;a href="https://www.json.cn/" rel="noopener noreferrer"&gt;JsonUtil&lt;/a&gt;. Copy the string value corresponding to LLMConfig to &lt;code&gt;startParams.llmConfig&lt;/code&gt;, and copy the string value corresponding to TTSConfig to &lt;code&gt;startParams.ttsConfig&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
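&lt;p&gt;As a minimal sketch of what the escaped Config string looks like once assembled, you could also build the &lt;code&gt;llmConfig&lt;/code&gt; string programmatically instead of hand-escaping every quote. All values below are placeholders, not real credentials; substitute the ones from your console.&lt;/p&gt;

```java
public class LlmConfigBuilder {
    // Builds the JSON string expected by startParams.llmConfig.
    // Every argument is a placeholder value to be replaced with console data.
    public static String buildLlmConfig(String model, String systemPrompt,
                                        String apiUrl, String apiKey,
                                        int history, boolean streaming) {
        return "{"
                + "\"LLMType\":\"openai\","
                + "\"Model\":\"" + model + "\","
                + "\"SystemPrompt\":\"" + systemPrompt + "\","
                + "\"APIUrl\":\"" + apiUrl + "\","
                + "\"APIKey\":\"" + apiKey + "\","
                + "\"History\":" + history + ","
                + "\"Streaming\":" + streaming
                + "}";
    }

    public static void main(String[] args) {
        // Placeholder values only; the resulting string goes into startParams.llmConfig.
        System.out.println(buildLlmConfig("hunyuan-turbo-latest",
                "You are a private assistant", "https://example.invalid", "xxx", 5, true));
    }
}
```

&lt;p&gt;Building the string this way avoids the most common mistake at this step: a missing backslash in the hand-escaped JSON.&lt;/p&gt;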

&lt;p&gt;As AI algorithms continue to advance and hardware technology keeps evolving, AI pets will further enhance social companionship, emotional support, and household entertainment ecosystems. Meanwhile, Tencent RTC’s Conversational AI solution will serve as a crucial driving force behind the transition “from loneliness to companionship,” enabling everyone to enjoy a warmer and more intelligent AI partner.&lt;/p&gt;

&lt;p&gt;My name is Susie. I am a writer and Media Service Product Manager. I work with startups across the globe to build real-time communication solutions using SDKs and APIs.&lt;/p&gt;

&lt;p&gt;If you want to build face filters or other special effects into your app, feel free to &lt;a href="https://sc-rp.tencentcloud.com:8106/t/RA" rel="noopener noreferrer"&gt;&lt;strong&gt;contact us&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>How to Add a Host PK Feature in a Live Streaming App | Android Tutorial</title>
      <dc:creator>LunarDrift</dc:creator>
      <pubDate>Thu, 22 May 2025 14:06:01 +0000</pubDate>
      <link>https://dev.to/susiewang/how-to-add-a-host-pk-feature-in-a-live-streaming-app-android-tutorial-4m43</link>
      <guid>https://dev.to/susiewang/how-to-add-a-host-pk-feature-in-a-live-streaming-app-android-tutorial-4m43</guid>
      <description>&lt;p&gt;Many popular live streaming platforms include a “PK” feature (short for “Player Kill” or “versus battle”), where hosts compete to receive gifts from viewers, and the host who gets the most gifts wins. In some cases, a last-second gift from a fan can overturn what seemed like a sure result, providing a thrilling experience for the audience. For the streaming platform, the PK feature typically grants hosts advanced permissions and is a key driver of engagement.&lt;/p&gt;

&lt;p&gt;This article will guide you through integrating &lt;a href="https://trtc.io/document/67283?platform=android&amp;amp;product=live&amp;amp;menulabel=uikit" rel="noopener noreferrer"&gt;&lt;strong&gt;a PK feature&lt;/strong&gt;&lt;/a&gt; into an Android live streaming app. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsvzbrp68u4j1sbyh6hm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsvzbrp68u4j1sbyh6hm.png" alt="Image description" width="800" height="1732"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We’ll build upon an existing live streaming setup using Tencent RTC’s &lt;a href="https://trtc.io/document/60037?platform=android&amp;amp;product=live&amp;amp;menulabel=uikit" rel="noopener noreferrer"&gt;&lt;strong&gt;TUILiveKit&lt;/strong&gt;&lt;/a&gt; component.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Custom Functionality&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Custom Battle Waiting Countdown Style&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To customize the battle waiting countdown style, modify the files at the following path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// File location: tuilivekit/src/main/java/com/trtc/uikit/livekit/view/liveroom/view/anchor/component/livestreaming/battle/&lt;/span&gt;

&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nc"&gt;BattleCountdownBackView&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;java&lt;/span&gt;    &lt;span class="c1"&gt;// Battle waiting countdown background style&lt;/span&gt;
&lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="nc"&gt;BattleCountdownView&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;java&lt;/span&gt;        &lt;span class="c1"&gt;// Battle waiting countdown foreground style&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Custom Dual-Battle Score Style&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To customize the dual-battle (1v1) score style, modify the following file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// File location: tuilivekit/src/main/java/com/trtc/uikit/livekit/view/liveroom/view/common/battle/SingleBattleScoreView.java&lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SingleBattleScoreView&lt;/span&gt; &lt;span class="n"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;FrameLayout&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Custom Multi-Battle Score Style&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To customize the multi-battle score style, modify the following file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// File location: tuilivekit/src/main/java/com/trtc/uikit/livekit/view/liveroom/view/common/battle/BattleMemberInfoView.java&lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BattleMemberInfoView&lt;/span&gt; &lt;span class="n"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;FrameLayout&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Custom Battle Result Style&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To customize the battle result display style, modify the following file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// File location: tuilivekit/src/main/java/com/trtc/uikit/livekit/view/liveroom/view/common/battle/BattleInfoView.java&lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BattleInfoView&lt;/span&gt; &lt;span class="n"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;BasicView&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;showBattleResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;int&lt;/span&gt; &lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Battle Result Display&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Key code&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Anchor Battle&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Tencent-RTC/TUILiveKit" rel="noopener noreferrer"&gt;&lt;strong&gt;TUILiveKit&lt;/strong&gt;&lt;/a&gt; Anchor battle feature is mainly based on &lt;code&gt;BattleService&lt;/code&gt;. You can obtain the battle management object through &lt;code&gt;store.serviceCenter.battleService&lt;/code&gt; and call the relevant battle API functions to implement the battle feature. For example, in the interaction between Anchor A and B, refer to the diagram below for the specific interaction sequence.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z4y2lxwyxcbqz4yfyam.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z4y2lxwyxcbqz4yfyam.png" alt="Image description" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anchor A initiates Battle&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Anchor A initiates a battle by calling &lt;code&gt;requestBattle&lt;/code&gt;: the &lt;code&gt;config&lt;/code&gt; parameter carries the maximum battle duration and whether the invitee must reply with accept/reject, the &lt;code&gt;roomIdList&lt;/code&gt; parameter carries anchor B's room ID, and the &lt;code&gt;timeout&lt;/code&gt; parameter carries the battle invitation wait duration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// File location: tuilivekit/src/main/java/com/trtc/uikit/livekit/manager/controller/BattleController.java &lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;requestBattle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;roomIdList&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;int&lt;/span&gt; &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;TUILiveBattleManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BattleConfig&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TUILiveBattleManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BattleConfig&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;duration&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BattleState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BATTLE_DURATION&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;needResponse&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mBattleState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mNeedResponse&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;extensionInfo&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;mLiveService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;requestBattle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;roomIdList&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TUILiveBattleManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BattleRequestCallback&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nd"&gt;@Override&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;onSuccess&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TUILiveBattleManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BattleInfo&lt;/span&gt; &lt;span class="n"&gt;battleInfo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                              &lt;span class="nc"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;TUILiveBattleManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BattleCode&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;map&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;mBattleState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mBattleId&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;battleInfo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;battleId&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="n"&gt;mBattleState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mBattleConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;copy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;BattleState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BattleUser&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;sendRequests&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mBattleState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mSentBattleRequests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
            &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Entry&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;TUILiveBattleManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BattleCode&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;entry&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;map&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;entrySet&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getKey&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
                &lt;span class="nc"&gt;TUILiveBattleManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BattleCode&lt;/span&gt; &lt;span class="n"&gt;code&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getValue&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="nc"&gt;TUILiveBattleManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BattleCode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;SUCCESS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ConnectionState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ConnectionUser&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;mConnectionState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;connectedUsers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TextUtils&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;equals&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                            &lt;span class="n"&gt;sendRequests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;BattleState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BattleUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
                            &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                        &lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="nf"&gt;notifyToast&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;convertCodeToString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getValue&lt;/span&gt;&lt;span class="p"&gt;()));&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="n"&gt;mBattleState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mSentBattleRequests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sendRequests&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="nd"&gt;@Override&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;onError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TUICommonDefine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Error&lt;/span&gt; &lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;ErrorHandler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;mBattleState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mSentBattleRequests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clear&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Anchor A can receive the request acceptance callback via &lt;code&gt;onBattleRequestAccept&lt;/code&gt;.&lt;/p&gt;
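&lt;p&gt;The acceptance handling can be sketched as follows. This is a minimal stand-alone sketch, not the SDK code: &lt;code&gt;BattleUser&lt;/code&gt; is stubbed locally, and the handler shape is an assumption modeled on the observer pattern above; check the TUILiveBattleManager observer for the exact &lt;code&gt;onBattleRequestAccept&lt;/code&gt; signature.&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.List;

public class BattleAcceptSketch {
    // Minimal stand-in for the SDK type; the real one comes from TUILiveBattleManager.
    static class BattleUser {
        final String userId;
        BattleUser(String userId) { this.userId = userId; }
    }

    // Local battle state, mirroring the role of mBattleState in BattleController.
    static class BattleState {
        final List<BattleUser> sentBattleRequests = new ArrayList<>();
        final List<BattleUser> battledUsers = new ArrayList<>();
    }

    final BattleState state = new BattleState();

    // Hypothetical handler: when the invitee accepts, move them from the
    // pending-request list into the active battle user list.
    void onBattleRequestAccept(BattleUser invitee) {
        state.sentBattleRequests.removeIf(u -> u.userId.equals(invitee.userId));
        state.battledUsers.add(invitee);
    }
}
```

&lt;p&gt;The point of the sketch is the bookkeeping: a request that is accepted should leave the sent-request list, otherwise the UI keeps showing a pending invitation.&lt;/p&gt;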

&lt;p&gt;&lt;strong&gt;Anchor receives a battle request&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Anchor B receives the battle request callback via &lt;code&gt;onBattleRequestReceived&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// File location: tuilivekit/src/main/java/com/trtc/uikit/livekit/manager/observer/LiveBattleManagerObserver.java&lt;/span&gt;

&lt;span class="nd"&gt;@Override&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;onBattleRequestReceived&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;BattleInfo&lt;/span&gt; &lt;span class="n"&gt;battleInfo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;BattleUser&lt;/span&gt; &lt;span class="n"&gt;inviter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;BattleUser&lt;/span&gt; &lt;span class="n"&gt;invitee&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;LiveKitLog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mTag&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="s"&gt;" onBattleRequestReceived:[battleInfo:"&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Gson&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toJson&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;battleInfo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="s"&gt;", inviter:"&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Gson&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toJson&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inviter&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="s"&gt;", invitee:"&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Gson&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toJson&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;invitee&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"]"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;mBattleController&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onBattleRequestReceived&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;battleInfo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;inviter&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Anchor B accepts the battle request by calling &lt;code&gt;acceptBattle&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// File location: tuilivekit/src/main/java/com/trtc/uikit/livekit/manager/controller/BattleController.java &lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;accept&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="n"&gt;mLiveService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;acceptBattle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mBattleState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mBattleId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TUIRoomDefine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ActionCallback&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="nd"&gt;@Override&lt;/span&gt;
       &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;onSuccess&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

       &lt;span class="p"&gt;}&lt;/span&gt;

       &lt;span class="nd"&gt;@Override&lt;/span&gt;
       &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;onError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TUICommonDefine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Error&lt;/span&gt; &lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

       &lt;span class="p"&gt;}&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Anchors A, B, and the audience in the room can receive the battle start callback through &lt;code&gt;onBattleStarted&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// File location: tuilivekit/src/main/java/com/trtc/uikit/livekit/manager/observer/LiveBattleManagerObserver.java&lt;/span&gt;

&lt;span class="nd"&gt;@Override&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;onBattleStarted&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;BattleInfo&lt;/span&gt; &lt;span class="n"&gt;battleInfo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;LiveKitLog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mTag&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="s"&gt;" onBattleStarted:[battleInfo:"&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Gson&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toJson&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;battleInfo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"]"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;mBattleController&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onBattleStarted&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;battleInfo&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The anchor exits the battle&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For example, when anchor B exits the battle, refer to the diagram below for the interaction sequence.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51qhjybqs66pya2wku9l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51qhjybqs66pya2wku9l.png" alt="Image description" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Anchor B calls &lt;code&gt;exitBattle&lt;/code&gt; to exit the battle.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// File location: tuilivekit/src/main/java/com/trtc/uikit/livekit/manager/controller/BattleController.java &lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;exitBattle&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;mLiveService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exitBattle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mBattleState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mBattleId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TUIRoomDefine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ActionCallback&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nd"&gt;@Override&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;onSuccess&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;mBattleState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mSentBattleRequests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clear&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
            &lt;span class="n"&gt;mBattleState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mBattledUsers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clear&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
            &lt;span class="nf"&gt;removeBattleRequestReceived&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="nd"&gt;@Override&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;onError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TUICommonDefine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Error&lt;/span&gt; &lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Anchors A, B, and the room audience receive the &lt;code&gt;onBattleEnded&lt;/code&gt; callback and the battle end notification.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// File location: tuilivekit/src/main/java/com/trtc/uikit/livekit/manager/observer/LiveBattleManagerObserver.java&lt;/span&gt;

&lt;span class="nd"&gt;@Override&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;onBattleEnded&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;BattleInfo&lt;/span&gt; &lt;span class="n"&gt;battleInfo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;BattleStoppedReason&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;LiveKitLog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mTag&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="s"&gt;" onBattleEnded:[battleInfo:"&lt;/span&gt;
        &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Gson&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toJson&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;battleInfo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="s"&gt;", reason:"&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"]"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;mBattleController&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onBattleEnded&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;battleInfo&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can add a host PK feature to your live streaming app! For more details on additional features, please &lt;a href="https://trtc.io/document/67283?platform=android&amp;amp;product=live&amp;amp;menulabel=uikit" rel="noopener noreferrer"&gt;&lt;strong&gt;refer to the documentation&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;My name is Susie. I am a writer and Media Service Product Manager. I work with startups across the globe to build real-time communication solutions using SDKs and APIs.&lt;/p&gt;

&lt;p&gt;If you want to build face filters or other special effects into your app, feel free to &lt;a href="https://sc-rp.tencentcloud.com:8106/t/RA" rel="noopener noreferrer"&gt;&lt;strong&gt;contact us&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>From AI NPC to AI Game Companion</title>
      <dc:creator>LunarDrift</dc:creator>
      <pubDate>Mon, 19 May 2025 14:06:58 +0000</pubDate>
      <link>https://dev.to/susiewang/from-ai-npc-to-ai-game-companion-7l3</link>
      <guid>https://dev.to/susiewang/from-ai-npc-to-ai-game-companion-7l3</guid>
      <description>&lt;p&gt;In recent years, the explosive advancement of AI technology has profoundly influenced numerous industries. For the gaming sector, AI not only boosts production efficiency—such as in graphics generation, narrative design, and level creation—but also drives unprecedented innovation in player interaction. From AI NPCs that can “chat” with players, to AI teammates that demonstrate human-like cooperation, and AI assistants capable of recognizing player emotions and providing real-time responses, these emerging gameplay elements redefine both the nature and boundaries of games. This article explores the convergence of AI and gaming, highlighting innovative examples from popular and conceptual titles and offering insights into future trends.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Transformation of AI NPCs—From “Tool-Like” to “Native Resident”
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Limitations of Traditional NPCs: Scripted Interactions Everywhere
&lt;/h3&gt;

&lt;p&gt;Previously, NPCs were often limited to short, fixed lines of dialogue and repetitive quest prompts; unsurprisingly, after multiple encounters, players would find them dull.&lt;/p&gt;

&lt;p&gt;The rigidity of NPC behavior stems from a logic system heavily reliant on pre-scripted instructions or behavior trees, without real-time perception and analysis of player actions or communication.&lt;/p&gt;

&lt;h3&gt;
  
  
  A New Generation of AI NPCs: Genuine Emotions and Instant Conversations
&lt;/h3&gt;

&lt;p&gt;Through natural language processing and large language models, NPCs can now “understand” the user’s intent and emotional tone within a conversation—even “remembering” past choices made by the player—then respond accordingly.&lt;/p&gt;

&lt;p&gt;Such NPCs no longer provide a simple binary “yes/no” reply; rather, they converse in more descriptive, personality-driven language and tone.&lt;/p&gt;

&lt;p&gt;Taking Whispers from the Star as an example, players communicate with the protagonist, Stella, via voice or text, guiding her decision-making in a space-survival setting. Stella’s emotions and personality traits evolve based on these interactions, as if truly journeying with a living, breathing character. The game narrative moves beyond superficial “command input” toward “narrative co-creation.”&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/l1IyKiFHRUo"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  The Value AI NPCs Bring
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Significantly Heightened Replayability:&lt;/strong&gt; Every conversation could lead to new branches or endings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More Nuanced Character Portrayal:&lt;/strong&gt; AI NPCs can feel anger or joy and exhibit rational thought, offering players a sense of “companionship” and “immersion.”&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Breaking Conventional World-Building Boundaries:&lt;/strong&gt; By engaging in free-form dialogue, players can create experiences beyond a fixed script, greatly enhancing the game’s reputation and user loyalty.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Evolution of AI Assistants—From “Manual” to “External Brain”
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Complex Game Progression and New Player Support
&lt;/h3&gt;

&lt;p&gt;In popular competitive genres such as MOBA or FPS, game content has grown considerably: a typical MOBA might feature over a hundred heroes with thousands of gear options; an FPS might have a dozen maps and multiple weapon combinations.&lt;/p&gt;

&lt;p&gt;If new players rely solely on text-based guides or rigid tutorials, they typically experience limited immersion and a steep learning curve. An AI assistant can provide on-demand, voice-based instruction anytime, enabling players to “learn while playing.”&lt;/p&gt;

&lt;p&gt;For instance, Honor of Kings introduced an AI Coaching module that offers real-time hero-specific advice on laning, gear/skill combos, and teamfight tactics. A multi-language system and hero-voice TTS are also included, providing an “immersive” tutorial experience to help newcomers quickly adapt.&lt;/p&gt;

&lt;h3&gt;
  
  
  Supporting Global Operations with Multilingual Capabilities
&lt;/h3&gt;

&lt;p&gt;By combining large language models with TTS technology, an &lt;a href="https://trtc.io/solutions/conversational-ai" rel="noopener noreferrer"&gt;&lt;strong&gt;AI assistant&lt;/strong&gt;&lt;/a&gt; can seamlessly support multiple languages, enabling game developers to deliver unique tutorials or customer service options in various regional markets.&lt;/p&gt;

&lt;p&gt;Players can effortlessly switch between different languages while utilizing an intelligent translation feature that facilitates cross-server matchmaking and cross-lingual communication. For game developers, this presents vast potential for user expansion and market growth.&lt;/p&gt;

&lt;h3&gt;
  
  
  More Empathetic, Emotionally Engaging Support
&lt;/h3&gt;

&lt;p&gt;An AI assistant can &lt;a href="https://trtc.io/solutions/game-voice-chat" rel="noopener noreferrer"&gt;&lt;strong&gt;offer encouragement&lt;/strong&gt;&lt;/a&gt; when sensing a player’s disappointment or frustration, or it can propose aggressive strategies when the player has an advantageous lead.&lt;/p&gt;

&lt;p&gt;Such “emotional intervention” not only strengthens player retention but also sparks greater discussion and sharing within the community.&lt;/p&gt;




&lt;h2&gt;
  
  
  AI Teammates—From “Bot Control” to “Reliable Partner”
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Challenge of Virtual Teammates in Team Games
&lt;/h3&gt;

&lt;p&gt;Many players who go solo or prefer less social interaction can only queue with AI teammates. However, “bot teammates” are often criticized for poor decision-making. With limited flexibility in communication, they mainly rely on simple, fixed commands—insufficient for increasingly complex tactical requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Distinguishing Features of AI Teammates
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Natural Language Comprehension:&lt;/strong&gt; If a player says, “You go to the first floor; I’ll flank from the back door,” AI teammates can factor in the map layout and overall context to either execute the plan or suggest a better approach.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time Coordination and Feedback:&lt;/strong&gt; AI can proactively monitor enemy positions, ally health, and resource distribution, dispensing items or warning players about the shrinking safe zone as needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In &lt;a href="https://www.youtube.com/watch?v=gNZ7fGl5CHc" rel="noopener noreferrer"&gt;&lt;strong&gt;Dark Zone Breakout&lt;/strong&gt;&lt;/a&gt;, for example, the AI companion F.A.C.U.L. parses statements like “Keep them occupied in front, charge after three seconds” and infers urgency and priority from the user’s tone. &lt;a href="https://trtc.io/solutions/conversational-ai" rel="noopener noreferrer"&gt;&lt;strong&gt;Subsequent exchanges&lt;/strong&gt;&lt;/a&gt; may feature the AI proactively suggesting defensive measures if the user repeatedly complains about “too many enemies.” PlayerUnknown’s Battlegrounds (Peacekeeper Elite) offers a similar experience: players can pick AI teammates with varying personalities, each capable of voice-channel chat and even providing ammo or gear. The newbie-friendly mode also reminds players to “mind the safe zone” or “watch for nearby footsteps,” improving overall experience and enhancing that sense of “partnership” with the AI.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/gNZ7fGl5CHc"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Reshaping the Social Ecosystem
&lt;/h3&gt;

&lt;p&gt;Players no longer face “lonely solo queues” if they lack human teammates, as AI partners can deliver near-human cooperative enjoyment.&lt;/p&gt;

&lt;p&gt;For those who prefer solitary immersion, AI teammates offer engaging companionship without the pressure of social interaction.&lt;/p&gt;




&lt;h2&gt;
  
  
  Future Outlook—From Immersion to a Surge in Creativity
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Opportunities Sparked by AI × Games
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A Creative Explosion:&lt;/strong&gt; Beyond simply &lt;a href="https://trtc.io/demo/homepage/#/detail?scene=ai" rel="noopener noreferrer"&gt;&lt;strong&gt;chatting with AI characters&lt;/strong&gt;&lt;/a&gt;, players could use AI to customize characters, personalities, and narratives, essentially turning the game into a massive platform for user-generated content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Personalized Distribution:&lt;/strong&gt; AI can adjust difficulty, style, or storyline according to each player’s preferences, achieving a truly “tailor-made” experience that enhances user engagement.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Industry Challenges
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Computational Power and Cost:&lt;/strong&gt; Large-scale real-time dialog and multi-language audio processing require significant cloud resources and a robust low-latency network.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content Regulation and Ethics:&lt;/strong&gt; Developers must ensure that AI-generated dialog, narratives, or feedback do not breach sensitive content guidelines, requiring robust moderation systems and user protection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Balancing Game Fun:&lt;/strong&gt; Overpowered or error-prone AI can disrupt game balance, necessitating continuous fine-tuning by operators.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Tencent RTC GME AI Real-Time Conversational Solution
&lt;/h2&gt;

&lt;p&gt;Tencent RTC, leveraging Tencent’s extensive experience in game development and operation, previously released a one-stop voice solution specialized for gaming—&lt;a href="https://trtc.io/products/gme" rel="noopener noreferrer"&gt;&lt;strong&gt;Game Multimedia Engine (GME)&lt;/strong&gt;&lt;/a&gt;. With breakthroughs in AI, Tencent Cloud TRTC integrates GME with &lt;a href="https://trtc.io/solutions/conversational-ai" rel="noopener noreferrer"&gt;&lt;strong&gt;AI real-time conversation&lt;/strong&gt;&lt;/a&gt; capabilities, offering game developers a deeply optimized, low-latency pipeline. It also incorporates Tencent Cloud ASR, combined with LLM/TTS solutions for deeper optimization. This approach lowers total AI conversational latency to around 1000ms—comparable to human reaction time—and adds pioneering features such as voiceprint recognition, semantic segmentation, background audio, and bridging phrases for more natural, lifelike dialogs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgl5nfy28d0ng4ge8cb9k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgl5nfy28d0ng4ge8cb9k.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building on high-efficiency acquisition, processing, and transmission of audio-video data, the GME AI real-time conversational solution adds key features crucial to game scenarios, such as intelligent noise suppression, interruption handling, and context management. With minimal development overhead, one can quickly achieve fully open-mic AI voice interaction—a straightforward pathway for game developers to integrate AI NPC, AI game assistants, AI teammates, and other advanced AI-driven features, offering a new dimension of in-game interaction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Global AI Conversational Latency Under 1000ms&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The GME &lt;a href="https://trtc.io/solutions/conversational-ai" rel="noopener noreferrer"&gt;&lt;strong&gt;AI real-time conversation&lt;/strong&gt;&lt;/a&gt; solution applies extensive optimizations specifically for social and gaming environments, achieving industry-leading performance in bitrate, latency, and resource consumption. By using streaming-based segmentation and connection pooling, &lt;strong&gt;global end-to-end audio-video latency is kept under 300ms&lt;/strong&gt;, and total AI conversational latency stays below 1000ms. With coverage across six continents, 3200+ acceleration nodes, and robust multi-service redundancy, global operations remain consistently high in quality and stability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fully Open Microphone, AI Dialogs Rivaling Human-Like Communication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Proprietary echo cancellation and noise suppression algorithms significantly reduce typical background noises such as keyboard or mouse clicks in gaming. A new noise reduction engine &lt;a href="https://trtc.io/demo/homepage/#/detail?scene=trtc&amp;amp;active=ai" rel="noopener noreferrer"&gt;&lt;strong&gt;powered by AI&lt;/strong&gt;&lt;/a&gt; enhances ASR accuracy in real time. Addressing casual speech and specialized in-game terminology, the solution uses labeled audio data from real gaming scenarios for model fine-tuning, plus user-defined hotword libraries. &lt;strong&gt;It supports precise ASR across 130 international languages and 23 dialects, including English, Spanish, Japanese, Korean, and Chinese.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The system also &lt;strong&gt;builds in double-talk interrupt and context management&lt;/strong&gt; among its core &lt;a href="https://trtc.io/demo/homepage/#/detail?scene=ai" rel="noopener noreferrer"&gt;AI real-time dialog&lt;/a&gt; capabilities. Players can keep their microphones always open to chat freely with AI. Without specifying the end of a statement, the AI can still accurately interpret user intentions—and if the AI is speaking, the user can interrupt at any time to shift topics or issue new commands. This offers conversation quality comparable to human interaction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3D Spatial Audio, Opening New Frontiers for In-Game AI Voice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The GME AI real-time conversation solution &lt;strong&gt;uniquely supports &lt;a href="https://trtc.io/demo/homepage/#/detail?scene=trtc" rel="noopener noreferrer"&gt;3D audio technology&lt;/a&gt;&lt;/strong&gt;, restoring spatial audio details through range attenuation, human voice blurring, atmospheric attenuation, etc., allowing players to locate teammates by voice alone and delivering an immersive audio experience. It also &lt;strong&gt;integrates seamlessly with the Wwise audio engine&lt;/strong&gt;, innovatively preventing background music from cutting out during open-mic sessions while enabling more varied audio interplay. Additionally, GME’s AI solution supports &lt;a href="https://trtc.io/demo/homepage/#/detail?scene=trtc&amp;amp;active=voice" rel="noopener noreferrer"&gt;&lt;strong&gt;voice cloning&lt;/strong&gt;&lt;/a&gt;, allowing voice timbre replication and customizable volume and speech speed, infusing each game character with a unique vocal identity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-Platform Compatibility, Low-Cost Integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GME AI real-time conversation merges and optimizes the full AI dialog pipeline in a single solution, enabling developers to embed AI conversational features into various games without dealing with complex technical details—significantly shortening development cycles.&lt;/p&gt;

&lt;p&gt;The GME solution supports mainstream consoles and deeply integrates with UE, Unity, and Cocos engines across iOS, Android, Windows, macOS, Web, and Flutter—covering more than 20,000 device models. It’s highly open, letting enterprises bring their own large language models (LLMs) and TTS services. By configuring relevant account credentials, any third-party LLM or TTS can be seamlessly integrated within the service backend.&lt;/p&gt;

&lt;p&gt;Tencent RTC also provides a &lt;a href="https://console.trtc.io/ai-conversation" rel="noopener noreferrer"&gt;&lt;strong&gt;no-code quick testing environment for AI real-time conversation&lt;/strong&gt;&lt;/a&gt;, enabling users to rapidly configure and test AI conversational functionality with zero coding skill required. For any questions during the AI conversation testing process or to learn more about GME’s AI real-time conversation solution, feel free to &lt;a href="https://sc-rp.tencentcloud.com:8106/t/RA" rel="noopener noreferrer"&gt;&lt;strong&gt;contact us&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When AI ceases to be merely a “behind-the-scenes tool” and seamlessly pervades NPCs, teammates, and assistants across the game’s ecosystem, the barrier between player and virtual world diminishes further. Increasing numbers of studios are now experimenting with AI-powered real-time voice, natural language processing, and large language models across different products, ushering “digital life” into the player’s world.&lt;/p&gt;

&lt;p&gt;For a gaming industry continually striving for innovation and immersion, the development of AI unlocks unprecedented opportunities—liberating creative capacity, enhancing narrative approaches, extending product life cycles, and forging deeper social and emotional resonance. What other surprises might arise from the fusion of AI × Games in the future? Perhaps it’s time for you to jump in and co-create that answer yourself.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>gamedev</category>
    </item>
    <item>
      <title>RTC Audio-Video Transmission Weak Network Countermeasure Technologies</title>
      <dc:creator>LunarDrift</dc:creator>
      <pubDate>Mon, 19 May 2025 07:17:06 +0000</pubDate>
      <link>https://dev.to/susiewang/rtc-audio-video-transmission-weak-network-countermeasure-technologies-22i0</link>
      <guid>https://dev.to/susiewang/rtc-audio-video-transmission-weak-network-countermeasure-technologies-22i0</guid>
      <description>&lt;p&gt;This document shares an “Overview of &lt;a href="https://trtc.io/document/59655?product=rtcengine&amp;amp;platform=web" rel="noopener noreferrer"&gt;&lt;strong&gt;Weak Network&lt;/strong&gt;&lt;/a&gt; Countermeasure Technologies for RTC Audio-Video Transmission.”&lt;/p&gt;

&lt;p&gt;With the rapid growth of RTC (Real-Time Communication) audio-video technology, new applications emerge, penetrating various use cases. While cutting-edge technologies are driving substantial growth in online scenarios, user expectations for a better experience—lower latency, higher definition, and smoother playback—are escalating.&lt;/p&gt;

&lt;p&gt;These three user-experience factors correspond to three core RTC metrics: real-time performance, clarity, and smoothness.&lt;/p&gt;

&lt;p&gt;However, it’s often impossible to have it all. In scenarios requiring extremely low latency, video clarity may be sacrificed to ensure minimal delay, whereas scenarios emphasizing high definition might tolerate extra latency to secure higher-quality audio-video data.&lt;/p&gt;

&lt;p&gt;To achieve better performance, we usually seek lower latency, higher clarity, and improved fluency through network transport optimizations. The main adversary here is the weak network, which causes congestion, &lt;a href="https://trtc.io/blog/details/what-is-packet-loss" rel="noopener noreferrer"&gt;&lt;strong&gt;packet loss&lt;/strong&gt;&lt;/a&gt;, and &lt;a href="https://trtc.io/blog/details/Jitter-and-Jitter-Buffer" rel="noopener noreferrer"&gt;&lt;strong&gt;jitter&lt;/strong&gt;&lt;/a&gt;—major culprits behind poor user experience.&lt;/p&gt;

&lt;p&gt;“&lt;a href="https://trtc.io/document/59655?product=rtcengine&amp;amp;platform=web" rel="noopener noreferrer"&gt;&lt;strong&gt;Weak network&lt;/strong&gt;&lt;/a&gt; countermeasure technology” is a broad term referring to strategies for addressing the aforementioned network deterioration and other network-related issues.&lt;/p&gt;




&lt;h2&gt;
  
  
  On the Choice of Transport-Layer Protocol (TCP vs. UDP)
&lt;/h2&gt;

&lt;p&gt;First, a brief introduction to transport-layer protocols:&lt;/p&gt;

&lt;p&gt;In the TCP/IP layered model, the transport layer sits below the application layer and is typically provided by the operating system. It offers two principal protocols: &lt;a href="https://trtc.io/document/35164?product=rtcengine&amp;amp;platform=web" rel="noopener noreferrer"&gt;&lt;strong&gt;TCP and UDP&lt;/strong&gt;&lt;/a&gt;. TCP is a connection-oriented, reliable transport protocol that guarantees data integrity and ordering; UDP is a connectionless, unreliable protocol whose data reliability is left entirely to the application layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In real-time audio-video scenarios, using UDP first has become common practice across the industry&lt;/strong&gt;, primarily because:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://trtc.io/document/35164?product=rtcengine&amp;amp;platform=web" rel="noopener noreferrer"&gt;&lt;strong&gt;TCP&lt;/strong&gt;&lt;/a&gt; was not designed for real-time audio-video applications. Its built-in congestion and error-control mechanisms focus on reliability and throughput, leading to higher latency. Under poor network conditions, latency can increase dramatically. According to ITU Standard G.114, when end-to-end latency exceeds 400ms, user interaction is noticeably impaired.&lt;/li&gt;
&lt;li&gt;TCP’s congestion-control and error-control mechanisms reside in the operating system, which the application layer cannot optimize or adjust per scenario. This severely limits flexibility.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://trtc.io/document/35164?product=rtcengine&amp;amp;platform=web" rel="noopener noreferrer"&gt;&lt;strong&gt;UDP&lt;/strong&gt;&lt;/a&gt; has less overhead than TCP, and transport-control strategies can be implemented entirely at the application layer, providing the flexibility real-time media often requires.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Hence, the weak-network challenges and countermeasures discussed here assume the use of the UDP protocol and, specifically, RTP/RTCP atop UDP, which is widely adopted in audio-video applications.&lt;/p&gt;




&lt;h2&gt;
  
  
  Primary Weak Network Issues and Corresponding Countermeasures
&lt;/h2&gt;

&lt;p&gt;In essence, a “weak network problem” for audio-video transmission refers to adverse network conditions that undermine the user experience, chiefly network congestion, &lt;a href="https://trtc.io/blog/details/what-is-packet-loss" rel="noopener noreferrer"&gt;&lt;strong&gt;packet loss&lt;/strong&gt;&lt;/a&gt;, and &lt;a href="https://trtc.io/blog/details/Jitter-and-Jitter-Buffer" rel="noopener noreferrer"&gt;&lt;strong&gt;jitter&lt;/strong&gt;&lt;/a&gt;. These problems cause stutters and poor real-time performance. Given the complex, heterogeneous nature of network environments, the severity of weak network problems can vary widely. Ensuring smooth communication under such conditions has always been a key focus in RTC.&lt;/p&gt;

&lt;h3&gt;
  
  
  Congestion Issues
&lt;/h3&gt;

&lt;p&gt;When the traffic in the network exceeds the bottleneck capacity, congestion arises.&lt;/p&gt;

&lt;p&gt;Congestion directly causes sudden packet loss or bursts of jitter. If the system fails to detect and respond (e.g., by reducing the send rate) quickly, receivers will see stuttering, high latency, or poor image quality.&lt;/p&gt;

&lt;p&gt;To combat congestion, the primary approach is to design a “congestion-control algorithm” that promptly detects congestion and recovers from it as quickly as possible, thereby minimizing impact on the user experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  1) Requirements for Congestion-Control Algorithms
&lt;/h3&gt;

&lt;p&gt;RFC 8836 comprehensively summarizes the requirements for congestion-control algorithms in interactive real-time audio-video applications. A simplified overview:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Latency:&lt;/strong&gt; The algorithm should keep latency—particularly any latency it introduces—as low as possible while still providing feasible throughput.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Throughput:&lt;/strong&gt; Maximize throughput for the given scenario.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fairness:&lt;/strong&gt; The algorithm should share bandwidth fairly with other real-time flows and TCP flows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Avoid “starvation”:&lt;/strong&gt; Media flows should not starve or be starved by competing TCP flows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Convergence speed:&lt;/strong&gt; The algorithm should reach a stable state as quickly as possible when a media flow begins.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network support:&lt;/strong&gt; The algorithm should not rely on special network features.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stability:&lt;/strong&gt; The algorithm should remain stable even when media flows change, for example, if a temporary interruption occurs in media transmission.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rapid response:&lt;/strong&gt; The algorithm should swiftly adapt to changes in network conditions, such as bottleneck bandwidth or link-latency variations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, congestion control faces two core tasks: (1) fast and accurate detection of network congestion, and (2) appropriate control measures to prevent or mitigate congestion and resume normal states quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Congestion-Detection Algorithms
&lt;/h3&gt;

&lt;p&gt;Congestion-detection algorithms can be divided into two categories based on measured indicators:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Loss-based: Detect congestion by monitoring &lt;a href="https://trtc.io/blog/details/what-is-packet-loss" rel="noopener noreferrer"&gt;&lt;strong&gt;packet-loss&lt;/strong&gt;&lt;/a&gt; events.&lt;/li&gt;
&lt;li&gt;Delay-based: Detect congestion by measuring delay variation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For interactive, real-time audio-video applications, delay-based methods are often preferred, primarily because they detect congestion earlier—before serious packet loss occurs—and thus help avoid the quality issues associated with dropping packets.&lt;/p&gt;

&lt;p&gt;Furthermore, loss-based methods often probe for link capacity by continuously increasing the send rate until packet loss occurs, which can lead to unbounded queueing delays in the network, especially where large buffers exist, sometimes causing multi-second latency spikes.&lt;/p&gt;

&lt;p&gt;However, because delay-based methods react so quickly to latency increases, they can risk “starving” themselves when competing for bandwidth with loss-based flows. Proper strategies are needed to share network resources more fairly.&lt;/p&gt;

&lt;p&gt;Delay-based methods generally use RTT (round-trip time) or OWD (one-way delay) measurements to gauge congestion. RTT measurement is more straightforward, but it includes reverse-path effects that can mask forward-path congestion; OWD measurement avoids that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fti8zhq2fxp3wzfkxajm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fti8zhq2fxp3wzfkxajm5.png" alt="Image description" width="800" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown in the figure, OWD is estimated by observing the differences between send intervals and receive intervals, indicating how queueing delays fluctuate.&lt;/p&gt;
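&lt;p&gt;As a minimal illustration of that idea (hypothetical names and smoothing constant, not the actual WebRTC code), a receiver can compare each pair of inter-packet gaps (the spacing measured at the sender versus the spacing measured at the receiver) and smooth the difference; a persistently positive value suggests queues along the path are growing:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Hypothetical sketch of delay-gradient (OWD trend) estimation.
public class DelayGradientEstimator {
    private double filteredMs = 0.0;           // smoothed queueing-delay trend, in ms
    private static final double ALPHA = 0.9;   // exponential smoothing factor (illustrative)

    // sendDeltaMs / recvDeltaMs: spacing between two packets at sender / receiver
    public double onPacketPair(double sendDeltaMs, double recvDeltaMs) {
        double gradient = recvDeltaMs - sendDeltaMs; // positive while queues build
        filteredMs = ALPHA * filteredMs + (1.0 - ALPHA) * gradient;
        return filteredMs; // compare against a threshold to flag congestion
    }
}
&lt;/code&gt;&lt;/pre&gt;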

&lt;h3&gt;
  
  
  3) Congestion Response Measures
&lt;/h3&gt;

&lt;p&gt;In short, congestion response involves calculating an appropriate send rate based on the current congestion state. In coordination with other send-side weak-network measures, possible rate-throttling methods include adjusting &lt;a href="https://trtc.io/products/call" rel="noopener noreferrer"&gt;&lt;strong&gt;audio-video&lt;/strong&gt;&lt;/a&gt; encoder bitrates, capping ARQ (Automatic Repeat reQuest) retransmission bandwidth, or modifying FEC (Forward Error Correction) redundancy, which we explore later. If the sender is a forwarding node, further strategies such as SVC layering or multi-stream adaptation may also apply.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l6mvkqcbbt49lls3ft3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l6mvkqcbbt49lls3ft3.png" alt="Image description" width="731" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4) A Typical Congestion-Control Framework
&lt;/h3&gt;

&lt;p&gt;Below is the early congestion-control framework once used in Google’s WebRTC:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam3wwe12fzxus2sdbuxx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam3wwe12fzxus2sdbuxx.png" alt="Image description" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It adopts a hybrid sender- and receiver-driven approach. The sender uses a loss-based algorithm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If packet loss &amp;lt; 2%, increase sending bandwidth by 8%.&lt;/li&gt;
&lt;li&gt;If packet loss is between 2% and 10%, keep sending bandwidth unchanged.&lt;/li&gt;
&lt;li&gt;If packet loss &amp;gt; 10%, reduce sending bandwidth to current rate × (1 - 0.5 × loss).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The receiver uses a delay-based algorithm, estimating one-way delay with a Kalman filter, then combining that estimate with actual receive throughput to derive an optimal target bandwidth and feeding it back to the sender via RTCP. The sender then takes the minimum of the loss-based and delay-based estimates as the final target.&lt;/p&gt;
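&lt;p&gt;The sender-side rule above can be sketched as follows (a minimal illustration with hypothetical names, not WebRTC’s actual implementation):&lt;/p&gt;

```kotlin
// Sketch of the sender's loss-based rule described above.
// lossRate is the fraction of packets reported lost over the last RTCP interval.
fun lossBasedTarget(currentBps: Double, lossRate: Double): Double = when {
    lossRate < 0.02  -> currentBps * 1.08                    // loss < 2%: grow by 8%
    lossRate <= 0.10 -> currentBps                           // 2%..10%: hold
    else             -> currentBps * (1.0 - 0.5 * lossRate)  // loss > 10%: back off
}

// The final send target is the minimum of the loss-based and delay-based estimates.
fun finalTarget(lossBasedBps: Double, delayBasedBps: Double): Double =
    minOf(lossBasedBps, delayBasedBps)
```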

&lt;p&gt;Below is WebRTC’s improved congestion-control framework:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72y5ezas9g98mh9gcczn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72y5ezas9g98mh9gcczn.png" alt="Image description" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, the entire congestion-control logic resides at the sender; the receiver merely provides measurement feedback.&lt;/p&gt;

&lt;p&gt;The new framework refines the delay-estimation mechanism by applying linear regression to one-way delay variation. It discerns three trends—delay is rising, stable, or decreasing—and then uses that assessment along with the current send rate to determine the best target bandwidth.&lt;/p&gt;
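&lt;p&gt;A minimal sketch of such a trendline estimator (the threshold value and function names are illustrative):&lt;/p&gt;

```kotlin
// Least-squares slope over accumulated delay samples
// (x = arrival time, y = smoothed one-way delay variation).
fun delaySlope(timesMs: List<Double>, delaysMs: List<Double>): Double {
    val meanX = timesMs.average()
    val meanY = delaysMs.average()
    var num = 0.0; var den = 0.0
    for (i in timesMs.indices) {
        num += (timesMs[i] - meanX) * (delaysMs[i] - meanY)
        den += (timesMs[i] - meanX) * (timesMs[i] - meanX)
    }
    return if (den == 0.0) 0.0 else num / den
}

// A clearly positive slope suggests delay is rising (overuse);
// near zero, stable; clearly negative, decreasing.
fun trend(slope: Double, threshold: Double = 0.05): String = when {
    slope > threshold  -> "rising"
    slope < -threshold -> "decreasing"
    else               -> "stable"
}
```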

&lt;p&gt;In addition to refining detection, the new framework also introduces “proactive bandwidth probing,” further enhancing overall performance. It improves startup convergence as well as responsiveness to changing network conditions.&lt;/p&gt;




&lt;h2&gt;
  
  
  Packet Loss Issues
&lt;/h2&gt;

&lt;p&gt;As noted, real-time interactive media typically runs over RTP/UDP, leaving packet-loss handling to the application layer.&lt;/p&gt;

&lt;p&gt;From a network-transport standpoint, the primary techniques to handle packet loss are Automatic Repeat Request (ARQ) and Forward Error Correction (FEC). On the encoding side, depending on content or codec design, certain robustness features can offset loss—for instance, using B-frames in video encoding to reduce the impact of missing frames. Below, we focus on network-transport measures.&lt;/p&gt;

&lt;h3&gt;
  
  
  1) ARQ (Automatic Repeat Request)
&lt;/h3&gt;

&lt;p&gt;In RTP/RTCP, the concept is straightforward: the receiver infers missing packets by gaps in sequence numbers and sends RTCP “NACK” requests for the sender to retransmit the missing data. The overall flow appears as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9ol1u7omwhwz5yrhfsp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9ol1u7omwhwz5yrhfsp.png" alt="Image description" width="752" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When it detects a gap (say, seq# 101 and 102 are missing), the receiver sends RTCP NACK(req 101, 102). The sender retransmits those packets, typically on a separate SSRC if possible.&lt;/p&gt;

&lt;p&gt;Key details include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Initial request delay:&lt;/strong&gt; Decide whether to request a missing packet immediately or wait—possibly factoring in FEC usage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interval between repeated requests:&lt;/strong&gt; For the same packet, do not resend the request before a full RTT elapses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limit on total requests:&lt;/strong&gt; Calculate the maximum allowable requests based on RTT and permissible overall delay.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ARQ bandwidth limit:&lt;/strong&gt; Automatic Repeat Request consumes part of total bandwidth and cannot exceed available capacity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Means of returning retransmitted packets:&lt;/strong&gt; Sending them via a dedicated RTP SSRC is recommended, facilitating separate tracking of &lt;a href="https://trtc.io/blog/details/what-is-packet-loss" rel="noopener noreferrer"&gt;&lt;strong&gt;packet loss&lt;/strong&gt;&lt;/a&gt; and ARQ bandwidth.&lt;/li&gt;
&lt;/ul&gt;
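&lt;p&gt;The pacing rules above can be sketched as follows (an illustrative helper, not WebRTC’s actual code; real stacks also factor in FEC usage and the ARQ bandwidth cap):&lt;/p&gt;

```kotlin
// Re-request a missing sequence number only after a full RTT has elapsed,
// and give up once the retry budget implied by the latency budget is spent.
class NackTracker(private val rttMs: Long, private val maxDelayMs: Long) {
    private val lastSentMs = mutableMapOf<Int, Long>()
    private val attempts = mutableMapOf<Int, Int>()
    private val maxAttempts = (maxDelayMs / rttMs).toInt().coerceAtLeast(1)

    // Returns true if a NACK for seq should be sent at time nowMs.
    fun shouldNack(seq: Int, nowMs: Long): Boolean {
        val tries = attempts.getOrDefault(seq, 0)
        if (tries >= maxAttempts) return false                  // budget exhausted
        val last = lastSentMs[seq]
        if (last != null && nowMs - last < rttMs) return false  // wait a full RTT
        lastSentMs[seq] = nowMs
        attempts[seq] = tries + 1
        return true
    }
}
```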

&lt;h3&gt;
  
  
  2) FEC (Forward Error Correction)
&lt;/h3&gt;

&lt;p&gt;In real-time audio-video, FEC is widely used due to its immediacy in recovering from lost packets.&lt;/p&gt;

&lt;p&gt;The fundamental idea is that the sender appends extra repair packets containing redundant data computed over source packets. The receiver then uses these redundancy packets to recover any missing source packets, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxuehuegumg0h9273oy7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxuehuegumg0h9273oy7.png" alt="Image description" width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Algorithms for generating repair data range from simple XOR-based methods to matrix-based or other coding techniques, which we will not detail here.&lt;/p&gt;
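&lt;p&gt;For intuition, the simplest XOR scheme works like this: one parity packet protects a group of equal-length source packets, and any single lost packet can be rebuilt from the survivors. A sketch, assuming all packets in a group have the same size:&lt;/p&gt;

```kotlin
// XOR all packets in a group together to produce one parity (repair) packet.
fun xorParity(packets: List<ByteArray>): ByteArray {
    val parity = ByteArray(packets[0].size)
    for (p in packets)
        for (i in parity.indices)
            parity[i] = (parity[i].toInt() xor p[i].toInt()).toByte()
    return parity
}

// Recover the single missing source packet: XOR-ing the received source
// packets with the parity packet cancels everything except the lost one.
fun recoverMissing(received: List<ByteArray>, parity: ByteArray): ByteArray =
    xorParity(received + listOf(parity))
```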

&lt;p&gt;Below is the basic transmitting-side FEC framework:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqg6azsp023fy96x4gewz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqg6azsp023fy96x4gewz.png" alt="Transmitter-Side FEC" width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;“ADU” stands for Application Data Unit, i.e. an audio or video packet. The FEC encoder produces repair (redundancy) packets at a certain protection ratio. These repair packets, along with the source packets and metadata describing their protection relationship, are sent to the receiver.&lt;/p&gt;

&lt;p&gt;Below is the receiving-side FEC framework:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq0fh3d4bvwi601q1rfe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq0fh3d4bvwi601q1rfe.png" alt="Receiver-Side FEC" width="800" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The receiver passes both source and repair packets to the FEC decoder. If any source packet is missing, the decoder reconstructs it using the available source and repair packets, then delivers the recovered audio-video data to the upper layer for decoding and playback.&lt;/p&gt;

&lt;p&gt;In short, the “protection relationship” indicates which source packets each repair packet helps protect, typically signaled in a specialized format.&lt;/p&gt;

&lt;p&gt;Under the RTP/RTCP framework, two standards define such formats:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ULPFEC (RFC5109): “Uneven Level Protection,” which allows using different protection levels for more or less important packets.&lt;/li&gt;
&lt;li&gt;FlexFEC (RFC8627): “Flexible Forward Error Correction,” which supports both interleaved and non-interleaved parity-based FEC encoding for RTP packets.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3) Combining ARQ and FEC
&lt;/h3&gt;

&lt;p&gt;Compared with FEC, ARQ’s shortcoming is added latency, but its advantage is higher bandwidth efficiency. Generally, the goal is to meet latency requirements while minimizing the extra bandwidth and processing overhead needed for adequate protection. Hence, when combining ARQ and FEC, consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use ARQ under moderate latency constraints. Based on RTT and the max permissible latency, calculate the maximum number of retransmissions.&lt;/li&gt;
&lt;li&gt;If retransmissions alone can drive residual packet loss below ~1%, FEC might be unnecessary.&lt;/li&gt;
&lt;li&gt;If you do need FEC, the FEC protection ratio should be based on the residual loss probability after retransmissions, providing the final safety net.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is an example showing how ARQ and FEC might work together at different RTT values. For RTT &amp;lt; 20ms and max delay &amp;lt; 100ms, on a link experiencing 30% loss, ARQ alone can reduce loss below 1%. Thus, ARQ suffices here. As RTT grows, FEC coverage grows accordingly. Ultimately, under high RTT, FEC alone handles lost packets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hgx7hq9uukakr517wee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hgx7hq9uukakr517wee.png" alt="Image description" width="800" height="502"&gt;&lt;/a&gt;&lt;/p&gt;
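&lt;p&gt;The arithmetic behind this trade-off is simple (a sketch, assuming independent losses and roughly one RTT per retransmission round):&lt;/p&gt;

```kotlin
import kotlin.math.pow

// Each retransmission round costs roughly one RTT, so the latency budget
// caps the number of rounds.
fun maxRetransmissions(rttMs: Int, maxDelayMs: Int): Int = maxDelayMs / rttMs

// With independent per-transmission loss probability p, a packet is still
// missing after k retransmissions with probability p^(k+1).
fun residualLoss(loss: Double, retransmissions: Int): Double =
    loss.pow(retransmissions + 1)
```

With RTT under 20 ms and a 100 ms budget, five retransmission rounds fit, and 30% loss shrinks to 0.3^6 ≈ 0.07%, consistent with the figure.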




&lt;h2&gt;
  
  
  Jitter Issues
&lt;/h2&gt;

&lt;p&gt;In short, &lt;a href="https://trtc.io/blog/details/Jitter-and-Jitter-Buffer" rel="noopener noreferrer"&gt;&lt;strong&gt;jitter&lt;/strong&gt;&lt;/a&gt; means variation in network delay over time. Greater jitter indicates more fluctuation in transmission delay.&lt;/p&gt;

&lt;p&gt;Jitter causes stutter, skip-ahead playback, and other issues that severely impact the quality of audio-video communication. It arises from multiple sources—new flows competing for the same bandwidth, unstable send rates, and general network volatility.&lt;/p&gt;

&lt;p&gt;A common approach is to compensate at the receiver with a Jitter Buffer, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdugqhu7cfsfldrds8cx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdugqhu7cfsfldrds8cx.png" alt="Image description" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By introducing a “Jitter Delay,” the buffer homogenizes incoming data to enable smooth playback.&lt;/p&gt;

&lt;p&gt;Estimating the optimal buffer delay is crucial. If it’s too large, it adds extra delay; if it’s too small, it won’t fully absorb the jitter. In WebRTC, Google employs different approaches for audio and video:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Audio Jitter: The NetEQ module uses a histogram-based method with a forgetting factor, taking the 95th percentile of inter-arrival times (IAT) as the buffer delay. This allows it to track variations over time while discarding outdated data.&lt;/li&gt;
&lt;li&gt;Video Jitter: WebRTC uses a separate approach. It measures variations in frame size and latency, applying a Kalman filter for dynamic delay estimation.&lt;/li&gt;
&lt;/ul&gt;
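&lt;p&gt;A stripped-down version of the audio percentile idea (without NetEQ’s forgetting factor; names are illustrative) looks like this:&lt;/p&gt;

```kotlin
// Take the 95th percentile of recent packet inter-arrival times as the
// jitter-buffer delay: 95% of packets then arrive before their deadline.
fun jitterDelayMs(interArrivalMs: List<Long>, percentile: Double = 0.95): Long {
    require(interArrivalMs.isNotEmpty())
    val sorted = interArrivalMs.sorted()
    val idx = ((sorted.size - 1) * percentile).toInt()
    return sorted[idx]
}
```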

&lt;p&gt;However, WebRTC is primarily designed for one-on-one calls. In multi-party conferencing scenarios with media forwarding nodes in between, the end-to-end latency variation can be different, so further optimizations may be needed.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Tencent RTC’s Weak Network Countermeasure Optimizations in Practice&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://trtc.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;TRTC (Tencent RTC)&lt;/strong&gt;&lt;/a&gt; excels in weak network environments. Its core technical optimizations and real-world test data demonstrate high performance even under complex network conditions: global end-to-end latency stays under 300ms, packet loss tolerance exceeds 80%, and it manages jitter above 1000ms. Details of its weak network resilience are as follows:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Transport-Layer Optimization&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;By using a protocol stack based on &lt;a href="https://trtc.io/document/59667?platform=web&amp;amp;product=rtcengine&amp;amp;menulabel=sdk" rel="noopener noreferrer"&gt;&lt;strong&gt;UDP&lt;/strong&gt;&lt;/a&gt; + RTP/RTCP, it avoids the queue blocking and congestion-control latency seen in TCP under poor network conditions.&lt;/p&gt;

&lt;p&gt;Adopting a NACK retransmission mechanism: A NACK request is only sent upon detecting packet-sequence discontinuities, enabling precise retransmission of missing packets and reducing the round-trip latency overhead from frequent acknowledgments.&lt;/p&gt;

&lt;p&gt;Integrating FEC (Forward Error Correction): A small number of redundant packets are added at the sender, allowing the receiver to recover missing packets from the redundancy and drastically lowering latency caused by waiting for retransmissions.&lt;/p&gt;

&lt;p&gt;TRTC can flexibly switch between or combine ARQ (retransmissions) and &lt;a href="https://trtc.io/blog/details/what-is-packet-loss" rel="noopener noreferrer"&gt;&lt;strong&gt;FEC&lt;/strong&gt;&lt;/a&gt;, adjusting the number of retransmissions and FEC overhead in real time based on network conditions.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Adaptive Bandwidth Control (QoS)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://trtc.io/blog/details/qos-vs-qoe" rel="noopener noreferrer"&gt;&lt;strong&gt;QoS&lt;/strong&gt;&lt;/a&gt; dynamically measures packet loss rate, RTT, and jitter to predict available bandwidth in real time.&lt;/p&gt;

&lt;p&gt;Based on these predictions, it adjusts encoder bitrate, FEC redundancy ratios, and retransmission rates, responding quickly to changes in network status and avoiding both congestion from overly aggressive transmission and wasted overhead from unnecessary redundancy.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Improvements at the Codec Layer&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;TRTC modifies the video-frame reference structure so it is no longer fully dependent on the “previous frame,” reducing the impact a single lost frame can have on the subsequent decoding process, and thereby bolstering packet loss resilience.&lt;/p&gt;

&lt;p&gt;For audio, TRTC employs PLC (Packet Loss Concealment). When a loss is detected, it estimates and interpolates the missing samples from preceding and following audio frames, preserving natural sound quality even with high packet loss.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Jitter Buffer and Synchronization Strategies&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;By setting an appropriate &lt;a href="https://trtc.io/blog/details/what-is-packet-loss" rel="noopener noreferrer"&gt;&lt;strong&gt;Jitter Buffer&lt;/strong&gt;&lt;/a&gt; at the receiver, TRTC dynamically adjusts buffer size based on arrival times and playback schedules, balancing minimal latency with jitter compensation.&lt;/p&gt;

&lt;p&gt;It also provides an audio-video synchronization mechanism: detecting discrepancies and jitter between audio and video arrivals, then using variable-speed playback to align audio with video, mitigating the negative impact of desynchronization on the viewing experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5. Echo Cancellation and “Double-Talk” Optimization&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Relying on Tencent’s in-house TRAE (Tencent Real-time Audio Engine), TRTC applies &lt;a href="https://trtc.io/document/47635" rel="noopener noreferrer"&gt;&lt;strong&gt;acoustic echo cancellation&lt;/strong&gt;&lt;/a&gt; (AEC) between the speaker and microphone to prevent acoustic echo loops.&lt;/p&gt;

&lt;p&gt;It further optimizes “double-talk” scenarios (where both parties speak simultaneously), eliminating issues like feedback squeals and improving clarity so that both ends can hear each other distinctly.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;6. Cloud-Side Decision-Making and Big Data Analysis&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;TRTC deploys a &lt;a href="https://trtc.io/blog/details/qos-vs-qoe" rel="noopener noreferrer"&gt;&lt;strong&gt;QoE&lt;/strong&gt;&lt;/a&gt; (Quality of Experience) assessment framework, gathering packet loss, latency, MOS scores, and other metrics in real time to accurately gauge users’ subjective experiences.&lt;/p&gt;

&lt;p&gt;The cloud decision system dynamically fine-tunes parameters for different users and networks, ensuring every participant receives the optimal transmission strategy and the best possible audio-video quality.&lt;/p&gt;

&lt;p&gt;By orchestrating these multiple techniques, TRTC consistently delivers low-latency, high-quality performance in harsh network environments characterized by high packet loss and severe jitter, making it ideal for online education, remote work, interactive live streaming, and similar scenarios.&lt;/p&gt;

</description>
      <category>network</category>
      <category>transmission</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How to Add Beauty Effects to Your Android Live-streaming App</title>
      <dc:creator>LunarDrift</dc:creator>
      <pubDate>Sun, 18 May 2025 02:06:01 +0000</pubDate>
      <link>https://dev.to/susiewang/how-to-add-beauty-effects-to-your-android-live-streaming-app-3d85</link>
      <guid>https://dev.to/susiewang/how-to-add-beauty-effects-to-your-android-live-streaming-app-3d85</guid>
      <description>&lt;p&gt;Previously, we explored how to build a live streaming app and how to implement beauty features. In this tutorial, we’ll show you how to add beauty effects to your Android live-streaming application. First, we’ll walk you through setting up a fully functional live streaming app, then we’ll integrate the &lt;a href="https://trtc.io/document/60216?platform=android&amp;amp;product=beautyar" rel="noopener noreferrer"&gt;Tencent Effects Beauty&lt;/a&gt; SDK. The Tencent RTC SDK provides a comprehensive, built-in UI, enabling you to quickly achieve sharp, real-time streaming with minimal effort.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ifv2dpt9jaxyukgdb6s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ifv2dpt9jaxyukgdb6s.png" alt="Image description" width="800" height="845"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 1. Activate the service&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before using the Audio and Video Services provided by Tencent Cloud, you need to go to the Console and activate the service for your application. For detailed steps, refer to &lt;a href="https://trtc.io/document/60033#" rel="noopener noreferrer"&gt;&lt;strong&gt;Activate the service&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 2: Download the TUILiveKit component&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Clone or download the code from &lt;a href="https://github.com/tencentyun/TUILiveRoom" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, then copy the TUILiveKit subdirectory from the Android directory into your current project, at the same level as the app directory, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu428cs0suhow2khp0ovj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu428cs0suhow2khp0ovj.png" alt="Image description" width="225" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: Configure the project&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the root directory of the project, add the jitpack repository URL to the &lt;code&gt;settings.gradle.kts (or settings.gradle)&lt;/code&gt; file. It is needed to download SVGAPlayer, the SVG animation library used for playing gifts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;dependencyResolutionManagement&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;repositoriesMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;RepositoriesMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;FAIL_ON_PROJECT_REPOS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;repositories&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;google&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;mavenCentral&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="c1"&gt;// Add the jitpack repository URL&lt;/span&gt;
        &lt;span class="nf"&gt;maven&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"https://jitpack.io"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the root directory of the project, add the following code to the &lt;code&gt;settings.gradle.kts (or settings.gradle)&lt;/code&gt; file. It will import the tuilivekit component downloaded in &lt;a href="https://trtc.io/document/60037#step2" rel="noopener noreferrer"&gt;Step 2&lt;/a&gt; into your current project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;include&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;":tuilivekit:livekit"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;include&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;":tuilivekit:component:barrage"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the app directory, find the &lt;code&gt;build.gradle.kts (or build.gradle)&lt;/code&gt; file and add the following code. It declares the dependency of the current app on the newly added tuilivekit component:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;api&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;project&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;":tuilivekit:livekit"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As the SDK uses Java's reflection feature internally, you need to add certain classes in the SDK to the obfuscation allowlist by adding the following code to the &lt;code&gt;proguard-rules.pro&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="p"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="err"&gt;com.tencent.** { *; }
-keep class com.trtc.uikit.livekit.livestreamcore.** { *; }
-keep class com.trtc.uikit.livekit.component.gift.store.model.** { *; }
-keep class com.squareup.wire.** { *; }
-keep class com.opensource.svgaplayer.proto.** { *; }

-keep class com.tcmediax.** { *; }
-keep class com.tencent.** { *; }
-keep class com.tencent.xmagic.** { *; }
-keep class androidx.exifinterface.** {*;}
-keep class com.google.gson.** { *;}
# &lt;/span&gt;&lt;span class="nc"&gt;Tencent&lt;/span&gt; &lt;span class="nc"&gt;Effect&lt;/span&gt; &lt;span class="nc"&gt;SDK&lt;/span&gt; &lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="n"&gt;beauty&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="err"&gt;com.tencent.xmagic.** { *;}
-keep class org.light.** { *;}
-keep class org.libpag.** { *;}
-keep class org.extra.** { *;}
-keep class com.gyailib.**{ *;}
-keep class com.tencent.cloud.iai.lib.** { *;}
-keep class com.tencent.beacon.** { *;}
-keep class com.tencent.qimei.** { *;}
-keep class androidx.exifinterface.** { *;}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the app directory, find the &lt;code&gt;AndroidManifest.xml&lt;/code&gt; file, and add &lt;code&gt;tools:replace="android:allowBackup"&lt;/code&gt; and &lt;code&gt;android:allowBackup="false"&lt;/code&gt; to the application node to override the component's settings with your own.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;  // app/src/main/AndroidManifest.xml

  &lt;span class="nt"&gt;&amp;lt;application&lt;/span&gt;
    &lt;span class="err"&gt;...&lt;/span&gt;

    &lt;span class="err"&gt;//&lt;/span&gt; &lt;span class="err"&gt;Add&lt;/span&gt; &lt;span class="err"&gt;the&lt;/span&gt; &lt;span class="err"&gt;following&lt;/span&gt; &lt;span class="err"&gt;configuration&lt;/span&gt; &lt;span class="err"&gt;to&lt;/span&gt; &lt;span class="err"&gt;override&lt;/span&gt; &lt;span class="err"&gt;the&lt;/span&gt; &lt;span class="err"&gt;settings&lt;/span&gt; &lt;span class="err"&gt;in&lt;/span&gt; &lt;span class="err"&gt;the&lt;/span&gt; &lt;span class="err"&gt;dependent&lt;/span&gt; &lt;span class="err"&gt;SDK&lt;/span&gt;
    &lt;span class="na"&gt;android:allowBackup=&lt;/span&gt;&lt;span class="s"&gt;"false"&lt;/span&gt;
    &lt;span class="na"&gt;tools:replace=&lt;/span&gt;&lt;span class="s"&gt;"android:allowBackup"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Step 4. Log in&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Add the following code to your project. It calls the relevant interfaces in TUICore to complete the TUI component login. This step is crucial: you can only use the features provided by TUILiveKit after successfully logging in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nc"&gt;TUILogin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;login&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;applicationContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="mi"&gt;1400000001&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;     &lt;span class="c1"&gt;// Please replace with the SDKAppID obtained in step one&lt;/span&gt;
    &lt;span class="s"&gt;"denny"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;        &lt;span class="c1"&gt;// Please replace with your UserID&lt;/span&gt;
    &lt;span class="s"&gt;"xxxxxxxxxxx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// You can calculate a UserSig in the console and fill it in here&lt;/span&gt;
    &lt;span class="kd"&gt;object&lt;/span&gt; &lt;span class="err"&gt;: &lt;/span&gt;&lt;span class="nc"&gt;TUICallback&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;onSuccess&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;Log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;i&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TAG&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"login success"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;onError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;errorCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;errorMessage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;Log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;e&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TAG&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"login failed, errorCode: $errorCode msg:$errorMessage"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the SecretKey obtained in &lt;a href="https://trtc.io/document/60037#step-1.-activate-the-service" rel="noopener noreferrer"&gt;Step 1&lt;/a&gt; to sign information such as the SDKAppID and UserID, producing a UserSig: the authentication token Tencent Cloud uses to verify whether the current user can access TRTC services. You can generate a temporary UserSig with the &lt;a href="https://trtc.io/login?s_url=https://console.trtc.io/usersig" rel="noopener noreferrer"&gt;Auxiliary Tools&lt;/a&gt; in the console. For more information, see &lt;a href="https://trtc.io/document/35166?lang=en&amp;amp;pg=" rel="noopener noreferrer"&gt;UserSig&lt;/a&gt;.&lt;/p&gt;
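&lt;p&gt;Conceptually, a UserSig is a server-side signature binding your SDKAppID, UserID, and an expiry time to your SecretKey. The sketch below illustrates the idea with HMAC-SHA256; it is NOT Tencent's actual UserSig format, which adds zlib compression and a specific field layout, so always generate real signatures with the official library or the console tool.&lt;/p&gt;

```kotlin
import java.util.Base64
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Schematic only: illustrates keyed signing, not Tencent's real UserSig encoding.
fun schematicSig(sdkAppId: Long, userId: String, expireSec: Long, secretKey: String): String {
    val payload = "$sdkAppId:$userId:$expireSec"
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(secretKey.toByteArray(), "HmacSHA256"))
    return Base64.getEncoder().encodeToString(mac.doFinal(payload.toByteArray()))
}
```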

&lt;h2&gt;
  
  
  Step 5: Integrating tebeautykit
&lt;/h2&gt;

&lt;p&gt;Now let's use Tencent Effect for advanced beauty effects.&lt;/p&gt;

&lt;p&gt;Copy the &lt;code&gt;Android/tebeautykit&lt;/code&gt; folder to your project, at the same level as the app folder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej0kexmzykgvptf06v17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej0kexmzykgvptf06v17.png" alt="Image description" width="450" height="680"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please edit your project's &lt;code&gt;settings.gradle&lt;/code&gt; file and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;  &lt;span class="n"&gt;include&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;tebeautykit&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Step 6: Authorization &amp;amp; Setting Beauty Resources&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Apply for authorization and obtain &lt;code&gt;LicenseUrl&lt;/code&gt; and &lt;code&gt;LicenseKey&lt;/code&gt;. Please refer to the &lt;a href="https://trtc.io/document/60219?platform=flutter&amp;amp;product=beautyar" rel="noopener noreferrer"&gt;License Guide&lt;/a&gt; for more information.&lt;/p&gt;

&lt;p&gt;In the initialization section of your business logic (typically in the same place as the &lt;a href="https://trtc.io/document/60037#step4" rel="noopener noreferrer"&gt;login&lt;/a&gt; process), add the following authorization code, replacing the placeholders with the &lt;code&gt;Beauty Package ID&lt;/code&gt;, &lt;code&gt;LicenseUrl&lt;/code&gt;, and &lt;code&gt;LicenseKey&lt;/code&gt; you obtained:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nc"&gt;TEBeautySettings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getInstance&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;initBeautySettings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                                       &lt;span class="nc"&gt;S1_07&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;         &lt;span class="c1"&gt;// Replace S1_07 with the beauty package number you purchased.&lt;/span&gt;
                                       &lt;span class="s"&gt;"LicenseUrl"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// Replace with your LicenseUrl&lt;/span&gt;
                                       &lt;span class="s"&gt;"LicenseKey"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Replace with youLicenseKey&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l18peb4l3gip0pqkyev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l18peb4l3gip0pqkyev.png" alt="Image description" width="800" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By completing the aforementioned steps, you will have successfully integrated advanced beauty effects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 7. Start live streaming
&lt;/h2&gt;

&lt;p&gt;After the login call above returns successfully, call the &lt;a href="https://trtc.io/document/66146#" rel="noopener noreferrer"&gt;VideoLiveKit&lt;/a&gt;'s &lt;a href="https://trtc.io/document/66146#c6f79ba4-2dc1-4d65-82b3-e52d3ef28f2f" rel="noopener noreferrer"&gt;startLive&lt;/a&gt; method with a room ID to open the live streaming page.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.trtc.uikit.livekit.VideoLiveKit&lt;/span&gt;
&lt;span class="nc"&gt;VideoLiveKit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createInstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;applicationContext&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;startLive&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"roomId"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Step 8. Watch the live stream&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After the login call above returns successfully, call the &lt;a href="https://trtc.io/document/66146#" rel="noopener noreferrer"&gt;VideoLiveKit&lt;/a&gt;'s &lt;a href="https://trtc.io/document/66146#34d62c35-ff96-4250-be5a-a518ed341a5d" rel="noopener noreferrer"&gt;joinLive&lt;/a&gt; method with the room ID to watch the live stream.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.trtc.uikit.livekit.VideoLiveKit&lt;/span&gt;
&lt;span class="nc"&gt;VideoLiveKit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createInstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;applicationContext&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;joinLive&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"roomId"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My name is Susie. I am a writer and Media Service Product Manager. I work with startups across the globe to build real-time communication solutions using SDKs and APIs.&lt;/p&gt;

&lt;p&gt;If you want to build face filters or other special effects into your app, feel free to &lt;a href="https://sc-rp.tencentcloud.com:8106/t/RA" rel="noopener noreferrer"&gt;&lt;strong&gt;contact us&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Easiest Steps To Build A Face Filter App With Flutter</title>
      <dc:creator>LunarDrift</dc:creator>
      <pubDate>Sat, 17 May 2025 06:39:46 +0000</pubDate>
      <link>https://dev.to/susiewang/easiest-steps-to-build-a-face-filter-app-with-flutter-40k6</link>
      <guid>https://dev.to/susiewang/easiest-steps-to-build-a-face-filter-app-with-flutter-40k6</guid>
      <description>&lt;p&gt;Hello everyone, I’m excited to share this article with you all.&lt;/p&gt;

&lt;p&gt;By the end of this article, you’ll be able to build a face filter app from scratch using Flutter and the TRTC Beauty AR SDK. You’ll get hands-on experience with AR stickers, face filters, virtual makeup, and face/gesture recognition — and you can customize all the special effects to your liking.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjl0edsj58defemf3u8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjl0edsj58defemf3u8f.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Activating the Service
&lt;/h2&gt;

&lt;p&gt;Log in to &lt;a href="https://console.trtc.io/aboutcloud" rel="noopener noreferrer"&gt;&lt;strong&gt;TRTC Console &amp;gt; Relevant Services&lt;/strong&gt;&lt;/a&gt;, and click &lt;strong&gt;Get Started.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fet4wihbave31elhv2tje.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fet4wihbave31elhv2tje.png" alt="Image description" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;Mobile&lt;/strong&gt;, and based on your actual needs, fill in the App Name, Package Name, and Bundle ID. Check the features you want to trial: &lt;strong&gt;All Beauty Features, Virtual Background, Face Recognition, Gesture Recognition&lt;/strong&gt;, then click &lt;strong&gt;Confirm&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F703lgyniig60ld2rq6y0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F703lgyniig60ld2rq6y0.png" alt="Image description" width="700" height="744"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After activation, you can view your information on the current page and refer to the &lt;strong&gt;Integration Guide&lt;/strong&gt; below for integration. The License Url and License Key will be used in the &lt;a href="https://trtc.io/document/60195" rel="noopener noreferrer"&gt;&lt;strong&gt;integration guide&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckkf4ui295rv7s0xbdbo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckkf4ui295rv7s0xbdbo.png" alt="Image description" width="800" height="812"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Beauty Flutter SDK depends on the native Android/iOS Beauty SDK. Flutter plugins expose native features to the Flutter layer, so when integrating the beauty filter features you need to integrate the native SDK manually.&lt;/p&gt;
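&lt;p&gt;The native-to-Flutter bridging can be pictured as a name-to-handler dispatch on the Android side. The sketch below is a simplified stand-in for Flutter's real &lt;code&gt;MethodChannel&lt;/code&gt; (which lives in &lt;code&gt;io.flutter.plugin.common&lt;/code&gt;); the method names &lt;code&gt;setLicense&lt;/code&gt; and &lt;code&gt;setEffect&lt;/code&gt; are hypothetical and only illustrate the kind of calls a beauty plugin would route from Dart to native code.&lt;/p&gt;

```kotlin
// Simplified stand-in for Flutter's MethodChannel dispatch on the Android side.
// In a real plugin, MethodChannel.setMethodCallHandler routes calls from Dart
// to the native SDK; here a plain map plays that role.
class FakeMethodChannel {
    private val handlers = mutableMapOf<String, (Map<String, Any?>) -> Any?>()

    fun register(method: String, handler: (Map<String, Any?>) -> Any?) {
        handlers[method] = handler
    }

    fun invoke(method: String, args: Map<String, Any?>): Any? =
        handlers[method]?.invoke(args) ?: error("MissingPluginException: $method")
}

// Hypothetical handler registrations, for illustration only.
fun buildBeautyChannel(): FakeMethodChannel {
    val channel = FakeMethodChannel()
    channel.register("setLicense") { args -> "licensed:${args["licenseKey"]}" }
    channel.register("setEffect") { args -> "effect:${args["effectName"]}=${args["effectValue"]}" }
    return channel
}
```

&lt;p&gt;This is why the native SDK must be integrated manually: the Dart side only sends named calls across the channel, and something on the native side has to be present to handle them.&lt;/p&gt;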

&lt;h2&gt;
  
  
  Step 2: Running Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/Tencent-RTC/TencentEffect_Flutter" rel="noopener noreferrer"&gt;&lt;strong&gt;Download the demo project&lt;/strong&gt;&lt;/a&gt;, modify the demo/lib under the main.dart file, add your licenseUrl and licenseKey in this file. The sample code for using beauty features in TRTC is primarily located in demo/lib/page/trtc_page.dart and demo/lib/main.dart.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Android&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In &lt;code&gt;demo/app&lt;/code&gt;, open the &lt;code&gt;build.gradle&lt;/code&gt; file and change the value of &lt;code&gt;applicationId&lt;/code&gt; to the package name you used when applying for the license.&lt;/li&gt;
&lt;li&gt;In the &lt;code&gt;demo&lt;/code&gt; directory, execute &lt;code&gt;flutter pub get&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Use Android Studio to open the demo project and run it.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 3: SDK Integration
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;In the app module, find the &lt;code&gt;build.gradle&lt;/code&gt; file and add the dependency for your corresponding package. For example, if you chose the S1-07 package, add the following:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="n"&gt;dependencies&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
   &lt;span class="n"&gt;implementation&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;com&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;tencent&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;mediacloud&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="n"&gt;TencentEffect_S1&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mo"&gt;07&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="n"&gt;latest&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;release&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For the Maven URL corresponding to each package, please refer to the documentation.&lt;/strong&gt; The latest version number of the SDK can be found in the &lt;a href="https://trtc.io/document/60203#" rel="noopener noreferrer"&gt;Release History&lt;/a&gt;.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;In your project, find the &lt;code&gt;android/app&lt;/code&gt; module and locate the &lt;code&gt;src/main/assets&lt;/code&gt; folder. Copy the &lt;code&gt;demo/android/app/src/main/assets/lut&lt;/code&gt; and &lt;code&gt;MotionRes&lt;/code&gt; folders from the demo project to your own project's &lt;code&gt;android/app/src/main/assets&lt;/code&gt;. If your project doesn't have an assets folder, create one manually.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If you are using an Android version of the Tencent Effect SDK lower than 3.9&lt;/strong&gt;, locate the AndroidManifest.xml file under the app module and add the following tag in the application section:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;   &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;uses&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="kd"&gt;native&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;library&lt;/span&gt;
            &lt;span class="nl"&gt;android:&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"libOpenCL.so"&lt;/span&gt;
            &lt;span class="nl"&gt;android:&lt;/span&gt;&lt;span class="n"&gt;required&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"false"&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="c1"&gt;//true indicates that libOpenCL is essential for the current app. Without this library, the system will not allow the app to install&lt;/span&gt;
        &lt;span class="c1"&gt;//false indicates that libOpenCL is not essential for the current app. The app can be installed normally with or without this library. If the device has this library, GAN-type special effects in the Tencent Special Effects SDK (e.g., fairy tale face, comics face) will function normally. If the device does not have this library, GAN-type effects won't work, but it will not affect the use of other features within the SDK.&lt;/span&gt;
        &lt;span class="c1"&gt;//For information about uses-native-library, please refer to the Android official website: https://developer.android.com/guide/topics/manifest/uses-native-library-element&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Obfuscation configuration&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you enable compile optimization when packaging a release build (setting &lt;code&gt;minifyEnabled&lt;/code&gt; to true), the shrinker trims code that is never called from the Java layer. Some of that code may still be invoked by the native layer, which causes a "no xxx method" exception at runtime.&lt;/p&gt;
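&lt;p&gt;To see why trimming breaks such code, consider a method that has no direct Kotlin/Java call sites and is only looked up by name at runtime. This is a simplified analogy using reflection (the SDK's native layer does the equivalent through JNI); the class and method names here are hypothetical.&lt;/p&gt;

```kotlin
// A method with no direct call sites from Kotlin/Java code: shrinkers such as
// R8 may remove it unless a -keep rule protects the class, and the runtime
// lookup below would then fail with NoSuchMethodException.
class NativeCallbacks {
    fun onFrameProcessed(frameId: Int): String = "processed:$frameId"
}

// Runtime lookup by name, analogous to how a native layer invokes Java code.
fun invokeByName(target: Any, method: String, arg: Int): String {
    // If the shrinker removed the method, this line throws NoSuchMethodException.
    val m = target.javaClass.getMethod(method, Int::class.java)
    return m.invoke(target, arg) as String
}
```

&lt;p&gt;Because the shrinker cannot see the reflective (or JNI) caller, the keep rules below are the only thing telling it these classes must survive.&lt;/p&gt;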

&lt;p&gt;If you enable such compile optimization, add the following keep rules to prevent xmagic's code from being trimmed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;com&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;tencent&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;xmagic&lt;/span&gt;&lt;span class="o"&gt;.**&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="o"&gt;*;}&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;org&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;light&lt;/span&gt;&lt;span class="o"&gt;.**&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="o"&gt;*;}&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;org&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;libpag&lt;/span&gt;&lt;span class="o"&gt;.**&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="o"&gt;*;}&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;org&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;extra&lt;/span&gt;&lt;span class="o"&gt;.**&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="o"&gt;*;}&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;com&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;gyailib&lt;/span&gt;&lt;span class="o"&gt;.**{&lt;/span&gt; &lt;span class="o"&gt;*;}&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;com&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;tencent&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;cloud&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;iai&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;lib&lt;/span&gt;&lt;span class="o"&gt;.**&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="o"&gt;*;}&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;com&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;tencent&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;beacon&lt;/span&gt;&lt;span class="o"&gt;.**&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="o"&gt;*;}&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;com&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;tencent&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;qimei&lt;/span&gt;&lt;span class="o"&gt;.**&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="o"&gt;*;}&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;androidx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;exifinterface&lt;/span&gt;&lt;span class="o"&gt;.**&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="o"&gt;*;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Flutter Integration
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Method 1: Remote dependency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Add the following reference in your project's &lt;code&gt;pubspec.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  tencent_effect_flutter:
   git:
     url: https://github.com/Tencent-RTC/TencentEffect_Flutter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Method 2: Local dependency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Download the latest version of &lt;a href="https://github.com/Tencent-RTC/TencentEffect_Flutter" rel="noopener noreferrer"&gt;tencent_effect_flutter&lt;/a&gt;, copy the &lt;code&gt;android&lt;/code&gt;, &lt;code&gt;ios&lt;/code&gt;, and &lt;code&gt;lib&lt;/code&gt; folders and the &lt;code&gt;pubspec.yaml&lt;/code&gt; and &lt;code&gt;tencent_effect_flutter.iml&lt;/code&gt; files into your project directory, and then add the following reference in your project's &lt;code&gt;pubspec.yaml&lt;/code&gt; file (refer to the demo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nl"&gt;tencent_effect_flutter:&lt;/span&gt;
    &lt;span class="nl"&gt;path:&lt;/span&gt; &lt;span class="o"&gt;../&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flutter pub get
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: &lt;strong&gt;SDK usage&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Associate with TRTC
&lt;/h3&gt;

&lt;p&gt;Add the following code in the onCreate method of the Application class (or the onCreate method of your FlutterActivity):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;TRTCCloudPlugin&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;XmagicProcesserFactory&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Call the resource initialization API
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;_initSettings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;InitXmagicCallBack&lt;/span&gt; &lt;span class="n"&gt;callBack&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kd"&gt;async&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;_setResourcePath&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="c1"&gt;/// Resource copying only needs to be done once. In the current version, if copied successfully once, there is no need to copy the resources again.&lt;/span&gt;
  &lt;span class="c1"&gt;/// Copying the resource only needs to be done once. Once it has been successfully copied in the current version, there is no need to copy it again in future versions.&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;isCopiedRes&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;callBack&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;_copyRes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;callBack&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;_setResourcePath&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="kd"&gt;async&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="n"&gt;resourceDir&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;ResPathManager&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getResManager&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getResPath&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="n"&gt;TXLog&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;printlog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="si"&gt;$TAG&lt;/span&gt;&lt;span class="s"&gt; method is _initResource ,xmagic resource dir is &lt;/span&gt;&lt;span class="si"&gt;$resourceDir&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;setResourcePath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;resourceDir&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;_copyRes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;InitXmagicCallBack&lt;/span&gt; &lt;span class="n"&gt;callBack&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;_showDialog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;initXmagic&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;saveResCopied&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;_dismissDialog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;callBack&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;Fluttertoast&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;showToast&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;msg:&lt;/span&gt; &lt;span class="s"&gt;"initialization failed"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Perform beauty authorization
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;setLicense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;licenseKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;licenseUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;errorCode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="n"&gt;TXLog&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;printlog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Print the authentication result errorCode = &lt;/span&gt;&lt;span class="si"&gt;$errorCode&lt;/span&gt;&lt;span class="s"&gt;   msg = &lt;/span&gt;&lt;span class="si"&gt;$msg&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
         &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;errorCode&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// Authentication succeeded&lt;/span&gt;
         &lt;span class="p"&gt;}&lt;/span&gt;
       &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Enable Beauty Filter
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="c1"&gt;/// Enable beauty filter operation&lt;/span&gt;
 &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="n"&gt;enableCustomVideo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;trtcCloud&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;enableCustomVideoProcess&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;open&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  5. Set Beauty Attributes
&lt;/h3&gt;

&lt;p&gt;For specific beauty attributes, refer to &lt;a href="https://trtc.io/document/60207#" rel="noopener noreferrer"&gt;&lt;strong&gt;Beauty Attributes Table&lt;/strong&gt;&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;setEffect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sdkParam&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;effectName&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;sdkParam&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;effectValue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sdkParam&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;resourcePath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sdkParam&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;extraInfo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  6. Set other properties
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Pause Beauty Sound Effects&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;onPause&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Resume Beauty Sound Effects&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;onResume&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Monitor Beauty Events&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
     &lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;setOnCreateXmagicApiErrorListener&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;errorMsg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="n"&gt;TXLog&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;printlog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error creating an effect object errorMsg = &lt;/span&gt;&lt;span class="si"&gt;$errorMsg&lt;/span&gt;&lt;span class="s"&gt; , code = &lt;/span&gt;&lt;span class="si"&gt;$code&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="p"&gt;});&lt;/span&gt;   &lt;span class="c1"&gt;/// Needs to be set before creating the beauty filter&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Set Callback for Face, Gesture, and Body Detection Status&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;setAIDataListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;XmagicAIDataListenerImp&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Set Callback Function for Dynamic Prompt Messages&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;setTipsListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;XmagicTipsListenerImp&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Configure the Callback of Facial Keypoints and Other Data (only available in S1-05 and S1-06)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;setYTDataListener&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="n"&gt;TXLog&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;printlog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"setYTDataListener  &lt;/span&gt;&lt;span class="si"&gt;$data&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Remove All Callbacks&lt;/strong&gt;. You need to remove all callbacks when terminating the page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt; &lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;setOnCreateXmagicApiErrorListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;setAIDataListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;setYTDataListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="n"&gt;TencentEffectApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="na"&gt;setTipsListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more information on the APIs, see the &lt;a href="https://trtc.io/document/60200#" rel="noopener noreferrer"&gt;&lt;strong&gt;API Documentation&lt;/strong&gt;&lt;/a&gt;. For a complete integration example, refer to the &lt;a href="https://github.com/Tencent-RTC/TencentEffect_Flutter" rel="noopener noreferrer"&gt;&lt;strong&gt;Demo Project&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Add Data to and Remove Data from the Effect Panel
&lt;/h3&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Add Effect Resources&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Add your resource file to the corresponding folder as described in step 1. For example, to add a 2D animated effect resource:&lt;/p&gt;

&lt;p&gt;You should put the resource in &lt;code&gt;android/xmagic/src/main/assets/MotionRes/2dMotionRes&lt;/code&gt; of your project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcp050pelt164jhtqhr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcp050pelt164jhtqhr4.png" alt="Image description" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Beauty Panel Configuration
&lt;/h3&gt;

&lt;p&gt;The demo provides a simple beauty panel UI. The panel's properties are configured through JSON files, located as shown in the following diagram. For the panel's implementation, refer to the demo project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9s3ukcnad6qwinpchqg0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9s3ukcnad6qwinpchqg0.png" alt="Image description" width="690" height="798"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the abundant built-in materials, you can customize special effects, templates, trendy makeup effects, and interactive games. For detailed usage instructions, see the &lt;a href="https://trtc.io/document/60296?platform=flutter&amp;amp;product=beautyar" rel="noopener noreferrer"&gt;Beauty AR Studio Introduction&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Congratulations! You have built your first face filter app with Flutter and the Tencent RTC Beauty AR SDK.&lt;/p&gt;

&lt;p&gt;My name is Susie. I am a writer and Media Service Product Manager. I work with startups across the globe to build real-time communication solutions using SDKs and APIs.&lt;/p&gt;

&lt;p&gt;If you want to build face filters or other special effects into your app, feel free to &lt;a href="https://sc-rp.tencentcloud.com:8106/t/RA" rel="noopener noreferrer"&gt;&lt;strong&gt;contact us&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>flutter</category>
      <category>programming</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Implement AI real-time subtitles in online meetings</title>
      <dc:creator>LunarDrift</dc:creator>
      <pubDate>Mon, 12 May 2025 13:52:37 +0000</pubDate>
      <link>https://dev.to/susiewang/implement-ai-real-time-subtitles-in-online-meetings-4ac0</link>
      <guid>https://dev.to/susiewang/implement-ai-real-time-subtitles-in-online-meetings-4ac0</guid>
      <description>&lt;p&gt;This article will show you how to quickly install a basic conference demo, and further introduce the &lt;a href="https://trtc.io/document/68951?platform=web&amp;amp;product=conference" rel="noopener noreferrer"&gt;&lt;strong&gt;AI transcription&lt;/strong&gt;&lt;/a&gt; function so that the voice content in the meeting can be presented in real time in the form of subtitles. Whether you want to build a smarter business meeting tool or improve the accessibility of online classes and live roadshows, this guide will provide you with clear ideas and operational examples.&lt;/p&gt;

&lt;p&gt;First, quickly install a basic &lt;a href="https://trtc.io/document/60441?platform=web&amp;amp;product=conference" rel="noopener noreferrer"&gt;&lt;strong&gt;Conference demo&lt;/strong&gt;&lt;/a&gt;. Open a terminal and clone the repository:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git clone https://github.com/Tencent-RTC/TUIRoomKit.git&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install dependencies.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd ./TUIRoomKit/Web/example/vite-vue3-ts&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm install&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://trtc.io/document/59832?platform=web&amp;amp;product=call&amp;amp;menulabel=web" rel="noopener noreferrer"&gt;&lt;strong&gt;Go to the Activate Service page&lt;/strong&gt;&lt;/a&gt; and get the SDKAppID and SDKSecretKey, then fill them in the &lt;code&gt;TUIRoomKit/Web/example/vite-vue3-ts/src/config/basic-info-config.js&lt;/code&gt; file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Function Introduction
&lt;/h2&gt;

&lt;p&gt;After integrating TUIRoomKit, you can turn on the AI real-time subtitle function by clicking "AI Assistant" in the bottom bar. You will then see &lt;strong&gt;AI real-time subtitles&lt;/strong&gt;, which display the discussion during the meeting in the form of subtitles.&lt;/p&gt;

&lt;h2&gt;
  
  
  Function Access
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Start Local Backend Service
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Starting AI transcription requires your cloud account's SecretId and SecretKey, which are highly sensitive, so do not integrate this interface in the client. Instead, add the StartAITranscription call to your own backend service. The example here uses a Node.js service; if your backend is written in another language, you can use the API debugging tool to generate sample code in the corresponding language.&lt;/p&gt;

&lt;p&gt;Start the Node.js backend service. The client listens for the user entering the room and starts the AI transcription task via an HTTP request. Sample code: &lt;a href="https://web.sdk.qcloud.com/trtc/AITask/server.zip" rel="noopener noreferrer"&gt;&lt;strong&gt;click to download&lt;/strong&gt;&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
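The sample server reads its credentials from environment variables via dotenv. A `.env` file placed next to the server script might look like the sketch below; the values and the region are placeholders (get your real key pair from the Tencent Cloud console), and the variable names match the ones the sample server reads.

```shell
# .env — consumed by dotenv in the sample Node.js server.
# Placeholder values; obtain real keys at https://console.cloud.tencent.com/cam/capi
TENCENT_SECRET_ID=your-secret-id
TENCENT_SECRET_KEY=your-secret-key
# Example region; use the one matching your TRTC application.
TENCENT_REGION=ap-singapore
PORT=3000
```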

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dotenv&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;config&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tencentcloud&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;tencentcloud-sdk-nodejs-trtc&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cors&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cors&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;TrtcClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tencentcloud&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;trtc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;v20190722&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Get the Tencent RTC account SecretId and SecretKey from the environment variable&lt;/span&gt;
&lt;span class="c1"&gt;// You need to pass the Tencent RTC account SecretId and SecretKey into the entry parameter, and you also need to pay attention to the confidentiality of the key pair here.&lt;/span&gt;
&lt;span class="c1"&gt;// Code leaks can lead to SecretId and SecretKey leaks and threaten the security of all resources under the account.&lt;/span&gt;
&lt;span class="c1"&gt;// The following code example is for reference only, a more secure way of using the key is recommended, see: https://cloud.tencent.com/document/product/1278/85305&lt;/span&gt;
&lt;span class="c1"&gt;// The key can be obtained by going to the official console at https://console.cloud.tencent.com/cam/capi.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;secretId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TENCENT_SECRET_ID&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;secretKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TENCENT_SECRET_KEY&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TENCENT_REGION&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;clientConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;credential&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;secretId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;secretId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;secretKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;secretKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;httpProfile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;trtc.tencentcloudapi.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TrtcClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;clientConfig&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;cors&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/start&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;SdkAppId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;RoomId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;RoomIdType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;UserId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;UserSig&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;SdkAppId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SdkAppId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;RoomId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;RoomId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;RoomIdType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;RoomIdType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;TranscriptionParams&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;UserId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;UserId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;UserSig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;UserSig&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;StartAITranscription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;success&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/stop&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;TaskId&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;StopAITranscription&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;TaskId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TaskId&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PORT&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Server is running on port &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
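To call this service from the web client, you can wrap the two endpoints in small helpers. This is a minimal sketch, not part of the official demo: the base URL is an assumption (the sample server defaults to port 3000), and the request field names simply mirror the destructuring in the server's handlers above.

```javascript
// Base URL of the sample Node.js service above (assumed deployment).
const BASE_URL = 'http://localhost:3000';

// Build the JSON body for the /start endpoint. Field names mirror
// what the sample server's /start handler destructures from req.body.
function buildStartBody({ sdkAppId, roomId, userId, userSig, roomIdType = 1 }) {
  return {
    SdkAppId: sdkAppId,
    RoomId: roomId,
    RoomIdType: roomIdType,
    UserId: userId,
    UserSig: userSig,
  };
}

// Start an AI transcription task; resolves with the response data,
// which includes the TaskId needed to stop the task later.
async function startAITranscription(params) {
  const res = await fetch(`${BASE_URL}/start`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildStartBody(params)),
  });
  if (!res.ok) throw new Error(`start failed: ${res.status}`);
  return res.json();
}

// Stop a running transcription task by its TaskId.
async function stopAITranscription(taskId) {
  const res = await fetch(`${BASE_URL}/stop`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ TaskId: taskId }),
  });
  if (!res.ok) throw new Error(`stop failed: ${res.status}`);
  return res.json();
}
```

In a real app you would call `startAITranscription` once when the room is created (with the robot user's credentials) and `stopAITranscription` when the meeting ends, keeping the returned TaskId in your room state.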



&lt;h3&gt;
  
  
  Step 2: Enable the AI Assistant in RoomKit
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;RoomKit only processes the data for AI captioning and meeting summaries; the actual ASR task is started when the client user enters the room. You can adjust this timing according to your business needs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;template&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;conference&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="na"&gt;main-view&lt;/span&gt; &lt;span class="na"&gt;display-mode&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"permanent"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;conference&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="na"&gt;main-view&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;template&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;script&lt;/span&gt; &lt;span class="na"&gt;setup&lt;/span&gt; &lt;span class="na"&gt;lang&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"ts"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
import &lt;span class="si"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;roomService&lt;/span&gt; &lt;span class="si"&gt;}&lt;/span&gt; from '@tencentcloud/roomkit-web-vue3';
// Called before the conference-main-view component is onmounted.
roomService.setComponentConfig(&lt;span class="si"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;AIControl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;visible&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="si"&gt;}&lt;/span&gt;);
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;script&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: RoomKit Listens for the User Entering the Room and Calls the Node.js Service to Start AI Transcription
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;template&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;conference&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="na"&gt;main-view&lt;/span&gt; &lt;span class="na"&gt;display-mode&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"permanent"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;conference&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="na"&gt;main-view&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;template&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;script&lt;/span&gt; &lt;span class="na"&gt;setup&lt;/span&gt; &lt;span class="na"&gt;lang&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"ts"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
import &lt;span class="si"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;conference&lt;/span&gt; &lt;span class="si"&gt;}&lt;/span&gt; from '@tencentcloud/roomkit-web-vue3';
import &lt;span class="si"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;startAITranscription&lt;/span&gt; &lt;span class="si"&gt;}&lt;/span&gt; from '../http';
const handleAITask = (data: &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;roomId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;) =&amp;gt; &lt;span class="si"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;roomId&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nf"&gt;startAITranscription&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;RoomId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;roomId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;UserId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;robot&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// A robot user is required here, in the case of robot, this should not be the same as the userId of the user in the room, it is recommended to use robot.&lt;/span&gt;
    &lt;span class="na"&gt;UserSig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;xxx&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// The userSig of the robot&lt;/span&gt;
    &lt;span class="na"&gt;SdkAppId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sdkAppId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;RoomIdType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Room type is string room&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="si"&gt;}&lt;/span&gt;;
conference.on(RoomEvent.ROOM_JOIN, handleAITask);
conference.on(RoomEvent.ROOM_START, handleAITask);
onUnmounted(() =&amp;gt; &lt;span class="si"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;conference&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;off&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;RoomEvent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ROOM_JOIN&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;handleAITask&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;conference&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;off&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;RoomEvent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ROOM_START&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;handleAITask&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="si"&gt;}&lt;/span&gt;);
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;script&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
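&lt;p&gt;One detail the listener above leaves open: &lt;code&gt;startAITranscription&lt;/code&gt; resolves with the created task's ID, and &lt;code&gt;stopAITranscription&lt;/code&gt; needs that same ID later. A tiny bookkeeping helper keeps the two calls paired per room. This is a sketch; the exact response shape of the Node.js service is an assumption here, and only the &lt;code&gt;TaskId&lt;/code&gt; field name is taken from the &lt;code&gt;StopParams&lt;/code&gt; interface in &lt;code&gt;http.ts&lt;/code&gt;:&lt;/p&gt;

```typescript
// Sketch: remember each room's transcription TaskId so the task can be
// stopped exactly once when the room ends.
const roomTasks: { [roomId: string]: string } = {};

// Store the TaskId returned when transcription starts for a room.
function rememberTask(roomId: string, taskId: string): void {
  roomTasks[roomId] = taskId;
}

// Return the TaskId to stop, and forget it so the same task is never
// stopped twice.
function takeTask(roomId: string): string | undefined {
  const taskId = roomTasks[roomId];
  delete roomTasks[roomId];
  return taskId;
}
```

&lt;p&gt;Call &lt;code&gt;rememberTask&lt;/code&gt; with the TaskId from the start response, and pass the value returned by &lt;code&gt;takeTask&lt;/code&gt; to &lt;code&gt;stopAITranscription&lt;/code&gt; when the user leaves the room.&lt;/p&gt;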





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// http.ts&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;axios&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;baseURL&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http://localhost:3000&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Your Node.js service address.&lt;/span&gt;
  &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Request timeout&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;TranscriptionParams&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;SdkAppId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nl"&gt;RoomId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;RoomIdType&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nl"&gt;UserId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nl"&gt;UserSig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;StopParams&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;TaskId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Start the AI transcription task&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;startAITranscription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TranscriptionParams&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/start&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Stop the AI transcription task&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;stopAITranscription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;StopParams&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/stop&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
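&lt;p&gt;The Node.js service behind &lt;code&gt;/start&lt;/code&gt; isn't shown here. Whatever framework it uses, it should validate the body it receives before forwarding it to Tencent Cloud. A minimal sketch of that validation step follows; the field names mirror the &lt;code&gt;TranscriptionParams&lt;/code&gt; interface above and the string-room default matches the client call, but everything else is an assumption:&lt;/p&gt;

```typescript
// Sketch: validate a /start request body against the TranscriptionParams
// shape used by the client, before forwarding it to Tencent Cloud.
interface TranscriptionParams {
  SdkAppId: number;
  RoomId: string;
  RoomIdType?: number;
  UserId: string;
  UserSig: string;
}

// Returns normalized params, or throws with a message the service can
// surface as a 400 response.
function parseStartBody(body: any): TranscriptionParams {
  const { SdkAppId, RoomId, RoomIdType, UserId, UserSig } = body ?? {};
  if (typeof SdkAppId !== 'number') throw new Error('SdkAppId must be a number');
  if (typeof RoomId !== 'string' || RoomId === '') throw new Error('RoomId must be a non-empty string');
  if (typeof UserId !== 'string' || UserId === '') throw new Error('UserId must be a non-empty string');
  if (typeof UserSig !== 'string' || UserSig === '') throw new Error('UserSig must be a non-empty string');
  // Default to RoomIdType 1 (string room IDs), matching the client call.
  return { SdkAppId, RoomId, UserId, UserSig, RoomIdType: RoomIdType ?? 1 };
}
```

&lt;p&gt;Rejecting malformed bodies up front keeps bad requests from reaching the Tencent Cloud API, where the error would be harder to diagnose.&lt;/p&gt;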



</description>
    </item>
  </channel>
</rss>
