
Connie Leung

How I Automated YouTube Metadata Generation Using the TypeScript ADK

Google released the new Agent Development Kit (ADK) for TypeScript (TS) in December 2025. This is a major milestone for JavaScript developers who have long awaited a dedicated TypeScript SDK.

As of today, ADK supports Python, Go, Java, and TypeScript.

Now, I can use this new SDK to build a multi-agent system to solve a pain point of mine. As an active content creator, I publish a lot of YouTube videos and blog posts throughout the year.

Publishing a YouTube video involves a series of manual prompting steps in AI Studio:

  • Copy the public YouTube URL.
  • Prompt the new Gemini 3 model to generate a description for the URL provided.
  • Get the text response, update the description of the video and save the video.
  • Prompt Gemini 3 to generate some hashtags for the URL.
  • Get the hashtags, append them to the description of the video and save the video.
  • Prompt Gemini 3 to generate the timeline for the URL.
  • Get the timeline entries, append them to the description of the video and save the video.
  • Prompt Gemini 3 to generate a draft blog post in markdown format.
  • Save the markdown text from AI Studio to my Downloads folder.

The process is not only tedious but also boring when I produce, on average, 1.5 videos per week.
To get rid of the chore, I want to implement an agent to automate the generation of the description, hashtags, and timeline.
I provide the public YouTube URL once, and the root agent delegates to subagents to fetch the transcript, generate the description, hashtags, and timeline, and send the metadata to the recipient email address.

This is only a demo but it will be enhanced iteratively.

1. Prerequisites

  • A Gemini API Key that you can create in AI Studio
  • Node LTS
  • Basic knowledge of agents, tools, and memory, which could come from other frameworks such as LangGraph
  • Knowledge of TypeScript. The latest TS version 5 is sufficient.
  • VSCode or any IDE/Editor

2. Goal

My plan was to use the latest Gemini 3 Flash Preview model to build a multi-agent system that generates metadata from any public YouTube URL with a transcript. The metadata is then sent to a recipient email address.

The architecture involves a root agent that calls a sequential agent to perform the following:

  • Fetch a YouTube transcript
  • Generate a description, hashtags, and timeline in parallel
  • Send an email to a recipient with the metadata (description, hashtags, and timeline).

3. Architecture

The architecture comprises a root agent, some subagents, and custom tools.

Root Agent

  • Delegates the public YouTube URL and recipient email address to the sequential agent.
  • Uses the save_user_context_tool tool to save the URL and email address to the shared context. The shared context is temporary memory that is accessible to all agents in the same session.

Sub Agents

  1. Transcript Agent: Fetches the transcript of the YouTube URL and updates the shared context.
  2. Parallel Agent: A workflow agent that calls description, hashtags, and timeline subagents to generate the metadata based on the transcript.
  3. Email Agent: Simulates sending the metadata to the recipient email address.

4. Implementation

First, we have to install the required dependencies:

npm i @google/adk @google/adk-devtools @google/genai
npm i --save-exact zod@3.25.76
npm i @playzone/youtube-transcript
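
The build step later on runs npx tsc, so if TypeScript and the Node type definitions are not already dev dependencies in your project, you may want to add them as well (my assumption, not part of the original setup):

npm i -D typescript @types/node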

This is the final tree of the multi-agent system.

subagents/youtube-agents.ts defines both ParallelYouTubeAgent and SequentialYouTubeAgent. ParallelYouTubeAgent is declared as a private constant because it is referenced by SequentialYouTubeAgent only. However, SequentialYouTubeAgent is exported so that the root agent can import it.

src
├── agent.ts
├── output-key.const.ts
├── subagents
│   ├── send-email.agent.ts
│   ├── youtube-agents.ts
│   ├── youtube-description.agent.ts
│   ├── youtube-hashtags.agent.ts
│   ├── youtube-timeline.agent.ts
│   ├── youtube-transcript.agent.ts
│   └── youtube-transcript.tool.ts
└── tools.ts
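
All shared-context keys live in src/output-key.const.ts. The file itself is not reproduced in this post, but based on the keys the agents read and write, it looks roughly like this:

// src/output-key.const.ts (sketch reconstructed from the keys used below)
export const YOUTUBE_URL_KEY = 'youtube_url';
export const RECIPIENT_EMAIL_KEY = 'recipient_email_address';
export const TRANSCRIPT_KEY = 'youtube_transcript';
export const DESCRIPTION_KEY = 'youtube_description';
export const HASHTAGS_KEY = 'youtube_hashtags';
export const TIMELINE_KEY = 'youtube_timeline';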

I also defined some environment variables that the above agents need to generate text responses. This multi-agent system uses the Gemini Developer API instead of Gemini on Vertex AI for simplicity. If you live in Hong Kong, you will need to have a VPN enabled.

export GEMINI_MODEL_NAME="gemini-3-flash-preview"
export GEMINI_API_KEY="<YOUR_GEMINI_API_KEY>"
export GOOGLE_GENAI_USE_VERTEXAI=FALSE

The above environment variables are sufficient to make the agents run. You can find them in the .env.example file.

Step 1: Create a YouTube Transcript Agent

The YouTubeTranscriptTool custom tool extracts the video ID from the YouTube URL. Then, the tool calls the third-party API to fetch the transcript for the video ID.

import { FunctionTool } from '@google/adk';
import { YouTubeTranscriptApi } from '@playzone/youtube-transcript';
import { z } from 'zod';

const getTranscriptSchema = z.object({
    youtube_url: z.string().describe('The URL of the YouTube video.'),
});

type GetTranscriptInput = z.infer<typeof getTranscriptSchema>;

function extractVideoID(url: string) {
    console.log('youtube_url', url);
    const regExp = /^.*(?:(?:youtu.be\/|v\/|\/u\/\w\/|embed\/|watch\?))\??v?=?([^#&?]*).*/;
    const match = url.match(regExp);
    return (match && match?.[1]?.length === 11) ? match[1] : null;
}

async function fetchTranscript(url: string) {
    const videoID = extractVideoID(url);
    if (!videoID) {
        throw new Error('Unable to extract video ID from YouTube URL provided.');
    }
    console.log('videoID', videoID);

    const api = new YouTubeTranscriptApi();
    const transcript = await api.fetch(videoID);

    const transcriptText = transcript.snippets?.reduce(
        (acc, snippet) => { 
            const start = snippet.start;
            const end = snippet.start + snippet.duration;
            return `${acc}[${start}-${end}]${snippet.text} `;
        }, '').trim();

    return transcriptText;
}

async function getTranscript(youtube_url: string) {
    try {
        const transcript = await fetchTranscript(youtube_url);
        return { status: 'success', transcript };
    } catch (err) {
        console.log(err);
        return { status: 'error', message: 'Error getting YouTube transcript.' };
    }
};

export const YouTubeTranscriptTool = new FunctionTool({
    name: 'fetch_youtube_transcription',
    description: `Fetches the transcript of a YouTube video.`,
    parameters: getTranscriptSchema,
    execute: ({ youtube_url }: GetTranscriptInput) => getTranscript(youtube_url)
});

The YouTubeTranscriptAgent reads the YouTube URL from the shared context. Then, the agent executes the YouTubeTranscriptTool tool to fetch the transcript for this URL. When the tool is successful, the agent writes the transcript to the shared context with the key youtube_transcript.

import { LlmAgent } from '@google/adk';
import { YouTubeTranscriptTool } from './youtube-transcript.tool';
import { TRANSCRIPT_KEY, YOUTUBE_URL_KEY } from '../output-key.const';

process.loadEnvFile();
const model = process.env.GEMINI_MODEL_NAME || 'gemini-3-flash-preview';

// Defined in output-key.const.ts (see the sketch above):
// TRANSCRIPT_KEY = 'youtube_transcript', YOUTUBE_URL_KEY = 'youtube_url'

export const YouTubeTranscriptAgent = new LlmAgent({
    name: 'youtube_transcript_agent',
    model,
    description: 'Fetches the transcript of a public YouTube URL provided by the user.',
    instruction: `
        You are a helpful assistant that fetches the transcript for a public YouTube URL provided by the user.

        INSTRUCTIONS:
        1. Read '${YOUTUBE_URL_KEY}' from the shared context.
        2. Use the 'fetch_youtube_transcription' tool to get the transcript.
        3. If the tool returns an error status, respond with the error message.
        4. If the tool is successful, your FINAL response must be the RAW text of the transcript. Do not add conversational filler like "Here is the transcript:".
    `,
    outputKey: TRANSCRIPT_KEY,
    tools: [YouTubeTranscriptTool],
});

Step 2: Create a YouTube Description Agent

This is a specialist agent that reads the transcript from the shared context to generate an engaging description relevant to the video content.

The model generated three paragraphs, so I tweaked the prompt to prefix each paragraph with an asterisk (*) symbol. I then replace each '*' with a newline character, copy and paste the paragraphs into the description field of the YouTube video, and save it manually.
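
For reference, that manual cleanup is easy to script. Here is a minimal sketch (not part of the agent code; the helper name is my own):

// Hypothetical helper: convert the '*'-prefixed paragraphs back into newline-separated text.
function formatDescription(description: string): string {
    return description.replace(/\s*\*\s*/g, '\n').trim();
}

// formatDescription('*First paragraph. *Second paragraph.')
// => 'First paragraph.\nSecond paragraph.'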

The outputKey is DESCRIPTION_KEY, so this agent writes the description to the shared context with the key youtube_description.

import { LlmAgent } from '@google/adk';
import { DESCRIPTION_KEY, TRANSCRIPT_KEY } from '../output-key.const';

// Defined in output-key.const.ts (see the sketch above):
// DESCRIPTION_KEY = 'youtube_description', TRANSCRIPT_KEY = 'youtube_transcript'

process.loadEnvFile();
const model = process.env.GEMINI_MODEL_NAME || 'gemini-3-flash-preview';

export const YouTubeDescriptionAgent = new LlmAgent({
    name: 'youtube_description_agent',
    model,
    description: 'Generates a description based on the YouTube transcript.',
    instruction: `
        You are a helpful assistant that generates a description for a YouTube transcript.

        INSTRUCTIONS:
        1. Read '${TRANSCRIPT_KEY}' from the shared context to get the transcript.
        2. Based on the transcript in the shared context, generate a concise and engaging YouTube video description that accurately reflects the content of the video.
        3. Ensure the description is between 100-300 words.
        4. Ensure the description is in text format, and prefix each paragraph with a * symbol.
        5. The description should be presented from the perspective of the content creator summarizing their own video. Instead of "In this video, the presenter covers...", use "In this video, I cover...".
        6. Focus on making the description appealing to potential viewers, highlighting key points and value propositions of the video.
        7. If the transcript in the shared context contains an error message, respond with that error message.
        8. If successful, your final response must contain ONLY the description text. Do not include any JSON, tool call code, or conversational filler like "Here is your description."
    `,
    outputKey: DESCRIPTION_KEY,
 });

Step 3: Create a YouTube Hashtags Agent

This is a specialist agent that reads the transcript from the shared context to generate hashtags that begin with '#', are all lowercase, and are in alphabetical order.

The outputKey is HASHTAGS_KEY, so this agent writes the hashtags to the shared context with key youtube_hashtags.

import { LlmAgent } from '@google/adk';
import { HASHTAGS_KEY, TRANSCRIPT_KEY } from '../output-key.const';

// Defined in output-key.const.ts (see the sketch above):
// TRANSCRIPT_KEY = 'youtube_transcript', HASHTAGS_KEY = 'youtube_hashtags'

process.loadEnvFile();
const model = process.env.GEMINI_MODEL_NAME || 'gemini-3-flash-preview';

export const YouTubeHashtagsAgent = new LlmAgent({
    name: 'youtube_hashtags_agent',
    model,
    description: 'Generates hashtags based on the YouTube transcript.',
    instruction: `
        You are a helpful assistant that generates hashtags for a YouTube transcript.

        INSTRUCTIONS:
        1. Read '${TRANSCRIPT_KEY}' from the shared context to get the transcript.
        2. Based on the transcript in the shared context, generate a list of hashtags that accurately reflect the content of the video.
        3. Focus on making the hashtags appealing to potential viewers, highlighting key points and themes of the video.
        4. The hashtags should each start with the '#' symbol and contain no spaces.
        5. The hashtags should be in English, all lowercase, and sorted in ascending alphabetical order.
        6. The hashtags should be presented in string format, separated by spaces.
        7. If the transcript in the shared context contains an error message, respond with that error message.
        8. If successful, your final response must contain ONLY the hashtags. Do not include any JSON, tool call code, or conversational filler like "Here are your hashtags."
    `,
    outputKey: HASHTAGS_KEY,
 });

Step 4: Create a YouTube Timeline Agent

This is a specialist agent that reads the transcript from the shared context to generate a timeline relevant to the video content.

The timeline consists of three columns (Start, End, and Caption), with each entry on a new row.

Here are some example timeline entries:

|00:00|00:30| Introduction to ADK                  |
|00:30|01:30| Steps to create an agent from scratch|

The outputKey is TIMELINE_KEY, so this agent writes the timeline to the shared context with key youtube_timeline.

import { LlmAgent } from '@google/adk';
import { TIMELINE_KEY, TRANSCRIPT_KEY } from '../output-key.const';

// Defined in output-key.const.ts (see the sketch above):
// TRANSCRIPT_KEY = 'youtube_transcript', TIMELINE_KEY = 'youtube_timeline'

process.loadEnvFile();
const model = process.env.GEMINI_MODEL_NAME || 'gemini-3-flash-preview';

export const YouTubeTimelineAgent = new LlmAgent({
    name: 'youtube_timeline_agent',
    model,
    description: 'Generates a timeline with a caption based on the YouTube transcript.',
    instruction: `
        You are a helpful assistant that generates a timeline with a caption for a YouTube transcript.

        ANALYSIS STEPS (STRICT):
        1. Identify the very first timestamp (usually 00:00).
        2. Identify the very last timestamp mentioned in the transcript to determine the total length.
        3. Divide the video into 10-15 logical segments. 
        4. CRITICAL: Do not skip the middle or end of the video. The final row of your table MUST reach the end of the transcript.

        INSTRUCTIONS:
        1. Read '${TRANSCRIPT_KEY}' from the shared context to get the transcript.
        2. Based on the transcript in the shared context, create a chronological timeline that summarizes the video's flow.
        3. You must output a Markdown table with exactly three columns.
        4. The header must be: |Start|End|Caption|.
        5. The separator row must be: |-------|-----|---------|
        6. Each timeline entry should be in |Start|End|Caption format.
        7. Use descriptive, professional captions.
        8. Ensure the 'End' timestamp of one row matches the 'Start' timestamp of the next row.
        9. Timestamps must be in MM:SS or HH:MM:SS format.
        10. Each entry must be on a new row. Every row MUST end with a newline character. 
        11. If the transcript in the shared context contains an error message, respond with that error message.
        12. If successful, your final response must contain ONLY the timeline. Do not include any JSON, tool call code, or conversational filler like "Here is your timeline."
    `,
    outputKey: TIMELINE_KEY,
 });

Step 5: Create a Parallel Agent

YouTubeDescriptionAgent, YouTubeHashtagsAgent, and YouTubeTimelineAgent can execute in parallel to generate text responses. Therefore, I constructed a parallel agent to call them independently.

ParallelYouTubeAgent is a parallel agent, and its subAgents array includes the YouTubeDescriptionAgent, YouTubeHashtagsAgent, and YouTubeTimelineAgent specialist agents.

I defined it in subagents/youtube-agents.ts but did not export it, as it is only used by the sequential agent in the same file.

import { ParallelAgent } from '@google/adk';
import { YouTubeDescriptionAgent } from './youtube-description.agent';
import { YouTubeHashtagsAgent } from './youtube-hashtags.agent';
import { YouTubeTimelineAgent } from './youtube-timeline.agent';

const ParallelYouTubeAgent = new ParallelAgent({
    name: "parallel_youtube_agent",
    subAgents: [YouTubeDescriptionAgent, YouTubeHashtagsAgent, YouTubeTimelineAgent],
    description: "Runs multiple Youtube agents in parallel to gather description, hashtags, and timeline."
});

Add ParallelYouTubeAgent to the src/subagents/youtube-agents.ts file.

Step 6: Create a Send Email Agent

The custom SendEmailTool simulates sending an email and outputs a success message.

import { FunctionTool, LlmAgent } from '@google/adk';
import z from 'zod';

const sendEmailSchema = z.object({
    youtube_description: z.string().describe("YouTube video description"),
    youtube_timeline: z.string().describe("YouTube video timeline"),
    youtube_hashtags: z.string().describe("YouTube video hashtags"),
    email: z.string().describe("Recipient email address"),
});

type SendEmailInput = z.infer<typeof sendEmailSchema>;

export const SendEmailTool = new FunctionTool({
  name: 'send_email_tool',
  description: 'Sends an email with the provided details.',
  parameters: sendEmailSchema,
  execute: async ({ 
    youtube_description: description, 
    youtube_timeline: timeline, 
    youtube_hashtags: hashtags, 
    email 
  }: SendEmailInput) => {
    const metadata = {
      description,
      timeline,
      hashtags,
    };

    return {
      status: 'success',
      message: `Email sent to ${email} with metadata ${JSON.stringify(metadata)}`,
    };
  }
});

The SendEmailAgent agent reads the description, hashtags, timeline, and email address from the shared context, and provides them to the SendEmailTool tool to send an email.

// send-email.agent.ts (continued): SendEmailTool and the imports above live in the same file
import { DESCRIPTION_KEY, HASHTAGS_KEY, RECIPIENT_EMAIL_KEY, TIMELINE_KEY } from '../output-key.const';

process.loadEnvFile();
const model = process.env.GEMINI_MODEL_NAME || 'gemini-3-flash-preview';

export const SendEmailAgent =  new LlmAgent({
    name: "send_email_agent",
    model,
    description: 'Send an email to a recipient with the provided metadata.',
    instruction: `
        You are an email automation specialist. Your job is to take processed video metadata and send an email to a recipient with the provided details.

        ### INPUT DATA
        Retrieve the following from the shared context:
        1. Description: Found at '${DESCRIPTION_KEY}'
        2. Timeline: Found at '${TIMELINE_KEY}'
        3. Hashtags: Found at '${HASHTAGS_KEY}'
        4. Recipient Email Address: Found at '${RECIPIENT_EMAIL_KEY}'

        ### EMAIL REQUIREMENTS
        - Use the 'send_email_tool' to send an email with the retrieved metadata.
    `,
    tools: [SendEmailTool],
 });

Step 7: Create a Sequential Agent

The sequential agent is a workflow agent that executes its subagents one after another, in order.

import { ParallelAgent, SequentialAgent } from '@google/adk';
import { YouTubeTranscriptAgent } from './youtube-transcript.agent';
import { SendEmailAgent } from './send-email.agent';

/* The definition of ParallelYouTubeAgent in Step 5  */

export const SequentialYouTubeAgent = new SequentialAgent({
    name: 'sequential_youtube_agent',
    subAgents: [YouTubeTranscriptAgent, ParallelYouTubeAgent, SendEmailAgent],
    description: 'Runs YouTube agents sequentially to generate metadata based on the transcript.',
});

SequentialYouTubeAgent is also defined in the src/subagents/youtube-agents.ts file, so ParallelYouTubeAgent does not need to be imported.

The SequentialYouTubeAgent executes the subagents in steps. The YouTubeTranscriptAgent agent fetches the transcript and writes it to the shared context. The parallel agent uses the three specialist agents to generate a description, hashtags, and a timeline. Finally, the SendEmailAgent agent processes the metadata and sends an email to the recipient address.

Step 8: Update the Root Agent

The latest Node LTS is Node.js 24, which supports the built-in process.loadEnvFile function to load environment variables from .env.

If you are using an older Node.js version and cannot upgrade to the LTS version, install dotenv to load the variables.

npm i dotenv

import dotenv from 'dotenv';

dotenv.config();

The save_user_context_tool tool saves the value for a given key in the shared context.

import { FunctionTool, ToolContext } from '@google/adk';
import z from 'zod';

const saveUserContextSchema = z.object({
    key: z.string().describe("The key to store the data in the shared context"),
    value: z.any().describe("The data to store in the shared context"),
});

type SaveUserContextInput = z.infer<typeof saveUserContextSchema>;

export const SaveUserContextTool = new FunctionTool({
  name: 'save_user_context_tool',
  description: 'Saves user-specific information into the shared context for other agents to use.',
  parameters: saveUserContextSchema,
  execute: async ({ key, value }: SaveUserContextInput, toolContext?: ToolContext) => {
    if (toolContext) {
        toolContext.state.set(key, value);
    }

    return {
      status: 'success',
      message: `Saved '${value}' to ${key} in the shared context.`,
    };
  }
});

The root agent uses the tool to save youtube_url and recipient_email_address to the shared context. When both keys are present in the shared context, the root agent delegates to the SequentialYouTubeAgent agent to generate the metadata and simulate sending an email to the recipient address.

import { LlmAgent } from '@google/adk';
import { SequentialYouTubeAgent } from './subagents/youtube-agents';
import { SaveUserContextTool } from './tools';
import { RECIPIENT_EMAIL_KEY, YOUTUBE_URL_KEY } from './output-key.const';

// Defined in output-key.const.ts (see the sketch above):
// RECIPIENT_EMAIL_KEY = 'recipient_email_address', YOUTUBE_URL_KEY = 'youtube_url'

process.loadEnvFile();
const model = process.env.GEMINI_MODEL_NAME || 'gemini-3-flash-preview';

export const rootAgent = new LlmAgent({
    name: 'root_agent', // renamed so it does not clash with the youtube_transcript_agent subagent
    model,
    description: 'Generate details based on YouTube transcript.',
    instruction: `You are a helpful assistant that generates useful details for the YouTube URL provided and sends an email with the results.

        INSTRUCTIONS:
        1. Ask user for a public YouTube URL and recipient email address.
          - If the user provides an input, determine whether it is a URL or an email address.
          - If it is a URL, use the 'save_user_context_tool' tool to save it to the shared context with key '${YOUTUBE_URL_KEY}'.
          - If it is an email address, use the 'save_user_context_tool' tool to save it to the shared context with key '${RECIPIENT_EMAIL_KEY}'.
          - If the input is neither, ask the user to provide a valid YouTube URL or email address.
        2. If 'save_user_context_tool' completes, check the status of the response.
        3. If and only if the status is 'success', and '${YOUTUBE_URL_KEY}' and '${RECIPIENT_EMAIL_KEY}' are present in the shared context,
            - Delegate to 'sequential_youtube_agent'.
            - IMPORTANT: Tell the agent: "Please use the URL to get the transcript."
        4. If '${YOUTUBE_URL_KEY}' is not present in the shared context, ask the user to provide a valid YouTube URL.
        5. Once the 'sequential_youtube_agent' agent completes, check the status of the response.
            - When the status is 'success', confirm the completion of the agent.
            - When the status is 'error', respond with the error message.
    `,
    subAgents: [SequentialYouTubeAgent],
    tools: [SaveUserContextTool],
});

This is the end of the code walkthrough of the multi-agent system to generate metadata (description, hashtags, timeline) based on the transcript of a public YouTube URL.

5. Testing

"scripts": {
    "build": "npx tsc --project tsconfig.json",
    "cli": "npm run build && npx @google/adk-devtools run dist/agent.js",
    "web": "npm run build && npx @google/adk-devtools web --host 127.0.0.1 dist/agent.js"
},

In the package.json file, I added several script commands to build the agent code and launch the ADK web interface for local testing.

npm run build compiles the TypeScript code and emits the JavaScript files to the dist/ folder. The rootDir and outDir are defined in the compilerOptions of tsconfig.json, which specifies the root files and the options for compiling the TypeScript project.
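
The tsconfig.json itself is not shown in this post. Here is a minimal sketch that matches the build script, assuming src as the rootDir and dist as the outDir:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "CommonJS",
    "rootDir": "src",
    "outDir": "dist",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}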

When I typed npm run web, the local server started at http://127.0.0.1:8000. The default port is 8000, and --host fixes the host to 127.0.0.1. I opened a new tab at the address and provided the public YouTube URL and recipient email address. The root agent saved the inputs into the shared context and delegated the job to the sequential_youtube_agent agent for processing.

When the agent completed successfully, the web interface displayed "The email has been successfully sent to <email> with the video's description, timeline, and hashtags."

6. Conclusion

By leveraging the Agent Development Kit (ADK) for TypeScript, I successfully built my first YouTube metadata generation agent. Rather than pasting the public YouTube URL repeatedly into the AI Studio chat to ask Gemini 3 to generate a description, hashtags, timeline, and draft blog post, I provide the URL to the agent once; the agent packages the data in a JSON object and displays it in the web interface.

This streamlined process is more efficient, saves time, and is less error-prone than the manual process I have been performing for years. Can you imagine that I uploaded 36 videos to my YouTube channel in 2025 and did it manually all 36 times?

ADK for TS opens the door to agent development for me, and this agent is my first step toward workflow automation.

Other agents that I plan to add in the future:

  1. Create a blog agent to read the transcript and make a blog post draft in markdown format.
  2. Create a closed caption agent to generate an accurate SRT file in English based on the transcript.
  3. Create an agent to call the Advocu API to report the new content activity.
  4. (Optional) Create a cover image agent to create an image using Gemini 3 Pro Image model.

Enhancements:

  1. Fix the error handling. If the video does not have a transcript (e.g., my Cantonese videos), the agent throws an exception.
  2. Replace the fake logic in the send email agent with a Nodemailer call to send emails to my personal Gmail account (see the sketch after this list).
  3. Migrate to Cloud Run or Vertex AI Agent Space.
  4. Migrate to Vertex AI when I am able to resolve all the technical issues.
  5. Create a web app to call the agent directly with the YouTube URL and the recipient address.
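
For enhancement 2, here is a minimal sketch of what the real send logic could look like with Nodemailer, assuming Gmail SMTP and credentials in GMAIL_USER and GMAIL_APP_PASSWORD environment variables (both names are my placeholders):

import nodemailer from 'nodemailer';

// Sketch only: a Gmail SMTP transport with an app password read from the environment.
const transporter = nodemailer.createTransport({
    service: 'gmail',
    auth: { user: process.env.GMAIL_USER, pass: process.env.GMAIL_APP_PASSWORD },
});

// Could replace the simulated execute function of SendEmailTool.
async function sendMetadataEmail(email: string, description: string, hashtags: string, timeline: string) {
    await transporter.sendMail({
        from: process.env.GMAIL_USER,
        to: email,
        subject: 'YouTube video metadata',
        text: `${description}\n\n${hashtags}\n\n${timeline}`,
    });
    return { status: 'success', message: `Email sent to ${email}.` };
}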

Resources

Note: Google Cloud credits are provided for this project.
