
Connie Leung

Building an AI Creative Suite with Angular, Gemini, Imagen and Veo

This blog post is about my Angular AI Creative Suite side project, which uses Gemini 2.5 Flash Lite, Imagen 4 Fast, and Veo 2 to generate stories, chat with a chatbot, and create images and videos. It focuses on securing the Gemini API key, injecting the Gemini SDK with dependency injection, and applying techniques to sanitize and render streamed responses.

Securing the Gemini API Key

The Angular AI Creative Suite uses the Gemini API key to call different Gemini APIs, but we should avoid hardcoding the API key in the application.

Some tutorials recommend storing API keys in environment.ts, but this approach makes it easy to accidentally commit sensitive information to GitHub.

Instead, I define build parameters in angular.json that replace constants in the code with the actual values at build time.

1. The angular.json Configuration

In the angular.json file, we define build-time constants for the API key, the max output tokens for text generation, temperature, top K, top P, the max output tokens for the chatbot, the Gemini model name, the image model name, and the video model name. GEMINI_API_KEY is a placeholder value that is overridden on the command line when the application is built.

// angular.json
"build": {
  "builder": "@angular/build:application",
  "options": {
    // ... other options
    "define": {
      "GEMINI_API_KEY": "YOUR_API_KEY",
      "MAX_OUTPUT_TOKEN": "4096",
      "TEMPERATURE": "0.3",
      "TOP_K": "40",
      "TOP_P": "0.75",
      "CHAT_MAX_OUTPUT_TOKEN": "2048",
      "VIDEO_MODEL_NAME": "\"veo-2.0-generate-001\"",
      "IMAGE_MODEL_NAME": "\"imagen-4.0-fast-generate-001\"",
      "GEMINI_MODEL_NAME": "\"gemini-2.5-flash-lite\""
    }
  },
}

2. Build the Application with GEMINI_API_KEY

The actual API key is injected when the application is built or served, typically via a command line environment variable, which is then substituted into the placeholder defined in angular.json. This ensures the key is never stored directly in the source code.

The command to build the application would look like this (using shell variable substitution):

# In your terminal
export GEMINI_API_KEY="AIzaSy...your-secret-key..."

# Build command using the defined variables
ng build --define GEMINI_API_KEY=\"$GEMINI_API_KEY\" 

Subsequently, ng build uses the default model parameters (model names, temperature, top K, top P, and max output tokens) from angular.json and the actual API key to build the application.

The bundle can be found in the dist folder.

3. Declare Build Constants in Angular Application

For each build parameter, a constant is declared in the types.d.ts file. Then, the application can use these parameters to call the Gemini AI SDK to generate responses.

// types.d.ts
declare const GEMINI_API_KEY: string;
declare const MAX_OUTPUT_TOKEN: string;
declare const TEMPERATURE: string;
declare const TOP_K: string;
declare const TOP_P: string;
declare const CHAT_MAX_OUTPUT_TOKEN: string;
declare const VIDEO_MODEL_NAME: string;
declare const IMAGE_MODEL_NAME: string;
declare const GEMINI_MODEL_NAME: string;

Dependency Injection of the Gemini API

The application architecture has components for presentation, services that implement component logic, and a shared Gemini service that performs the AI tasks.

The Gemini Service uses Injection Tokens and Providers to inject the Gemini AI SDK and the Gemini chat instance.

1. Gemini Injection Tokens

// ai-injection-tokens.const.ts

import { InjectionToken } from '@angular/core';
import { GoogleGenAI, Chat, GenerateContentConfig } from '@google/genai';

export const GEMINI_AI_TOKEN 
    = new InjectionToken<GoogleGenAI>('GoogleGenAIToken');
export const GEMINI_CHAT_TOKEN 
    = new InjectionToken<Chat>('GoogleGenAIChatToken');
export const GEMINI_TEXT_CONFIG_TOKEN 
    = new InjectionToken<GenerateContentConfig>('GoogleGenAITextConfigToken');

2. Gemini Providers

// gemini.provider.ts
// GEMINI_API_KEY is replaced at build-time from angular.json define
import { EnvironmentProviders, inject, makeEnvironmentProviders } from '@angular/core';
import { GoogleGenAI } from '@google/genai';
import { GEMINI_AI_TOKEN, GEMINI_CHAT_TOKEN, GEMINI_TEXT_CONFIG_TOKEN } from '../constants/ai-injection-tokens.const';

const apiKey = GEMINI_API_KEY;

export function provideGoogleGeminiAi(): EnvironmentProviders {
  return makeEnvironmentProviders([
    {
      provide: GEMINI_AI_TOKEN,
      useFactory: () => {
        return new GoogleGenAI({ apiKey });
      },
    },
    {
      provide: GEMINI_TEXT_CONFIG_TOKEN,
      useValue:  {
        maxOutputTokens: +MAX_OUTPUT_TOKEN,
        temperature: +TEMPERATURE,
        topK: +TOP_K,
        topP: +TOP_P,
      }
    },
    {
      provide: GEMINI_CHAT_TOKEN,
      useFactory: () => {
        const ai = inject(GEMINI_AI_TOKEN);
        return ai.chats.create({
          model: GEMINI_MODEL_NAME || 'gemini-2.5-flash-lite',
          config: {
            systemInstruction: 'You are a helpful and creative assistant. Please provide answers, maximum 250 words.',
            temperature: +TEMPERATURE,
            topK: +TOP_K,
            topP: +TOP_P,
            maxOutputTokens: +CHAT_MAX_OUTPUT_TOKEN,
          },
        });
      },
    }
  ]);
}

The provideGoogleGeminiAi provider function uses makeEnvironmentProviders to create an array of providers. They construct a Gemini AI instance, a generation configuration, and a chat instance.

useFactory creates the Gemini AI instance and the chat instance through factory functions, while useValue provides the configuration object used for text generation.

3. Register the Gemini Provider

// app.config.ts

import { ApplicationConfig } from '@angular/core';
import { provideGoogleGeminiAi } from './gemini/providers/gemini.provider';

export const appConfig: ApplicationConfig = {
  providers: [
    provideGoogleGeminiAi(),
  ],
};

Add provideGoogleGeminiAi() to the providers array of appConfig to register the providers.

4. Gemini Service

GeminiService is a service that contains methods for generating text, creating images and videos, and sending chat messages. The service lets me experiment with the capabilities of Gemini 2.5 Flash Lite, Imagen 4 Fast, and Veo 2 (Veo 3 is very expensive).

// gemini.service.ts

const POLLING_PERIOD = 10000;
const apiKey = GEMINI_API_KEY;

@Injectable({
  providedIn: 'root',
})
export class GeminiService {
  private http = inject(HttpClient);
  private readonly ai = inject(GEMINI_AI_TOKEN);
  private readonly chat = inject(GEMINI_CHAT_TOKEN);
  private readonly genTextConfig = inject(GEMINI_TEXT_CONFIG_TOKEN);

  async generateTextStream(contents: string) {
      return await this.ai.models.generateContentStream({
        model: GEMINI_MODEL_NAME,
        contents,
        config: this.genTextConfig
      });
  }

The generateTextStream method passes the user prompt to the SDK and returns a response stream for the generated story.

  async generateImages(prompt: string, config: GenerateImagesConfig)
  : Promise<GeneratedData[]> {
      const response = await this.ai.models.generateImages({
          model: IMAGE_MODEL_NAME,
          prompt,
          config: {
              ...config,
              outputMimeType: 'image/png',
          },
      });

      const images = response.generatedImages || [];

      return /* ... Base64 inline data ... */
   }

The generateImages method passes the user prompt to the SDK to generate PNG images and converts the image bytes into Base64 inline data.
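
The conversion itself is elided in the snippet above. A minimal sketch of that step, assuming the app's GeneratedData type holds a MIME type and a data URL (the real type is not shown in this post), could look like this:

// to-inline-data.ts (sketch only)
import { GeneratedImage } from '@google/genai';

// Hypothetical shape of GeneratedData; the actual type may differ.
interface GeneratedData {
  mimeType: string;
  data: string;
}

// Convert the SDK's generated images into Base64 data URLs that can be bound to <img [src]>.
function toInlineData(images: GeneratedImage[]): GeneratedData[] {
  return images
    .filter((generated) => !!generated.image?.imageBytes)
    .map((generated) => ({
      mimeType: 'image/png',
      // imageBytes is already a Base64-encoded string, so only the data URL prefix is added.
      data: `data:image/png;base64,${generated.image?.imageBytes}`,
    }));
}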

  async sendChatMessageStream(message: string) {
    try {
      return await this.chat.sendMessageStream({ message });
    } catch (error) {
      throw new Error(this.getErrorMessage(error));
    }
  }

The sendChatMessageStream method sends a chat message and returns the AI's streamed response.

async generateVideos(prompt: string, config: GenerateVideosConfig, imageBytes?: string): Promise<string[]> {
      const image = imageBytes ? { 
          image: { 
             imageBytes, 
             mimeType: 'image/png' 
          } 
      } : undefined;
      const ranges: number[] = Array(config.numberOfVideos || 1).fill(1);

      const downloadLinks = await ranges.reduce(async (prev) => {
        const request: GenerateVideosParameters = {
          model: VIDEO_MODEL_NAME,
          prompt,
          config: { ...config, numberOfVideos: 1 },
          ...image
        };

        /* ... poll the operation until the video is ready ... */
      }, Promise.resolve([]) as Promise<string[]>);

      if (!downloadLinks.length) {
        return downloadLinks;
      }

      const blobUrls$ = /* ... retrieve videos by API key ... */
      return await firstValueFrom(blobUrls$);
  }
}

The generateVideos method issues requests to generate videos. Video generation is an expensive and time-consuming operation; therefore, each operation is polled every 10 seconds until all the videos are ready.
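
The polling itself is elided above. A minimal sketch of generating and polling a single video, assuming the SDK's operations.getVideosOperation helper behaves as documented, might look like this:

// generate-one-video.ts (sketch only)
import { GoogleGenAI, GenerateVideosParameters } from '@google/genai';

const POLLING_PERIOD = 10000;

// Start one video generation operation, poll it every 10 seconds until it completes,
// and collect the download links of the generated videos.
async function generateOneVideo(ai: GoogleGenAI, request: GenerateVideosParameters): Promise<string[]> {
  let operation = await ai.models.generateVideos(request);

  while (!operation.done) {
    // Video generation takes a while, so wait before asking for the status again.
    await new Promise((resolve) => setTimeout(resolve, POLLING_PERIOD));
    operation = await ai.operations.getVideosOperation({ operation });
  }

  // Each generated video exposes a URI that can later be downloaded with the API key.
  return (operation.response?.generatedVideos ?? [])
    .map((generated) => generated.video?.uri)
    .filter((uri): uri is string => !!uri);
}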

Best Practices for Rendering Streamed Responses

Both the story generator and the chatbot render streamed responses to improve UX and reduce perceived latency. Each chunk is sanitized to ensure the content is not harmful. Moreover, the chatbot returns Markdown, so it is converted to HTML, which the template then displays.

npm install dompurify marked streaming-markdown

1. Handling Text Streaming

// parser.service.ts

import { Injectable } from '@angular/core';
import DOMPurify from 'dompurify';
import * as smd from 'streaming-markdown';

@Injectable({
    providedIn: 'root'
})
export class ParserService {
    parser: smd.Parser | undefined = undefined;
    chunks = '';

    initParser(element: HTMLElement): void {
      /* ... initialize parser ... */
    }

    hasParser(): boolean {
      return !!this.parser;
    }

    writeToElement(chunk: string) {
        /* ... validation logic ... */

        this.chunks = this.chunks + chunk;
        // Sanitize the accumulated text only to detect harmful content;
        // DOMPurify.removed lists anything that was stripped.
        DOMPurify.sanitize(this.chunks);
        if (DOMPurify.removed.length) {
            smd.parser_end(this.parser);
            return;
        }

        smd.parser_write(this.parser, chunk);
    }

    flushAll() {
      /* ... reset parser and chunks ... */
    }
}

The ParserService appends the chunk to the accumulated text before sanitization. When DOMPurify removes nothing, the streaming-markdown parser writes the chunk into the DOM element; otherwise, the parser is closed and the chunk is discarded.
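
The parser initialization is elided above. A minimal sketch of initParser for the ParserService, assuming streaming-markdown's default DOM renderer is attached to the host element, could be:

    // Sketch only: a possible initParser implementation for the ParserService above.
    initParser(element: HTMLElement): void {
      // Render the parsed markdown directly into the host element as chunks arrive.
      const renderer = smd.default_renderer(element);
      this.parser = smd.parser(renderer);
      this.chunks = '';
    }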

// story-generator.service.ts

@Injectable({
  providedIn: 'root'
})
export class StoryService {
  private readonly geminiService = inject(GeminiService);
  private readonly promptFormService = inject(PromptFormService);

  private readonly historyKey = 'story'; 
  readonly prompt = this.promptFormService.prompt;

  async generateStory(
    { lengthDescription: length, genre }: StoryParams,
    chunkSignal: WritableSignal<string>,
  ): Promise<void> {
      const fullPrompt = /* ... construct the prompt ... */
      const stream = await this.geminiService.generateTextStream(fullPrompt);

      for await (const chunk of stream) {
        const chunkText = chunk.candidates?.[0].content?.parts?.[0].text || '';
        const markdownText = chunkText.replace(/\n\n/g, '<br><br>');
        chunkSignal.set(markdownText);
      }
  }
}

The generateStory method retrieves a stream from GeminiService. The for await loop iterates over the stream to obtain each chunk and extract its text. The \n\n in the Markdown is replaced with <br><br>, and markdownText is written to chunkSignal.

// story-generator.component.ts
@Component({
  selector: 'app-story-generator',
  imports: [FormsModule],
  template: `
    <textarea [(ngModel)]="prompt" placeholder="Enter your prompt..."></textarea>
    <button (click)="generateStory()">Generate Story</button>

    <!-- The container where the streamed content will be rendered -->
    <div #storyHolder class="story-container">
      <!-- Signal value is updated on every stream chunk -->
    </div>
  `,
  styleUrl: '../ui/tailwind-utilities.css',
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export default class StoryGeneratorComponent {
  private readonly storyService = inject(StoryService);
  private readonly parserService = inject(ParserService);

  storyChunk = signal('');

  storyHolder = viewChild<ElementRef<HTMLDivElement>>('storyHolder');
  storyHolderElement = computed(() => this.storyHolder()?.nativeElement);
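  // The prompt, length, and genre signals are omitted for brevity.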

  constructor() {
    this.prompt.set('A knight who is afraid of the dark');
    effect(() => {
      const element = this.storyHolderElement();
      if (element && !this.parserService.hasParser()) {
        this.parserService.initParser(element);
      }
    });

    afterRenderEffect({
      write: () => this.parserService.writeToElement(this.storyChunk())
    });
  }

  async generateStory(): Promise<void> {
    const params = {
      lengthDescription: this.length(),
      genre: this.genre()
    };
    await this.storyService.generateStory(
      params,
      this.storyChunk,
    );
  }
}

The generateStory method calls the storyService to generate the content asynchronously. When the storyHolder element becomes available, the effect callback initializes the parser. When the storyChunk signal receives a streamed chunk, afterRenderEffect writes it to the element in the write phase.

2. Handling Markdown

// chat.service.ts

@Injectable({
  providedIn: 'root'
})
export class ChatBotService  {
  private readonly geminiService = inject(GeminiService);

  #messages = signal([INITIAL_BOT_MESSAGE]);
  messages = this.#messages.asReadonly();
  #message = signal('');

  async sendMessage(userMessage: string): Promise<void> {
    /* ... add user message to chat ... */

    const newId = this.#messages().length;

    try {
      // Get bot response
      const botStream = await this.geminiService
          .sendChatMessageStream(userMessage);

      for await (const chunk of botStream) {
        const chunkText = chunk.candidates?.[0].content?.parts?.[0].text || '';
        DOMPurify.sanitize(this.#message() + chunkText);
        if (DOMPurify.removed.length) {
          throw new Error('Received unsafe content. Aborting.');
        }

        this.#message.update((prev) => prev + chunkText);
      }

      const parsedMarkdown = await marked.parse(this.#message());
      this.#messages.set([...this.#messages(), { id: newId, sender: 'bot', text: parsedMarkdown }]);
    } finally {
      this.#message.set('');
    }
  }
}

The sendMessage method retrieves the chatbot stream from GeminiService. The for await loop iterates over the botStream to obtain each chunk and extract its text. DOMPurify sanitizes the accumulated AI message plus the new chunk. When nothing harmful is removed, the #message signal accumulates the chunk. Finally, the marked library converts the Markdown to HTML, and the AI message is appended to the #messages array signal.

// chat.component.ts
@Component({
  selector: 'app-chatbot',
  imports: [FormsModule],
  template: `
    <div>
      <app-chatbot-messages #messagesContainer [messages]="messages()" />
      <app-chatbot-input (sendMessageClicked)="sendMessage($event)" />
    </div>
  `,
  styleUrls: [
    '../ui/tailwind-utilities.css',
    './chatbot.component.css',
  ],
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export default class ChatbotComponent {
  private readonly chatbotService = inject(ChatBotService);

  messages = this.chatbotService.messages;

  async sendMessage(userMessage: string): Promise<void> {
    await this.chatbotService.sendMessage(userMessage);
  }
}

The sendMessage method calls the chatbotService to submit the user message to Gemini 2.5 Flash Lite to generate an answer. The messages signal stores all the user and AI messages, which are displayed in the chatbot messages component.
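
The chatbot messages child component is not shown in this post. A minimal sketch, assuming the component receives a messages input and renders each message with an [innerHTML] binding for the parsed Markdown, might look like this:

// chatbot-messages.component.ts (sketch only; the actual component may differ)
import { ChangeDetectionStrategy, Component, input } from '@angular/core';

// Assumed message shape, matching what ChatBotService stores.
interface ChatMessage {
  id: number;
  sender: 'user' | 'bot';
  text: string;
}

@Component({
  selector: 'app-chatbot-messages',
  template: `
    @for (message of messages(); track message.id) {
      <!-- The text was sanitized with DOMPurify and parsed with marked before being stored. -->
      <div [class]="message.sender" [innerHTML]="message.text"></div>
    }
  `,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class ChatbotMessagesComponent {
  messages = input<ChatMessage[]>([]);
}

Angular's built-in sanitizer also runs on [innerHTML] bindings, which adds another layer of defence on top of the DOMPurify checks performed while streaming.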

This application demonstrates how to integrate Gemini and GenMedia models into a modern, AI-rich Angular application.
