I read a blog post by Patrick Loeber in which he shared examples of using Gemini 2.5 Flash Image, also known by its code name Nano Banana, to create, edit, fuse, restore, and transform images.
I ported these practical examples to Angular, Firebase AI Logic, and Nano Banana, and added my own creativity: custom prompts that generate a new image from an uploaded one.
Key Technologies
- Frontend: Angular 20 (The application framework).
- AI Model: Gemini 2.5 Flash Image (Nano Banana) - used for all image generation and editing tasks.
- Backend/Services: Firebase AI Logic (for seamless integration of Gemini within a Firebase ecosystem).
- Styling: Tailwind CSS.
Create a Firebase project
- Navigate to the Firebase Console to create a new Firebase project.
- Add AI Logic to the project and register a Web App in it. The web app setup shows sample code to install firebase and the configuration to initialize a Firebase application.
npm install firebase dotenv
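For reference, the initialization snippet that the console generates after registering a web app looks roughly like this (the values are placeholders, not real keys):

// A sketch of the Firebase console's initialization snippet.
import { initializeApp } from 'firebase/app';

const firebaseConfig = {
  apiKey: '<Firebase API Key>',
  authDomain: '<Firebase Auth Domain>',
  projectId: '<Firebase Project ID>',
  storageBucket: '<Firebase Storage Bucket>',
  messagingSenderId: '<Firebase Messaging Sender ID>',
  appId: '<Firebase App ID>',
};

const app = initializeApp(firebaseConfig);

This project does not paste the config inline; instead, the keys live in environment variables, as described next.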
Environment Variables Setup
The Angular app keeps the API keys and IDs securely in the .env file. The project has a template file, .env.example, that developers can copy and fill in with their Firebase keys and IDs. GEMINI_MODEL_NAME defaults to gemini-2.5-flash-image, the Gemini model responsible for image creation and editing tasks.
// .env.example
FIREBASE_API_KEY=<Firebase API Key>
FIREBASE_AUTH_DOMAIN=<Firebase Auth Domain>
FIREBASE_PROJECT_ID=<Firebase Project ID>
FIREBASE_STORAGE_BUCKET=<Firebase Storage Bucket>
FIREBASE_MESSAGING_SENDER_ID=<Firebase Messaging Sender ID>
FIREBASE_APP_ID=<Firebase App ID>
FIREBASE_MEASUREMENT_ID=<Firebase Measurement ID>
GEMINI_MODEL_NAME="gemini-2.5-flash-image" # The Nano Banana model
The Angular app runs in the browser, so it does not have access to the process.env object. A small Node.js script, cli.js, reads these environment variables and exports them into a JSON file (src/app/firebase-ai.json) so the Angular application can import the configuration.
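A minimal sketch of what cli.js could look like (the actual script in the repository may differ); it assumes dotenv loads the .env file and that the output shape matches the { app, geminiModelName } object the Firebase provider destructures later:

// cli.js — a minimal sketch; the real script may differ.
require('dotenv').config();
const { writeFileSync } = require('node:fs');

// Group the Firebase options under `app` and keep the model name separate,
// matching the shape the provider destructures.
const config = {
  app: {
    apiKey: process.env.FIREBASE_API_KEY,
    authDomain: process.env.FIREBASE_AUTH_DOMAIN,
    projectId: process.env.FIREBASE_PROJECT_ID,
    storageBucket: process.env.FIREBASE_STORAGE_BUCKET,
    messagingSenderId: process.env.FIREBASE_MESSAGING_SENDER_ID,
    appId: process.env.FIREBASE_APP_ID,
    measurementId: process.env.FIREBASE_MEASUREMENT_ID,
  },
  geminiModelName: process.env.GEMINI_MODEL_NAME,
};

// Write the config where the Angular app can import it.
writeFileSync('src/app/firebase-ai.json', JSON.stringify(config, null, 2));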
Finally, add .env and firebase-ai.json to the .gitignore file so that developers do not commit these files accidentally.
// .gitignore
.env
firebase-ai.json
TailwindCSS Setup
First, install the dependencies.
npm install tailwindcss @tailwindcss/postcss
Next, follow the TailwindCSS documentation and create .postcssrc.json.
touch .postcssrc.json
// .postcssrc.json
{
  "plugins": {
    "@tailwindcss/postcss": {}
  }
}
Finally, import TailwindCSS in the global stylesheet, styles.css.
// styles.css
@import "tailwindcss";
Integrating with Firebase AI Logic and Nano Banana
I am going to create a Firebase provider so that I can use dependency injection to inject a GenerativeModel into a Firebase service.
1. Initialize an Injection Token
// ai/constants/firebase.constant.ts
import { InjectionToken } from '@angular/core';
import { GenerativeModel } from 'firebase/ai';
export const NANO_BANANA_MODEL = new InjectionToken<GenerativeModel>('NANO_BANANA_MODEL');
This injection token allows the GenerativeModel to be injected wherever it is needed.
2. Firebase Provider
// ai/providers/firebase.provider.ts
import { makeEnvironmentProviders } from '@angular/core';
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from 'firebase/ai';
import { initializeApp } from 'firebase/app';
import { NANO_BANANA_MODEL } from '../constants/firebase.constant';
import firebaseConfig from '../firebase-ai.json';

const { app, geminiModelName = 'gemini-2.5-flash-image' } = firebaseConfig;
const firebaseApp = initializeApp(app);
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

const DEFAULT_CONFIG = {
  model: geminiModelName,
  generationConfig: {
    // Ask the model to return a single image candidate.
    responseModalities: [ResponseModality.IMAGE],
    candidateCount: 1,
  },
};

export function provideFirebase() {
  return makeEnvironmentProviders([
    {
      provide: NANO_BANANA_MODEL,
      useFactory: () => getGenerativeModel(ai, DEFAULT_CONFIG),
    },
  ]);
}
The provideFirebase function initializes the Firebase app and registers the NANO_BANANA_MODEL token, which resolves to a Gemini 2.5 Flash Image model that outputs only images.
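To wire it up, call provideFirebase in the application's environment providers. A sketch assuming a standalone app bootstrapped with an app.config.ts (the file name and any other providers are assumptions):

// app.config.ts — a sketch; other providers omitted.
import { ApplicationConfig } from '@angular/core';
import { provideFirebase } from './ai/providers/firebase.provider';

export const appConfig: ApplicationConfig = {
  providers: [provideFirebase()],
};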
3. Firebase Service (Core AI Logic)
The FirebaseService handles the image processing, base64 encoding of input images, and calls to the Nano Banana model.
// ai/services/firebase.service.ts
import { InlineDataPart } from 'firebase/ai';

async function fileToGenerativePart(file: File) {
  return await new Promise<InlineDataPart>((resolve, reject) => {
    const reader = new FileReader();
    // Strip the "data:<mime>;base64," prefix and keep only the raw base64 payload.
    reader.onloadend = () => resolve({
      inlineData: {
        data: (reader.result! as string).split(',')[1],
        mimeType: file.type,
      },
    });
    // Reject on read failure so Promise.allSettled can record it.
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file);
  });
}
The fileToGenerativePart function reads a file, extracts the base64-encoded data, and resolves to an InlineDataPart object.
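To make the split concrete, here is the shape of the string that FileReader.readAsDataURL produces (the base64 payload below is illustrative):

// readAsDataURL yields "data:<mimeType>;base64,<payload>".
const dataUrl = 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUg==';
// split(',')[1] keeps only the raw base64 payload that the model expects.
const base64Data = dataUrl.split(',')[1]; // "iVBORw0KGgoAAAANSUhEUg=="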
// ai/services/firebase.service.ts
async function resolveImageParts(imageFiles: File[]) {
  if (!imageFiles.length) {
    return [];
  }

  const imagePartResults = await Promise.allSettled(
    imageFiles.map((file) => fileToGenerativePart(file))
  );

  return imagePartResults
    .filter((result) => result.status === 'fulfilled')
    .map((result) => result.value);
}
The resolveImageParts function checks the status of the settled promises and filters out the failed ones.
// ai/services/firebase.service.ts
import { Injectable, inject } from '@angular/core';
import { Part } from 'firebase/ai';
import { NANO_BANANA_MODEL } from '../constants/firebase.constant';

@Injectable({ providedIn: 'root' })
export class FirebaseService {
  private readonly geminiModel = inject(NANO_BANANA_MODEL); // Injected Nano Banana model

  private async getBase64Image(parts: Array<string | Part>) {
    const result = await this.geminiModel.generateContent(parts);
    const inlineDataParts = result.response.inlineDataParts();
    if (inlineDataParts?.[0]) {
      const { data, mimeType } = inlineDataParts[0].inlineData;
      return `data:${mimeType};base64,${data}`;
    }
    throw new Error('Error in generating the image.');
  }

  async generateImage(prompt: string, imageFiles: File[]) {
    try {
      // Convert the uploaded files to inline data parts.
      const imageParts = await resolveImageParts(imageFiles);
      // Call Firebase AI Logic to generate an image and encode it as base64 inline data.
      return await this.getBase64Image([prompt, ...imageParts]);
    } catch (err) {
      console.error('Prompt or candidate was blocked:', err);
      if (err instanceof Error) {
        throw err;
      }
      throw new Error('Error in generating the image.');
    }
  }
}
The generateImage method calls Firebase AI Logic to generate an image and encodes it as base64 inline data.
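A hypothetical usage sketch of the service in a component (the component, selector, and prompt are illustrative, not from the project): the returned data URL binds straight to an img element.

// demo.component.ts — a sketch; names are assumptions.
import { Component, inject, signal } from '@angular/core';
import { FirebaseService } from './ai/services/firebase.service';

@Component({
  selector: 'app-demo',
  template: `<img [src]="generatedImageUrl()" alt="Generated image" />`,
})
export class DemoComponent {
  private readonly firebaseService = inject(FirebaseService);
  readonly generatedImageUrl = signal('');

  // `file` would come from an <input type="file"> change event.
  async colorize(file: File): Promise<void> {
    const dataUrl = await this.firebaseService.generateImage('Colorize this photo.', [file]);
    // The base64 data URL renders directly as the <img> src.
    this.generatedImageUrl.set(dataUrl);
  }
}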
User Interfaces
The frontend is built with Angular and has two main components that highlight several Nano Banana model use cases.
1. Setup Feature Configuration
// features.json
{
  "features": {
    "create": {
      "name": "Image Creation",
      "path": "/editor/create",
      "buttonText": "Create",
      "loadingText": "Creating image...",
      "description": "Generate new and unique images from text prompts using powerful AI models."
    },
    "restoration": {
      "name": "Photo Restoration",
      "path": "/predefined-prompt/restoration",
      "mode": "single",
      "buttonText": "Restore",
      "loadingText": "Restoring photo...",
      "description": "Bring old and damaged photos back to life by removing scratches and restoring color.",
      "customPrompt": "Restore this photograph to its original quality. Remove scratches, enhance details, correct colors, and make it look as close to the original as possible when it was first taken."
    }
  }
}
The JSON file defines the features that drive the navigation dropdown, the user interface and button texts, and the user prompt dynamically.
2. Route Resolver
// app.routes.ts
export const routes: Routes = [
  {
    path: 'predefined-prompt/:featureId',
    loadComponent: () => import('./predefined-prompt-editor/predefined-prompt-editor.component'),
    title: featureNameResolver,
    resolve: {
      feature: featureResolver,
    },
  },
  {
    path: 'editor/:featureId',
    loadComponent: () => import('./editor/editor.component'),
    title: featureNameResolver,
    resolve: {
      feature: featureResolver,
    },
  },
];
The routes call resolvers to resolve the route title and the feature input.
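The featureNameResolver is not listed in the post; below is a minimal sketch, assuming it derives the title from the same feature configuration used by the featureResolver shown next (the import path and fallback title are assumptions):

// feature-name.resolve.ts — a sketch of featureNameResolver.
import { inject } from '@angular/core';
import { ActivatedRouteSnapshot, ResolveFn } from '@angular/router';
import { FeatureService } from './services/feature.service'; // path assumed

export const featureNameResolver: ResolveFn<string> = (route: ActivatedRouteSnapshot) => {
  const featureService = inject(FeatureService);
  const featureId = route.paramMap.get('featureId') || '';
  // Fall back to a generic title when the feature id is unknown.
  return featureService.getFeatureDetails(featureId)?.name ?? 'Editor';
};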
// feature-name.resolve.ts
export type FeatureDetails = {
  buttonText: string;
  loadingText: string;
  description: string;
  customPrompt?: string;
  name: string;
  path: string;
  mode?: 'single' | 'multiple';
};

export const featureResolver: ResolveFn<FeatureDetails> = (route: ActivatedRouteSnapshot) => {
  const featureService = inject(FeatureService);
  const featureId = route.paramMap.get('featureId') || '';
  return featureService.getFeatureDetails(featureId);
};
The resolver uses the feature id to look up a feature from the feature configurations. A feature has button text, loading text, description, name, path, optional prompt, and optional image mode.
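The FeatureService lookup is also not shown; a minimal sketch, assuming the service imports features.json directly, just as the provider imports firebase-ai.json (the file path and import details are assumptions):

// services/feature.service.ts — a sketch; path assumed.
import { Injectable } from '@angular/core';
import { FeatureDetails } from '../feature-name.resolve'; // path assumed
import featureConfig from '../features.json';

@Injectable({ providedIn: 'root' })
export class FeatureService {
  // features.json maps a feature id to its details.
  private readonly features = featureConfig.features as Record<string, FeatureDetails>;

  getFeatureDetails(featureId: string): FeatureDetails {
    return this.features[featureId];
  }
}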
3. Editor Component
Use cases: Image Creation, Image Editing and Fuse Photos.
// editor.service.ts
import { Injectable, inject, signal } from '@angular/core';
import { FirebaseService } from '../../ai/services/firebase.service';

@Injectable({
  providedIn: 'root'
})
export class EditorService {
  private readonly firebaseService = inject(FirebaseService);
  readonly prompt = signal('');
  readonly error = signal('');
  readonly isLoading = signal(false);

  async handleGenerate(
    featureId: string,
    featureNeedsImage: boolean,
    imageFiles: File[]
  ): Promise<string> {
    const currentPrompt = this.prompt().trim();
    // Image creation expects no uploads; editing and fusing expect at least one.
    const canGenerateImage = !!currentPrompt && (featureNeedsImage ? imageFiles.length > 0 : imageFiles.length === 0);
    if (!canGenerateImage) {
      return ''; // Button should be disabled, but this is a safeguard.
    }

    this.isLoading.set(true);
    try {
      return await this.firebaseService.generateImage(currentPrompt, imageFiles);
    } catch (e: unknown) {
      console.error(e);
      if (e instanceof Error) {
        this.error.set(e.message);
      } else {
        this.error.set('An unexpected error occurred.');
      }
      return '';
    } finally {
      this.isLoading.set(false);
    }
  }
}
The EditorService injects the FirebaseService to generate a new image from other images and a prompt. The handleGenerate method validates that the prompt is not empty. When featureNeedsImage is false, the task is image creation and the imageFiles array must be empty. When featureNeedsImage is true, it is an editing task and the imageFiles array must contain at least one image.
// editor.component.ts
import { ChangeDetectionStrategy, Component, computed, inject, input, signal } from '@angular/core';
import { FeatureDetails } from '../feature-name.resolve'; // path assumed
import { EditorService } from './services/editor.service';

@Component({
  selector: 'app-editor',
  imports: [],
  templateUrl: './editor.component.html',
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export default class EditorComponent {
  featureId = input.required<string>();
  feature = input.required<FeatureDetails>();
  generatedImageUrl = signal('');
  imageFiles = signal<File[]>([]);
  // A feature with an image mode ("single" or "multiple") needs uploaded images.
  featureNeedsImage = computed(() => this.feature()?.mode !== undefined);

  private readonly editorService = inject(EditorService);

  async handleGenerate(): Promise<void> {
    const imageUrl = await this.editorService.handleGenerate(
      this.featureId(),
      this.featureNeedsImage(),
      this.imageFiles()
    );
    this.generatedImageUrl.set(imageUrl);
  }
}
The handleGenerate method calls the EditorService to generate an image and construct the inline base64 string. The component overwrites the value of the generatedImageUrl signal and notifies the ImageViewerComponent to render the image.
4. Predefined Prompt Component
Use cases: Colorization, 2D to 3D Model, 2D to 3D Maps
// predefined-prompt.service.ts
import { Injectable, inject, signal } from '@angular/core';
import { FirebaseService } from '../../ai/services/firebase.service';
@Injectable({
  providedIn: 'root'
})
export class PredefinedPromptService {
  private readonly firebaseService = inject(FirebaseService);
  readonly error = signal('');
  readonly isLoading = signal(false);

  async handleGenerate(prompt: string, imageFiles: File[]): Promise<string> {
    const currentPrompt = prompt.trim();
    // Editing requires a non-empty prompt and at least one uploaded image.
    const editImageCondition = !!currentPrompt && imageFiles.length > 0;
    if (!editImageCondition) {
      return ''; // Button should be disabled, but this is a safeguard.
    }

    this.isLoading.set(true);
    try {
      return await this.firebaseService.generateImage(currentPrompt, imageFiles);
    } catch (e: unknown) {
      console.error(e);
      if (e instanceof Error) {
        this.error.set(e.message);
      } else {
        this.error.set('An unexpected error occurred.');
      }
      return '';
    } finally {
      this.isLoading.set(false);
    }
  }
}
The PredefinedPromptService injects the FirebaseService to generate a new image from other images and a prompt. The handleGenerate method validates that the prompt is not empty and that the imageFiles array contains at least one image.
// predefined-prompt-editor.component.ts
import { ChangeDetectionStrategy, Component, computed, inject, input, signal } from '@angular/core';
import { FeatureDetails } from '../feature-name.resolve'; // path assumed
import { PredefinedPromptService } from './services/predefined-prompt.service';

@Component({
  selector: 'app-system-instruction-editor',
  imports: [],
  templateUrl: './predefined-prompt-editor.component.html',
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export default class PredefinedPromptComponent {
  featureId = input.required<string>();
  feature = input.required<FeatureDetails>();
  private readonly predefinedPromptService = inject(PredefinedPromptService);
  generatedImageUrl = signal('');
  imageFiles = signal<File[]>([]);
  // Fall back to an empty prompt when the feature does not define one.
  customPrompt = computed(() => this.feature().customPrompt || '');

  async handleGenerate(): Promise<void> {
    const imageUrl = await this.predefinedPromptService.handleGenerate(
      this.customPrompt(),
      this.imageFiles()
    );
    this.generatedImageUrl.set(imageUrl);
  }
}
The handleGenerate method calls the PredefinedPromptService to generate an image and construct the inline base64 string. The component overwrites the value of the generatedImageUrl signal and notifies the ImageViewerComponent to render the image.