AI is impressive, but it is also expensive. In today’s landscape, developers need smarter ways to add AI-powered features without burning through budgets. Fortunately, the Chrome team has been working on a set of on-device AI APIs that let you run task-specific models locally, at zero inference cost.
Some of the available APIs include:
- Language Detector API: Detects the language of a given text
- Translation API: Translates text from one language to another
- Prompt API: Sends free-form prompts to the model, with optional structured output
- Summarization API: Condenses long text into concise summaries
In this post, I’ll show how you can take advantage of these APIs to add genuinely useful features without paying for tokens.
One thing I find particularly interesting is how Google Search now often gives you an AI-generated summary or even a direct answer. Sometimes you just want the takeaway, not the entire article. That idea is exactly what we’re going to replicate here.
The goal is simple: build a small piece of logic that generates a TL;DR (too long; didn’t read) section in bullet points for a blog post, so readers can instantly understand what it’s about.
Getting Started
Make sure you are running the latest version of Chrome. Older versions may not include the most recent on-device models.
Next, enable the following flags and restart Chrome:
chrome://flags/#optimization-guide-on-device-model
chrome://flags/#prompt-api-for-gemini-nano-multimodal-input
Once that’s done, install the models by running the following in the DevTools console:
const session = await LanguageModel.create({
monitor(m) {
m.addEventListener('downloadprogress', (e) => {
console.log(`Downloaded ${e.loaded * 100}%`);
});
},
});
const summarizer = await Summarizer.create({
monitor(m) {
m.addEventListener('downloadprogress', (e) => {
console.log(`Downloaded ${e.loaded * 100}%`);
});
}
});
When the download reaches 100%, you are ready to use the APIs.
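Before creating a session, it’s worth checking that the API actually exists in the current browser and that the model is ready. The helper below is a hypothetical sketch: it assumes the API object exposes an `availability()` method reporting states such as 'available', 'downloadable', or 'unavailable', which matches the shape of Chrome’s built-in AI APIs.

```typescript
// Minimal shape of an on-device API with an availability check (assumption).
type OnDeviceApi = { availability(): Promise<string> };

// Returns true only when the API is exposed AND its model is ready to use.
async function isReady(api: OnDeviceApi | undefined): Promise<boolean> {
  if (!api) return false; // API not exposed in this browser/version
  const state = await api.availability();
  return state === 'available';
}

// Usage in the browser (cast keeps this compiling outside Chrome):
// const ok = await isReady((globalThis as any).Summarizer);
```

If the state is 'downloadable', creating a session (as above) triggers the download, which is why the `monitor` callback is useful.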
Implementation
We’ll use Angular for the implementation. If you don’t already have it set up:
- Install the Angular CLI:
npm install -g @angular/cli
- Navigate to your desired folder and run:
ng new <project-name>
To speed things up, we’ll use Google’s new IDE, Antigravity, which relies on agents to generate code. The following prompt describes the UI we want to build:
You are an expert Frontend Developer. Your task is to build a premium, visually stunning blog post page for an Angular application. The project is already initialized.
### Goal
Create a standalone component named `BlogPost` that serves as a static blog post page. The design should be modern, "dark mode" by default, and evoke a high-tech, futuristic feel suitable for the topic of "Agentic AI".
### Structure & Content Constraints
The page must contain the following specific elements, stacked vertically:
1. **Header Section**:
* **Tags**: A row of small pill-shaped tags: "Artificial Intelligence", "Future", "Tech".
* **Title**: Large, impactful typography: "The Rise of Agentic AI: A New Era of Coding".
* **Subtitle**: A lighter sub-heading: "How autonomous agents are transforming the software development landscape".
2. **TL;DR Section**:
* Placed prominently below the header but before the main content.
* This section must clearly stand out from the rest of the text (e.g., using a border, different background tint, or accent color).
* **Heading**: "TL;DR".
* **Content**: A bulleted list summarizing the article (e.g., AI moving from autocomplete to autonomy, changing developer roles to architects).
3. **Main Content**:
* Several paragraphs of text discussing "Agentic AI".
* Explain how it differs from traditional coding assistants.
* Discuss the shift from "writing code" to "guiding agents".
* Use highly readable typography with good line height and contrast.
### Design & Aesthetics (Crucial)
* **Theme**: Dark mode. Background should be very dark (nearly black), text should be light grey/white.
* **Typography**: Use a clean sans-serif font like 'Inter'.
* **Color Palette**: Use neon/electric accents to pop against the dark background.
* *Primary Accent*: Electric Teal or Cyan (for tags/highlights).
* *Secondary Accent*: Electric Purple (for the TL;DR section or links).
* **Visual Style**:
* The blog post container should look like a "card" floating in the center of the screen with a subtle shadow and rounded corners.
* Use subtle gradients for the text title if possible.
* Ensure the design is fully responsive (looks good on mobile).
### Technical Requirements
* Use Angular Standalone Components.
* Hardcode all the text content directly in the template or component class for now.
* Do NOT implement any actual AI calls or backend services; this is purely a UI implementation task.
At this point, the focus is purely on UI. No AI calls yet.
Implementing the APIs
Create a new service to hold all the AI-related logic.
Start with a function that creates a session for the Summarization API:
private async createSummarizerSession() {
return await Summarizer.create({
type: 'key-points',
length: 'short',
expectedInputLanguages: ['en'],
outputLanguage: 'en',
expectedContextLanguages: ['en'],
sharedContext: 'About AI and Agentic AI'
});
}
Most parameters are self-explanatory. The most important one is `type`, which directly controls how the summary is generated.
If you want to explore all available options, check the official documentation:
https://developer.chrome.com/docs/ai/summarizer-api
Next, create a session for the Prompt API. We’ll use it to format the summarizer output into a clean structure:
private async createLanguageModelSession() {
return await LanguageModel.create({
initialPrompts: [
{ role: 'system', content: 'Convert these bullet points into a clean, structured list of items.' }
],
});
}
Finally, combine both sessions. The summarizer extracts the key points, and the Prompt API enforces a structured output using a schema.
async generateTlDr(content: string): Promise<string[]> {
const session = await this.createSummarizerSession();
const summary = await session.summarize(content, {
context: 'This article is intended for a tech-savvy audience.',
});
session.destroy();
const lmSession = await this.createLanguageModelSession();
const result = await lmSession.prompt(summary, { responseConstraint: schema });
lmSession.destroy();
const parsed = JSON.parse(result);
return parsed?.items || [];
}
Here is the full service code:
import { Injectable } from '@angular/core';
declare const Summarizer: any;
declare const LanguageModel: any;
const schema = {
type: 'object',
required: ['type', 'items'],
properties: {
type: {
type: 'string',
enum: ['bullet_list'],
description: 'Identifies the content as a bullet list'
},
items: {
type: 'array',
minItems: 1,
items: {
type: 'string',
minLength: 1
},
description: 'Each entry is one bullet item, without bullet symbols'
}
},
additionalProperties: false
};
@Injectable({
providedIn: 'root'
})
export class AiService {
async generateTlDr(content: string): Promise<string[]> {
const session = await this.createSummarizerSession();
const summary = await session.summarize(content, {
context: 'This article is intended for a tech-savvy audience.',
});
session.destroy();
const lmSession = await this.createLanguageModelSession();
const result = await lmSession.prompt(summary, { responseConstraint: schema });
lmSession.destroy();
const parsed = JSON.parse(result);
return parsed?.items || [];
}
private async createSummarizerSession() {
return await Summarizer.create({
type: 'key-points',
length: 'short',
expectedInputLanguages: ['en'],
outputLanguage: 'en',
expectedContextLanguages: ['en'],
sharedContext: 'About AI and Agentic AI'
});
}
private async createLanguageModelSession() {
return await LanguageModel.create({
initialPrompts: [
{ role: 'system', content: 'Convert these bullet points into a clean, structured list of items.' }
],
});
}
}
At this stage, most of the work is done.
Wiring It into the Component
Call the service from your component:
private aiService = inject(AiService);
public readonly postContent = content; // the hardcoded blog post text
public tltrContent = signal<string[]>([]);
async ngOnInit() {
const tldr = await this.aiService.generateTlDr(this.postContent);
this.tltrContent.set(tldr);
}
Then update the template to render the TL;DR section:
<section class="tltr">
<h3>TL;DR</h3>
<div class="whitespace-pre-line">
@if (tltrContent().length > 0) {
<ul>
@for (item of tltrContent(); track $index) {
<li>{{ item }}</li>
}
</ul>
} @else {
<p>Loading...</p>
}
</div>
</section>
The TL;DR shown below is real and generated entirely on your machine. No tokens. No cost.
Support and Stability
Some of these APIs are still experimental, while others are already stable. In this example, the Summarization API does most of the heavy lifting and is already stable. The Prompt API is still experimental, but replacing it with regular code later should be straightforward.
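That replacement can be just a few lines of string handling. With `type: 'key-points'`, the Summarizer returns Markdown-style bullets, so a sketch like the one below (assuming the common `* item` / `- item` bullet format) could stand in for the Prompt API entirely:

```typescript
// Converts the Summarizer's Markdown-style bullet output into a plain
// string array, with no language model involved.
function bulletsToItems(summary: string): string[] {
  return summary
    .split('\n')
    .map(line => line.trim())
    .map(line => line.replace(/^[*\-•]\s*/, '')) // strip leading bullet symbols
    .filter(line => line.length > 0);            // drop empty lines
}
```
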
You can find the full example here:
https://github.com/marianocodes/ai-local-gemini-nano-and-summarize
If you found this useful, share it and give it a like ❤️.



