Introduction
I've spent the last few months working with various AI frameworks, and Genkit has become my go-to for production applications. It's Google's open-source framework for building AI workflows, and it solves a lot of the headaches I used to deal with when wiring up LLMs to real applications.
In this post, I'll walk through why I think it's worth your time, and then we'll build something practical together.
The Problem Genkit Solves
If you've tried building AI features into production apps, you've probably run into the same issues I have:
Prompt spaghetti. Your prompts start as simple strings, then grow into multi-line templates with context injection, then suddenly you've got prompts scattered across services with no clear ownership.
Debugging black holes. Something went wrong in your AI pipeline. Was it the prompt? The model? The context you passed in? Good luck figuring it out without proper observability.
The integration mess. You need to call an external API, pass that data to an LLM, maybe do some post-processing, then return a structured response. This gets ugly fast without a clean abstraction.
Genkit addresses these by giving you a flow primitive—think of it as a function that can contain multiple steps (API calls, LLM invocations, transformations) with built-in logging and a dev UI for inspection.
What We're Building
We'll create a city analyzer that:
- Takes a city name from the frontend
- Fetches real weather data from an external API
- Passes that to Gemini for analysis
- Returns both raw data and AI-generated insights
Nothing revolutionary, but it demonstrates the pattern you'll use for most real-world AI features.
Setting Up the Backend
First, let's get the Genkit server running.
mkdir genkit-backend && cd genkit-backend
npm init -y
npm install genkit @genkit-ai/googleai express cors
npm install -D typescript ts-node @types/node @types/express
Here's the flow implementation. You'll need a Gemini API key exported as GEMINI_API_KEY and an API Ninjas key as WEATHER_API_KEY. I'm using the API Ninjas weather endpoint, but swap in whatever data source makes sense for your use case:
// src/flows/analyzeCity.ts
import { genkit, z } from 'genkit';
import { googleAI } from '@genkit-ai/googleai';

const ai = genkit({
  plugins: [googleAI()],
});
interface WeatherData {
  temp: number;
  humidity: number;
  wind_speed: number;
}

async function fetchWeather(city: string): Promise<WeatherData> {
  const response = await fetch(
    `https://api.api-ninjas.com/v1/weather?city=${encodeURIComponent(city)}`,
    {
      headers: { 'X-Api-Key': process.env.WEATHER_API_KEY! },
    }
  );
  if (!response.ok) {
    throw new Error(`Weather API failed: ${response.status}`);
  }
  return response.json();
}
export const analyzeCityFlow = ai.defineFlow(
  {
    name: 'analyzeCityFlow',
    inputSchema: z.object({ city: z.string() }),
  },
  async ({ city }) => {
    // Step 1: Get external data
    const weather = await fetchWeather(city);

    // Step 2: Generate AI analysis
    const result = await ai.generate({
      model: 'googleai/gemini-2.0-flash',
      prompt: `Given the current weather in ${city}:
- Temperature: ${weather.temp}°C
- Humidity: ${weather.humidity}%
- Wind Speed: ${weather.wind_speed} km/h
Write 2-3 sentences describing what it's like there right now and any practical suggestions for someone visiting today.`,
    });

    return {
      city,
      weather,
      analysis: result.text,
    };
  }
);
And the Express server to expose it:
// src/server.ts
import express from 'express';
import cors from 'cors';
import { analyzeCityFlow } from './flows/analyzeCity';

const app = express();
app.use(cors());
app.use(express.json());

app.post('/api/analyze-city', async (req, res) => {
  try {
    const result = await analyzeCityFlow(req.body);
    res.json(result);
  } catch (error) {
    console.error('Flow failed:', error);
    res.status(500).json({ error: 'Analysis failed' });
  }
});

const PORT = process.env.PORT || 4000;
app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
You can test flows in isolation through the Genkit developer UI: install the CLI with npm i -g genkit-cli, then run genkit start -- npx ts-node src/server.ts. It spins up a local UI where you can invoke flows with test inputs and inspect each step. This alone has saved me hours of debugging.
The React Frontend
Nothing fancy here—just a Vite + TypeScript setup:
npm create vite@latest genkit-react -- --template react-ts
cd genkit-react
npm install
// src/App.tsx
import { useState } from 'react';

interface AnalysisResult {
  city: string;
  weather: {
    temp: number;
    humidity: number;
    wind_speed: number;
  };
  analysis: string;
}

function App() {
  const [city, setCity] = useState('');
  const [result, setResult] = useState<AnalysisResult | null>(null);
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);

  const handleAnalyze = async () => {
    if (!city.trim()) return;
    setLoading(true);
    setError(null);
    try {
      const response = await fetch('http://localhost:4000/api/analyze-city', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ city }),
      });
      if (!response.ok) throw new Error('Request failed');
      setResult(await response.json());
    } catch (err) {
      setError('Failed to analyze city. Check if the backend is running.');
    } finally {
      setLoading(false);
    }
  };

  return (
    <main style={{ maxWidth: 600, margin: '0 auto', padding: 40 }}>
      <h1>City Weather Analyzer</h1>
      <div style={{ display: 'flex', gap: 8, marginBottom: 24 }}>
        <input
          type="text"
          value={city}
          onChange={(e) => setCity(e.target.value)}
          placeholder="Enter a city name"
          onKeyDown={(e) => e.key === 'Enter' && handleAnalyze()}
          style={{ flex: 1, padding: 8 }}
        />
        <button onClick={handleAnalyze} disabled={loading}>
          {loading ? 'Analyzing...' : 'Analyze'}
        </button>
      </div>
      {error && <p style={{ color: 'red' }}>{error}</p>}
      {result && (
        <div>
          <h2>{result.city}</h2>
          <p>{result.analysis}</p>
          <details>
            <summary>Raw weather data</summary>
            <pre>{JSON.stringify(result.weather, null, 2)}</pre>
          </details>
        </div>
      )}
    </main>
  );
}

export default App;
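One refinement worth considering (my addition, not part of the walkthrough above): if the user clicks Analyze twice quickly, a slow first response can overwrite the newer result. A small helper that aborts the previous in-flight request avoids this:

```typescript
// Keeps only the latest request alive: starting a new one aborts the previous.
// The caller passes a task that receives (and should respect) an AbortSignal.
function makeLatestOnly() {
  let controller: AbortController | null = null;
  return function run<T>(task: (signal: AbortSignal) => Promise<T>): Promise<T> {
    controller?.abort(); // cancel any request still in flight
    controller = new AbortController();
    return task(controller.signal);
  };
}

// In handleAnalyze, pass the signal through to fetch so the browser
// actually cancels the HTTP request:
//   const runLatest = makeLatestOnly();
//   const response = await runLatest((signal) =>
//     fetch('http://localhost:4000/api/analyze-city', { method: 'POST', signal /* ... */ })
//   );
```

Because fetch rejects with an AbortError when its signal fires, the stale request lands in the existing catch block instead of calling setResult.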
Why This Architecture Works
A few things I like about this setup:
Separation of concerns. The React app doesn't know anything about LLMs or external APIs. It just calls an endpoint and renders the response. If you swap Gemini for GPT-4 or change your data source, the frontend doesn't care.
Testability. Flows are just async functions. You can unit test them, mock the external API calls, and verify the prompt logic independently of the UI.
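As an illustration (my own refactor, not something Genkit requires), you can pull the prompt template into a pure function and test it without touching the model or the network:

```typescript
interface WeatherData {
  temp: number;
  humidity: number;
  wind_speed: number;
}

// Pure function: the flow calls this, and unit tests can assert on the
// exact prompt text without mocking the LLM.
function buildWeatherPrompt(city: string, weather: WeatherData): string {
  return [
    `Given the current weather in ${city}:`,
    `- Temperature: ${weather.temp}°C`,
    `- Humidity: ${weather.humidity}%`,
    `- Wind Speed: ${weather.wind_speed} km/h`,
    `Write 2-3 sentences describing what it's like there right now and any practical suggestions for someone visiting today.`,
  ].join('\n');
}
```

The flow body then becomes prompt: buildWeatherPrompt(city, weather), and a test can verify that every data point actually made it into the prompt.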
Observability out of the box. The Genkit dev UI shows you exactly what went into each step and what came out. When something breaks in production, you'll actually be able to figure out why.
Incremental complexity. Start with a single-step flow, then add steps as needed. Need to cache the weather data? Add a step. Need to run the analysis through a moderation filter? Add a step. The architecture scales with your requirements.
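The caching step, for example, can start as small as an in-memory TTL cache. This is a sketch under the assumption of a single server process and weather data that's acceptable a few minutes stale; swap in Redis or similar once you scale out:

```typescript
// Minimal TTL cache: returns a cached value if it's still fresh,
// otherwise calls the fetcher and stores the result.
function makeTtlCache<T>(ttlMs: number) {
  const store = new Map<string, { value: T; expires: number }>();
  return {
    async getOrFetch(key: string, fetcher: () => Promise<T>): Promise<T> {
      const hit = store.get(key);
      if (hit && hit.expires > Date.now()) return hit.value; // fresh hit
      const value = await fetcher();
      store.set(key, { value, expires: Date.now() + ttlMs });
      return value;
    },
  };
}

// In the flow:
//   const weatherCache = makeTtlCache<WeatherData>(10 * 60 * 1000);
//   const weather = await weatherCache.getOrFetch(city, () => fetchWeather(city));
```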
Where to Go From Here
This example barely scratches the surface. In production, you'd want to add:
- Input validation and rate limiting
- Caching for external API calls
- Streaming responses for better UX
- Authentication
- Proper error handling with retries
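The retry bullet, for instance, can be a small wrapper around any step. A sketch, assuming the wrapped operation is idempotent (safe to repeat):

```typescript
// Retries an async operation with exponential backoff: 200ms, 400ms, 800ms...
// Rethrows the last error once attempts are exhausted.
async function withRetry<T>(
  fn: () => Promise<T>,
  { attempts = 3, baseDelayMs = 200 }: { attempts?: number; baseDelayMs?: number } = {}
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}

// In the flow: const weather = await withRetry(() => fetchWeather(city));
```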
Genkit also supports retrieval-augmented generation (RAG), evaluations, and deployment to Firebase or Cloud Run. The official docs cover these in detail.
Conclusion
If you're building AI features into production apps and tired of ad-hoc solutions, give Genkit a serious look. The learning curve is minimal if you're already comfortable with TypeScript, and the structure it provides pays dividends as your AI logic grows more complex. If you like this blog and want to learn more about Frontend Development and Software Engineering, you can follow me on Dev.