Everyone is talking about Gemini 2.0 Flash (affectionately known as the "Nano Banana" model). It’s fast, it’s multimodal, and it finally gives other image generation models a run for their money.
But if you are building an AI agent, a Discord bot, or a simple web app, you hit a common wall immediately: Storage.
The Gemini API returns the image as raw data (bytes). You can't drop raw bytes into a Slack channel or an <img> tag effortlessly. You need a URL.
Usually, this means:
- Setting up an AWS S3 bucket.
- Configuring IAM roles and policies.
- Fighting with CORS headers.
- Writing 50 lines of boto3 boilerplate.
Today, we’re going to skip all of that. We’re going to use Lab Nocturne Images, a stupid-simple image hosting API that requires zero config, to build a text-to-image generator that returns a shareable link in less than 50 lines of Python.
The Stack
- Engine: Google Gemini 2.0 Flash
- Storage: Lab Nocturne Images
- Language: Python
Step 1: Get your Storage Key (1 second)
We aren't going to sign up for an account or add a credit card. Lab Nocturne has a "curl-to-start" feature.
Open your terminal:
curl https://images.labnocturne.com/key
You’ll get a JSON response with your API key. Copy it. That’s it. You now have storage.
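Optional: if you’d rather not paste the key into your script (the full example below hardcodes it for clarity), export it in your shell and read it from the environment at runtime. A minimal sketch, assuming the variable name is whatever you choose:
import os
# Assumes you ran something like `export LAB_NOCTURNE_KEY="..."` beforehand
LAB_NOCTURNE_KEY = os.environ["LAB_NOCTURNE_KEY"]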
Step 2: The Python Script
We need to generate the image, catch the bytes in memory (no saving to disk required), and pipe them straight to the hosting API.
pip install google-generativeai requests
Here is the full code:
import requests
import google.generativeai as genai

# CONFIGURATION
GOOGLE_API_KEY = "YOUR_GEMINI_API_KEY"
LAB_NOCTURNE_KEY = "YOUR_LN_TEST_KEY_FROM_STEP_1"

# Setup Gemini
genai.configure(api_key=GOOGLE_API_KEY)
model = genai.GenerativeModel('gemini-2.0-flash-exp')
def host_image(image_bytes, mime_type="image/png"):
    """
    Uploads raw bytes to Lab Nocturne and returns a public URL.
    No AWS config, no buckets, just a POST request.
    """
    url = "https://images.labnocturne.com/upload"
    headers = {"Authorization": f"Bearer {LAB_NOCTURNE_KEY}"}

    # Send the bytes directly as a multipart file upload, named after the mime type
    ext = mime_type.split("/")[-1]
    files = {"file": (f"generated.{ext}", image_bytes, mime_type)}

    response = requests.post(url, headers=headers, files=files)
    if response.status_code == 200:
        return response.json()["url"]
    raise Exception(f"Upload failed: {response.text}")
def generate_and_share(prompt):
    print(f"🎨 Generating: '{prompt}'...")

    # Generate the image
    # Note: ensure your prompt (and model access) actually triggers image output
    response = model.generate_content(prompt)

    # Gemini returns a list of parts; the image (if any) arrives as an
    # inline_data blob, possibly alongside a text part, so scan for it
    # instead of assuming parts[0] is the image
    for part in response.parts:
        if part.inline_data.data:
            img_bytes = part.inline_data.data
            mime_type = part.inline_data.mime_type or "image/png"
            print("☁️ Uploading to CDN...")
            return host_image(img_bytes, mime_type)

    return "Failed to generate image."
# --- RUN IT ---
if __name__ == "__main__":
    prompt = "A cyberpunk banana wearing sunglasses, 3d render, 4k"
    link = generate_and_share(prompt)
    print("-" * 30)
    print(f"🚀 Image Live at: {link}")
    print("-" * 30)
Why this matters for AI Agents
If you are building an autonomous agent (using LangChain, CrewAI, or AutoGen), your agent needs "tools."
Giving your agent S3 access is risky and complex. Giving your agent a Lab Nocturne key is safe and trivial. You can drop the host_image function above into any LLM tool definition, and suddenly your text-based bot creates persistent visual artifacts.
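Here is a minimal sketch of what that looks like with LangChain's @tool decorator. It assumes generate_and_share from the script above is importable; the tool name create_image and its description are purely illustrative:
from langchain_core.tools import tool

@tool
def create_image(prompt: str) -> str:
    """Generate an image from a text prompt and return a public, shareable URL."""
    return generate_and_share(prompt)
The agent only ever sees a string URL come back, which every framework knows how to pass along; no credentials or byte-wrangling leak into the tool layer.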
The Limits
The command-line key we generated in Step 1 is a Test Key.
- Images persist for 7 days (perfect for testing/prototyping).
- 100MB limit.
If you are shipping to production, you might want to grab a permanent key, but for hackathons, side projects, and "Nano Banana" experiments, this is the fastest way to get from Prompt -> URL.
Happy Shipping! 🚢