
Dylan HUANG

Originally published at nanobanana2.com

How People Actually Use AI Image Generation: Data from 4,900+ Users


Based on real usage data from NanoBanana2 (nanobanana2.com), an AI image platform with users across 54 countries.

The Surprising Truth: Editing Beats Creating

The AI image generation space is dominated by "text-to-image" marketing. But our data tells a different story.

75% of all generations are edits, not creations.

Users don't come to generate images from scratch — they come to modify existing photos. They upload a product shot and change the background. They take a portrait and adjust the lighting. They grab a screenshot and restyle it.

Usage Breakdown (Last 7 Days, 2,100+ Generations)

| Mode | Share | What Users Do |
| --- | --- | --- |
| Edit Mode | 75.2% | Upload a photo → describe changes in natural language |
| Standard (Text-to-Image) | 21.4% | Generate from a text prompt |
| Video Generation | 1.9% | Text/image to video |
| Other | 1.5% | Upscaling, enhancement, background removal |

This share has climbed steadily since Edit Mode launched: 58% in early March, 75% now.
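As a rough sketch, the mode-share numbers above can be tallied from a generation log like this. The `generations` records and the `mode` field are illustrative, not NanoBanana2's actual schema:

```python
from collections import Counter

# Hypothetical generation log entries; field names are illustrative,
# not NanoBanana2's actual schema.
generations = [
    {"mode": "edit"}, {"mode": "edit"}, {"mode": "edit"},
    {"mode": "text_to_image"}, {"mode": "video"},
]

# Count generations per mode, then convert to percentage shares
counts = Counter(g["mode"] for g in generations)
total = sum(counts.values())
shares = {mode: round(100 * n / total, 1) for mode, n in counts.items()}
print(shares)  # e.g. {'edit': 60.0, 'text_to_image': 20.0, 'video': 20.0}
```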

6 Real Use Cases We've Observed

1. E-Commerce Product Photography (Most Common Paid Use Case)

Online sellers upload product photos and use AI to:

  • Place products in lifestyle settings without a photoshoot
  • Generate multiple background variations for A/B testing
  • Create seasonal versions (holiday themes, summer vibes)
  • Remove backgrounds and add professional studio lighting

Why it matters: A professional product photoshoot costs $500-2,000. AI editing costs under $1 per image.
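The break-even here is easy to see: for the price of one traditional shoot, a seller can produce hundreds of AI edits. A quick sketch, using the figures from this article (the per-image cost is a rough assumption):

```python
# Break-even: traditional product photoshoot vs per-image AI editing.
# Cost figures come from the article above; treat them as rough estimates.
shoot_cost_low, shoot_cost_high = 500, 2000  # USD per photoshoot
ai_cost_per_image = 1.0                      # USD per AI-edited image

# How many AI-edited images fit in one photoshoot budget
images_low = shoot_cost_low / ai_cost_per_image
images_high = shoot_cost_high / ai_cost_per_image
print(f"{images_low:.0f}-{images_high:.0f} AI edits per photoshoot budget")
```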

Countries with highest e-commerce usage: Singapore, Japan, United States

2. Architecture & Interior Visualization

Architects and designers upload building photos or renderings and ask AI to:

  • Show the same building in different seasons (spring cherry blossoms, winter snow)
  • Visualize different lighting conditions (golden hour, overcast, nighttime)
  • Add landscaping, people, or vehicles for context
  • Test material changes (brick → glass, wood → concrete)

Why it's uniquely suited to AI editing: Traditional 3D rendering takes hours per variation. AI editing takes seconds. The ability to reference real-world conditions through web search grounding makes the results more accurate.

3. Academic & Research Visualization

Researchers and students use the platform to:

  • Beautify data charts and figures for papers and presentations
  • Generate conceptual diagrams from text descriptions
  • Create poster-quality scientific illustrations
  • Convert rough sketches into polished graphics

Key insight: This is a highly price-sensitive segment. The free tier (5 credits) is often sufficient for a single paper's needs, but researchers who publish frequently become subscribers.

4. Children's Book & Illustration

Illustrators and parents use AI to:

  • Generate character concepts in consistent styles
  • Create full-page illustrations from text descriptions
  • Edit existing illustrations to adjust poses, expressions, or backgrounds
  • Produce multiple variations to choose from

Trend: Children's book illustration is one of the fastest-growing use cases, driven by self-publishing platforms making it easier for anyone to create books.

5. Social Media Content Creation

Content creators and social media managers use the platform for:

  • Quick visual assets for posts (no designer needed)
  • Trending topic imagery (web search grounding ensures relevance)
  • Before/after transformations for engagement
  • Brand-consistent visuals across platforms

Web Search Grounding advantage: When a content creator asks for "Oscar 2026 poster style" or "cherry blossom forecast Japan," the AI references actual current information — a capability unique to platforms using Google's Gemini with grounding.

6. Portrait & Personal Photo Enhancement

Individual users edit personal photos to:

  • Change backgrounds (vacation photos, professional headshots)
  • Adjust lighting and mood
  • Remove unwanted objects or people
  • Apply artistic styles while preserving likeness

This is the highest volume but lowest monetization use case — most personal users stay on the free tier. However, it drives organic growth through sharing.

Geographic Distribution

Our 4,900+ users span 54 countries. The distribution reflects the platform's 11-language support:

| Region | Share | Top Countries |
| --- | --- | --- |
| North America | 16% | US (14.5%), Canada (1.5%) |
| South & Southeast Asia | 16% | India (6.5%), Indonesia (3.1%), Singapore (2.5%) |
| Europe | 25% | Russia (5.5%), Netherlands (4.8%), Germany (4.4%), UK (2.4%), France (2.1%) |
| East Asia | 10% | Japan (3.5%), Hong Kong (3.0%), South Korea |
| Middle East & Africa | 8% | Egypt (2.6%), Iran (2.6%) |
| South America | 3% | Brazil (1.6%) |

Notable: The Netherlands ranks #4 globally despite its small size, driven by the country's strong e-commerce and design industries.
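Note that summing the listed per-country shares gives only a lower bound on each regional total, since the table names just the top countries per region. A small sketch of that rollup, with figures copied from the table:

```python
# Regional rollup of the per-country shares from the table above.
# Only the listed top countries are included, so regional sums here are
# lower bounds, not the full regional shares.
country_share = {
    "US": 14.5, "Canada": 1.5,
    "India": 6.5, "Indonesia": 3.1, "Singapore": 2.5,
    "Russia": 5.5, "Netherlands": 4.8, "Germany": 4.4, "UK": 2.4, "France": 2.1,
}
regions = {
    "North America": ["US", "Canada"],
    "South & Southeast Asia": ["India", "Indonesia", "Singapore"],
    "Europe": ["Russia", "Netherlands", "Germany", "UK", "France"],
}
rollup = {r: round(sum(country_share[c] for c in cs), 1)
          for r, cs in regions.items()}
print(rollup)
# North America matches the table's 16%; the Asian and European sums fall
# short of their regional totals because smaller countries are omitted.
```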

What This Means for the AI Image Market

  1. "Text-to-image" is not the killer use case — editing existing photos is. The industry's marketing doesn't match user behavior.

  2. Web search grounding is a genuine differentiator for time-sensitive content creation. No amount of training data can replace knowing what's happening today.

  3. The market is global from day one. Language support isn't a nice-to-have — it's the difference between serving 14% of potential users (English only) and 100%.

  4. Price sensitivity varies dramatically by use case. E-commerce users gladly pay $29.90/month because the ROI is obvious. Personal photo editors rarely convert. Academic users fall somewhere in between.


Data source: NanoBanana2 (nanobanana2.com) internal analytics, March 2026. All data anonymized.
