DALL-E Is Dead: OpenAI Retires Its Image Models on May 12 — Here's What Replaces Them
On May 12, 2026, OpenAI will pull the plug on DALL-E. Both DALL-E 2 and DALL-E 3 — the image generation models that introduced millions of people to AI-generated art — will stop responding to API calls. The endpoints will return errors. The models will go dark.
This isn't a surprise. OpenAI has been signaling this move for months. ChatGPT users were automatically transitioned from DALL-E 3 to GPT Image 1.5 back in December 2025. The API deprecation notice went out in early 2026. But the actual shutdown date — May 12 — makes it real in a way that deprecation notices don't.
What makes this moment significant isn't just the retirement of a popular product. It's the pattern it represents. In March 2026, OpenAI shut down Sora, its text-to-video model. Now DALL-E follows. Two of OpenAI's most recognizable creative AI tools, gone within two months of each other.
The replacements tell a story about where AI image generation is heading. Instead of standalone, single-purpose models, OpenAI is betting on image generation built directly into its large language models. GPT Image 1.5 is already live. GPT-Image-2 is imminent. The architecture has fundamentally shifted.
This article covers everything you need to know: the full timeline of DALL-E's life and death, what exactly is being retired, what replaces it, how the replacements compare, and what developers and businesses need to do before May 12.
The Timeline: DALL-E's Journey from Breakthrough to Retirement
DALL-E had one of the most compressed product lifecycles in AI history. From first research paper to full retirement in just over five years.
January 2021: DALL-E (Original)
OpenAI published a research blog post introducing DALL-E, a 12-billion parameter version of GPT-3 trained to generate images from text descriptions. It was a research preview, not a product. No public access. But the concept — type a sentence, get an image — captured the imagination of the entire tech world. The name, a portmanteau of Salvador Dalí and WALL-E, became instantly iconic.
The original DALL-E could generate images from prompts like "an armchair in the shape of an avocado" or "a professional high-quality illustration of a baby daikon radish in a tutu walking a dog." The results were rough by today's standards, but in 2021 they felt like science fiction.
April 2022: DALL-E 2
DALL-E 2 was the version that changed everything. OpenAI released it with a waitlist system that generated massive demand. The model used a diffusion-based architecture (a significant departure from the original's discrete VAE approach) and produced dramatically higher-quality images at higher resolutions.
DALL-E 2 introduced key features: inpainting (editing specific parts of an image), outpainting (extending images beyond their original borders), and variations (generating similar images based on an uploaded reference). It went from research curiosity to mainstream product. Artists, designers, marketers, and hobbyists flooded the platform.
The API launched later in 2022, enabling developers to build DALL-E 2 into their own applications. This was the beginning of DALL-E as infrastructure — not just a consumer toy, but a building block for other products.
October 2023: DALL-E 3
DALL-E 3 was integrated directly into ChatGPT, a move that foreshadowed the direction OpenAI would ultimately take. Instead of requiring users to visit a separate interface, DALL-E 3 could generate images mid-conversation. Ask ChatGPT to explain a concept, then ask it to illustrate that concept — all in the same thread.
The model quality jumped significantly. DALL-E 3 was far better at following complex prompts, rendering text within images (still imperfect, but dramatically improved), and producing coherent compositions with multiple subjects. It also launched with a built-in safety system developed with ChatGPT's moderation layer.
Critically, DALL-E 3 was also made available through the API, maintaining backward compatibility while offering a substantially more capable model.
2025: GPT-4o Image Generation and the Beginning of the End
The writing was on the wall when OpenAI introduced native image generation capabilities within GPT-4o. Rather than calling a separate DALL-E model, GPT-4o could generate images as part of its own multimodal output. This wasn't a wrapper around DALL-E — it was a fundamentally different architecture where image generation was a native capability of the language model itself.
The quality was competitive with DALL-E 3, and the user experience was superior. No mode-switching, no separate model invocation. Just a conversation that could produce text, code, and images fluidly.
December 2025: GPT Image 1.5 Replaces DALL-E 3 in ChatGPT
In December 2025, OpenAI quietly replaced DALL-E 3 with GPT Image 1.5 as the default image generation model in ChatGPT. Users who had been using DALL-E 3 through ChatGPT were automatically migrated. For most casual users, the transition was seamless — they simply noticed that image generation got faster and more responsive to conversational context.
This was the clearest signal that DALL-E's days were numbered. OpenAI had already moved its flagship consumer product off the model.
Early 2026: Deprecation Announcement
OpenAI formally announced that both the DALL-E 2 and DALL-E 3 APIs would be retired, with May 12, 2026 as the shutdown date. The announcement gave API users roughly four months to migrate their integrations to the new GPT Image endpoints.
March 2026: Sora Shuts Down
Two months before DALL-E's shutdown date, OpenAI retired Sora, its text-to-video generation model. The official reasoning cited refocusing resources, but the pattern was clear: OpenAI was pulling back from standalone creative AI tools in favor of integrated capabilities within its core LLM products.
May 12, 2026: DALL-E Goes Dark
The endpoint stops responding. Five years and four months after the original DALL-E blog post, the product line is fully retired.
What Exactly Is Being Retired on May 12
Let's be specific about what stops working and what doesn't.
What Shuts Down
- DALL-E 2 API — The `dall-e-2` model endpoint stops accepting requests. Any application calling `POST /v1/images/generations` with `"model": "dall-e-2"` will receive an error response.
- DALL-E 3 API — The `dall-e-3` model endpoint stops accepting requests. The same applies: any API call specifying DALL-E 3 as the model will fail.
- DALL-E image editing endpoints — The `/v1/images/edits` endpoint (inpainting), which relied on DALL-E 2, will no longer function.
- DALL-E variations endpoint — The `/v1/images/variations` endpoint is also being retired.
- Azure OpenAI DALL-E deployments — Azure customers who deployed DALL-E 2 or DALL-E 3 through Azure OpenAI Service will also be affected. Microsoft has issued its own migration guidance aligned with the May 12 date.
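To make the blast radius concrete, a minimal pre-flight check like the sketch below can flag outgoing requests that target the retiring models or endpoints. The model names and endpoint paths come from the list above; the helper function itself is a hypothetical illustration, not part of any SDK.

```python
# Models and endpoints retiring on May 12, 2026, per the list above.
DEPRECATED_MODELS = {"dall-e-2", "dall-e-3"}
DEPRECATED_ENDPOINTS = {"/v1/images/edits", "/v1/images/variations"}

def request_will_break(endpoint: str, payload: dict) -> bool:
    """Return True if this request targets a retiring DALL-E model or endpoint."""
    if endpoint in DEPRECATED_ENDPOINTS:
        return True
    return payload.get("model") in DEPRECATED_MODELS

# Example: a generation request still pinned to DALL-E 3 will break.
print(request_will_break("/v1/images/generations", {"model": "dall-e-3"}))  # True
```

Wiring a check like this into your request path (or a CI lint) turns a silent post-deadline outage into an explicit failure before May 12.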
What Is NOT Affected
- ChatGPT image generation — ChatGPT already switched to GPT Image 1.5 in December 2025. If you generate images through ChatGPT (web, mobile, or desktop app), nothing changes for you on May 12.
- Previously generated images — Images you've already created with DALL-E are yours. They don't disappear. But the ability to generate new ones through the DALL-E endpoints ends.
- GPT Image API endpoints — The newer image generation endpoints that use GPT Image 1.5 (and soon GPT-Image-2) continue to function normally.
Impact on Existing Integrations
This is where the real disruption hits. Any application, service, or workflow that makes direct API calls to DALL-E 2 or DALL-E 3 will break on May 12 unless migrated. This includes:
- SaaS products that offer AI image generation powered by DALL-E
- Marketing automation tools with DALL-E integrations
- Design tools and Figma/Canva plugins that call the DALL-E API
- Custom internal tools built on the DALL-E endpoints
- No-code/low-code workflows (Zapier, Make, etc.) that reference DALL-E model names
- Mobile apps using the OpenAI SDK with DALL-E model specifications
If you maintain any of these, May 12 is a hard deadline.
What Replaces DALL-E: The Shift to Multimodal LLM-Integrated Generation
The retirement of DALL-E isn't just a product swap. It represents a fundamental architectural shift in how OpenAI approaches image generation. The old model: a specialized image generation system that receives a text prompt and returns an image. The new model: a multimodal LLM that can generate images as one of its native output modalities, with full awareness of conversation context.
GPT Image 1.5: The Current Default
GPT Image 1.5 has been the default image generation model in ChatGPT since December 2025. It's also available through the API. Here's what defines it:
- Conversation-aware generation. Unlike DALL-E, which treated each prompt as an isolated request, GPT Image 1.5 understands the full conversation context. If you've been discussing brand guidelines for 10 messages, the image it generates reflects that entire conversation — not just the final prompt.
- Iterative refinement. You can say "make the background darker" or "move the text to the left" and GPT Image 1.5 understands what you're referring to. DALL-E required you to re-describe the entire image from scratch for each iteration.
- Faster generation. GPT Image 1.5 produces results noticeably faster than DALL-E 3, particularly for simple requests.
- Integrated with text reasoning. Because the image generation happens within the LLM itself, the model can reason about what to generate before generating it. This leads to better adherence to complex, multi-part prompts.
For API users, the migration path from DALL-E 3 to GPT Image 1.5 is straightforward. The endpoint structure is similar, though there are differences in parameters and pricing that need to be accounted for.
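A parameter-translation shim is one way to stage that migration. The sketch below is a minimal illustration under stated assumptions: the target identifier `"gpt-image-1.5"` is a guess at how the model might be named in the API (check OpenAI's published model list for the real string), and which DALL-E parameters carry over is exactly the kind of difference the docs should be consulted on.

```python
# Hypothetical DALL-E 3 -> GPT Image request translation. The target model
# identifier ("gpt-image-1.5") is an assumption, not a confirmed API name,
# and the set of parameters to drop or remap will depend on the actual API.
def migrate_generation_params(dalle_params: dict) -> dict:
    params = dict(dalle_params)  # don't mutate the caller's payload
    if params.get("model") == "dall-e-3":
        params["model"] = "gpt-image-1.5"  # hypothetical identifier
        # DALL-E-specific knobs may not carry over unchanged.
        params.pop("style", None)
    return params

old = {"model": "dall-e-3", "prompt": "a lighthouse at dusk", "size": "1024x1024"}
print(migrate_generation_params(old)["model"])
```

Keeping the mapping in one function means the eventual GPT-Image-2 switch is a one-line change rather than a codebase-wide search.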
GPT-Image-2: The Imminent Successor
GPT-Image-2 hasn't been officially announced yet, but it's an open secret at this point. On April 4, 2026, a model matching GPT-Image-2's expected specifications appeared on LM Arena (formerly LMSYS Chatbot Arena), the crowdsourced AI benchmark platform. The results were striking.
We've published a detailed review based on the LM Arena data and early access testing: GPT-Image-2 Preview Review. The highlights:
- 99% text rendering accuracy. This has been the Achilles' heel of AI image generation since the beginning. DALL-E 3 could occasionally render short text correctly. GPT-Image-2 handles paragraphs, logos, and complex typography with near-perfect accuracy.
- Color cast elimination. One of GPT Image 1.5's known issues — a tendency to add unwanted color tints to generated images — appears to be resolved in GPT-Image-2.
- 4K resolution output. Previous models topped out at 1024x1024 or similar resolutions. GPT-Image-2 generates natively at up to 4K, which matters for print, large-format displays, and professional design workflows.
- New architecture. While OpenAI hasn't disclosed the technical details, the quality jump suggests a significant architectural change rather than incremental improvement over GPT Image 1.5.
The expected release timeline is late April to mid-May 2026 — conveniently timed to coincide with the DALL-E shutdown, giving API users a clear upgrade path.
The Architectural Shift: Why This Matters
The move from DALL-E to GPT Image represents more than a product update. It's a philosophical shift in how image generation works:
| DALL-E Architecture | GPT Image Architecture |
|---|---|
| Standalone diffusion model | Native capability of multimodal LLM |
| Isolated prompt-to-image pipeline | Context-aware within conversation |
| Text prompt is the only input | Text, images, conversation history, and reasoning all inform generation |
| Each generation is independent | Iterative refinement within a session |
| Separate safety/moderation layer | Safety integrated into the model's reasoning |
| Fixed output sizes (1024x1024, etc.) | Flexible output sizes up to 4K |
This is the same pattern we've seen across AI: specialized, single-purpose models being absorbed into general-purpose multimodal systems. Image generation is following the same path that code generation, data analysis, and web browsing already took within ChatGPT.
GPT Image 1.5 vs. DALL-E 3: What Actually Changed
For the millions of users who were transitioned from DALL-E 3 to GPT Image 1.5 in December 2025, the change wasn't entirely seamless. Some things got better. Some things users miss. Here's an honest assessment.
What's Better in GPT Image 1.5
- Conversational context. This is the biggest improvement. DALL-E 3 in ChatGPT would use ChatGPT to rewrite your prompt before sending it to the DALL-E model, but the image model itself had no awareness of your conversation. GPT Image 1.5 natively understands the thread. The difference shows up most when you're iterating: "Now make it more minimalist" actually works as expected.
- Speed. GPT Image 1.5 generates images noticeably faster than DALL-E 3 did, particularly for standard-complexity requests.
- Text in images. While still not perfect (GPT-Image-2 is the real leap here), GPT Image 1.5 handles text rendering better than DALL-E 3 in most cases. Short phrases, labels, and signs are more consistently accurate.
- Prompt adherence for complex scenes. Multi-subject, multi-action prompts that DALL-E 3 would partially ignore are handled more reliably by GPT Image 1.5.
- Consistent style within a session. Because the model maintains context, generating multiple images in the same style within one conversation is much easier. You don't need to repeat detailed style descriptions for each generation.
What Users Miss from DALL-E 3
- Certain artistic styles. DALL-E 3 had a particular aesthetic that some users preferred, especially for illustration-style outputs. It excelled at a "clean digital illustration" look that GPT Image 1.5 doesn't always replicate exactly.
- Predictability. DALL-E 3's behavior was more predictable in a narrow sense — same prompt, similar output. GPT Image 1.5's context-awareness means it can produce different results depending on conversation history, which is usually a benefit but occasionally a frustration.
- The editing endpoints. DALL-E 2's inpainting and outpainting were specific capabilities that don't have direct equivalents in the GPT Image API yet. Users who built workflows around these features need alternative approaches.
- Pricing clarity. DALL-E 3 had straightforward per-image pricing. GPT Image 1.5 pricing through the API is token-based, which can be harder to predict for budgeting purposes.
The Net Assessment
For most users and use cases, GPT Image 1.5 is a clear upgrade over DALL-E 3. The conversational context and iterative refinement capabilities alone make it the better tool for anyone who generates images as part of a creative workflow. The users most affected by the transition are those who built specific automation pipelines around DALL-E 3's exact behavior and API structure.
GPT-Image-2: The Real Successor
If GPT Image 1.5 is the bridge, GPT-Image-2 is the destination. Based on the LM Arena results from April 4 and early access reports, GPT-Image-2 represents a generational leap that makes the DALL-E retirement feel less like a loss and more like a necessary clearing of the path.
What We Know So Far
We've covered GPT-Image-2 in depth in our full review, but here are the key facts relevant to the DALL-E retirement context:
- Text rendering is essentially solved. 99% accuracy on text within images. This was the single most common complaint about every image generation model since DALL-E's inception. GPT-Image-2 handles multi-line text, different fonts, logos, and typographic layouts with near-perfect fidelity.
- 4K native resolution. No upscaling tricks. The model generates at up to 4096x4096 natively. For professional design, print production, and high-resolution marketing materials, this removes a major limitation.
- The color cast problem is fixed. GPT Image 1.5 has a known tendency to introduce unwanted warm or cool tints. GPT-Image-2 produces neutral, accurate colors by default while still being responsive to color direction in prompts.
- Photorealism reaches a new benchmark. Side-by-side comparisons show GPT-Image-2 producing photorealistic outputs that are materially harder to distinguish from photographs than any previous model.
- Style range. Early testing suggests GPT-Image-2 handles a wider range of artistic styles than GPT Image 1.5, potentially addressing the complaints from users who preferred DALL-E 3's illustration capabilities.
Expected Availability
OpenAI hasn't published an official release date, but multiple signals point to late April or early-to-mid May 2026. The timing makes strategic sense: announce GPT-Image-2 availability before May 12, giving DALL-E API users a compelling reason to migrate rather than just a deadline forcing them off the old model.
For API users planning their migration, the practical advice is: migrate to GPT Image 1.5 now to ensure continuity on May 12, then upgrade to GPT-Image-2 when it becomes available.
The Competitive Landscape Without DALL-E
DALL-E's retirement doesn't happen in a vacuum. The AI image generation market in 2026 is vastly more competitive than when DALL-E 2 first launched in 2022. Here's who benefits from DALL-E's exit and where the market stands.
Midjourney
Midjourney has been DALL-E's primary competitor in the consumer market since 2022. With DALL-E gone, Midjourney becomes the most prominent standalone AI image generation brand. Their V7 model, released in early 2026, produces exceptional results for artistic and creative use cases. Midjourney's strength has always been aesthetic quality and community — they've built a loyal user base that was never going to switch to DALL-E regardless.
DALL-E's retirement may push some users to Midjourney who want a dedicated image generation tool rather than an integrated ChatGPT experience. But Midjourney's Discord-first interface and lack of a full-featured API (their web app is still relatively new) limit its appeal for developers and enterprise users.
Flux (by Black Forest Labs)
Flux has emerged as the open-source leader in image generation. Flux Pro and Flux Dev offer quality competitive with DALL-E 3, and the open-source Flux Schnell model has become the go-to for developers who want fast, free image generation they can run locally. DALL-E's retirement strengthens Flux's position as the primary alternative for developers who want more control over their image generation stack and don't want to depend on OpenAI's product decisions.
Ideogram
Ideogram carved out a niche early with superior text rendering in images — the exact area where DALL-E consistently struggled. With GPT-Image-2 reportedly solving the text problem, Ideogram faces new competitive pressure from above, but DALL-E's exit as a mid-market option could push more users toward Ideogram's specialized strengths in design and typography-focused generation.
Nano Banana Pro and Nano Banana 2
Nano Banana has been gaining traction as a fast, high-quality option that excels at photorealism. As we covered in our GPT-Image-2 comparison review, Nano Banana 2 competes directly with GPT-Image-2 on several benchmarks. DALL-E's exit opens up market space that Nano Banana is well-positioned to fill, particularly for API users who want alternatives to OpenAI's ecosystem.
Stable Diffusion (by Stability AI)
Stability AI has had a turbulent few years, but Stable Diffusion remains one of the most widely used image generation models, particularly in the open-source and self-hosted space. The SD3 and SDXL ecosystems have massive communities of fine-tuned models and tools. For users who want maximum customization, local inference, or specialized fine-tuning, Stable Diffusion continues to be the primary option. DALL-E's exit doesn't directly impact this market segment, but it reinforces the trend toward either fully integrated solutions (like GPT Image) or fully open ones (like SD).
Google's Imagen and Gemini
Google's Imagen 3, available through Gemini and the Vertex AI API, is another multimodal-LLM-integrated image generation system. Google is following a similar architectural path to OpenAI: image generation as a native capability of the conversational AI rather than a standalone service. DALL-E's retirement validates this approach and may accelerate Google's investment in Gemini's image capabilities.
The Bigger Picture
DALL-E's exit clarifies the market into three tiers:
- Integrated multimodal platforms (OpenAI GPT Image, Google Gemini/Imagen) — image generation as a feature of a general-purpose AI
- Dedicated image generation services (Midjourney, Ideogram, Nano Banana) — specialized tools for users who prioritize image quality and creative control
- Open-source and self-hosted (Flux, Stable Diffusion) — maximum control and customization for developers and enterprises with specific requirements
DALL-E occupied an awkward middle ground: a standalone image model from a company that was increasingly focused on integrated multimodal AI. Its retirement resolves that tension.
Market Share Implications
DALL-E's retirement redistributes a significant user base. While exact numbers aren't public, DALL-E 3 was one of the most widely used image generation APIs, particularly among enterprise customers who defaulted to OpenAI's ecosystem for all their AI needs. Those users now face a choice: stay within OpenAI's ecosystem (GPT Image 1.5 / GPT-Image-2), diversify to specialized tools, or adopt multi-model platforms that abstract over multiple providers.
The developers most likely to leave OpenAI's image generation ecosystem entirely are those who were already frustrated with DALL-E 3's limitations — particularly around text rendering, artistic control, and the lack of fine-tuning options. For these users, Flux's open-source customizability or Midjourney's superior aesthetic output were already tempting. The forced migration removes inertia as a factor.
What API Users Need to Do Before May 12: A Migration Checklist
If you have any production system that calls the DALL-E 2 or DALL-E 3 API, the clock is ticking. Here's a practical migration plan.
Step 1: Audit Your DALL-E Usage
- Search your codebase for references to the `dall-e-2` and `dall-e-3` model names
- Check for calls to `/v1/images/generations`, `/v1/images/edits`, and `/v1/images/variations`
- Review your OpenAI dashboard usage logs to identify all applications consuming DALL-E endpoints
- Check no-code/low-code tools (Zapier, Make, Retool, etc.) for DALL-E integrations
- Audit Azure OpenAI deployments if applicable
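The codebase-search steps above can be automated. This is a minimal sketch (a `grep -rE` would do the same job); it scans only `.py` files, so extend the glob to whatever file types your stack actually uses.

```python
import re
from pathlib import Path

# Patterns from the audit list above: deprecated model names and endpoints.
AUDIT_PATTERN = re.compile(
    r"dall-e-[23]|/v1/images/(generations|edits|variations)"
)

def audit_source_tree(root: str) -> list:
    """Return (path, line number, line) for every DALL-E reference found."""
    hits = []
    for path in Path(root).rglob("*.py"):  # extend the glob for your stack
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if AUDIT_PATTERN.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Remember that a clean code scan doesn't cover the dashboard logs or no-code tools in the checklist — those have to be audited by hand.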
Step 2: Understand the API Differences
- Model name change: Update `"model": "dall-e-3"` to the appropriate GPT Image model identifier
- Parameter differences: Some DALL-E-specific parameters (like `quality` and `style`) may work differently or have different valid values in the GPT Image API
- Response format: Verify that the response structure matches your parsing logic
- Pricing model: GPT Image uses token-based pricing rather than per-image pricing. Update your cost tracking and budgeting accordingly
- Rate limits: Check that your rate limits for the new endpoints match your usage patterns
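The pricing-model change is worth modeling explicitly. The sketch below shows the shape of token-based budgeting, not real numbers: the per-token rate is a placeholder, so substitute the actual prices from OpenAI's pricing page before using anything like this for forecasting.

```python
# Token-based cost estimation sketch. The rate below is a PLACEHOLDER,
# not a real OpenAI price -- pull current rates from the pricing page.
HYPOTHETICAL_RATE_PER_OUTPUT_TOKEN = 0.00004  # placeholder value

def estimate_image_cost(output_tokens: int) -> float:
    """Rough per-image cost under token-based pricing."""
    return output_tokens * HYPOTHETICAL_RATE_PER_OUTPUT_TOKEN

# Under per-image pricing the cost was fixed up front; under token pricing
# it scales with the tokens an image consumes (which varies with size/quality).
print(f"${estimate_image_cost(4000):.2f}")
```

The practical consequence for budgeting: instead of multiplying image count by a flat price, you now track a distribution of per-image token usage and forecast from that.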
Step 3: Update and Test
- Update your OpenAI SDK to the latest version (older versions may not support the GPT Image endpoints)
- Modify API calls to target the new model and endpoint
- Run your existing prompt suite against GPT Image 1.5 and compare outputs
- Test edge cases: very long prompts, prompts with specific style requirements, prompts that previously worked well with DALL-E's particular aesthetic
- If you used DALL-E 2's edit or variation endpoints, implement alternative workflows (GPT Image handles iterative editing through conversation context rather than dedicated endpoints)
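Running your existing prompt suite against the new model is easiest with a small harness. This sketch injects the generation call as a plain callable (any wrapper around your SDK of choice), so it names no real SDK methods and keeps failures from aborting the whole suite.

```python
# Regression harness sketch for comparing models against a prompt suite.
# `generate` is any callable you supply (e.g., a thin wrapper around your
# SDK call); nothing here assumes a specific SDK method name.
def run_prompt_suite(prompts, generate, model):
    results = {}
    for prompt in prompts:
        try:
            results[prompt] = generate(model=model, prompt=prompt)
        except Exception as exc:  # record failures instead of aborting the run
            results[prompt] = f"ERROR: {exc}"
    return results

# Run the same suite with model="dall-e-3" and the GPT Image identifier,
# then compare the two result sets (and review outputs by eye).
```

For image outputs the "comparison" is necessarily partly manual, but the harness at least guarantees every prompt in the suite is exercised against both models under identical conditions.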
Step 4: Handle the Inpainting/Outpainting Gap
If your product relied on DALL-E 2's `/v1/images/edits` endpoint for inpainting or outpainting, you need an alternative approach. Options include:
- Using GPT Image's conversational editing capabilities (describe the edit you want in natural language)
- Integrating an alternative inpainting solution (Flux Fill, Stable Diffusion inpainting)
- Waiting for GPT-Image-2, which is expected to include more robust editing capabilities
Step 5: Update Documentation and Communication
- Update your product documentation to reflect the model change
- If your product mentions "Powered by DALL-E" or similar branding, update it
- Notify users if the change affects their experience (different output style, pricing changes, etc.)
- Update your terms of service or privacy policy if they reference specific OpenAI models
Step 6: Plan for GPT-Image-2
- Migrate to GPT Image 1.5 now for May 12 continuity
- Design your integration to make model swapping easy (configuration-based model selection rather than hardcoded)
- When GPT-Image-2 launches, test it against your use cases before switching production traffic
- Consider offering users a choice between models if your product's quality requirements warrant it
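The "configuration-based model selection" bullet above can be as simple as reading the identifier from the environment. A minimal sketch, assuming hypothetical model identifiers (the real strings come from OpenAI's model list):

```python
import os

# Keep the model identifier out of call sites so swapping to GPT-Image-2
# later is a config change, not a code change. Identifier is hypothetical.
DEFAULT_IMAGE_MODEL = "gpt-image-1.5"

def current_image_model() -> str:
    """Read the image model from the environment, with a safe default."""
    return os.environ.get("IMAGE_MODEL", DEFAULT_IMAGE_MODEL)
```

When GPT-Image-2 ships, flipping `IMAGE_MODEL` in one deployment environment lets you A/B the new model against production traffic before committing.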
OpenAI's Creative Product Strategy: A Pattern Emerges
Zoom out from the DALL-E retirement and a clear pattern emerges in OpenAI's product decisions over the past year.
The Retreat from Standalone Creative Tools
March 2026: Sora shut down. OpenAI's text-to-video model, which launched with enormous hype in early 2024, was retired after struggling with competition, cost structure, and safety concerns. Video generation capabilities are being folded into the ChatGPT/API ecosystem rather than maintained as a separate product.
May 2026: DALL-E shut down. The image generation pioneer, retired in favor of integrated multimodal generation within GPT models.
Two of OpenAI's most publicly visible creative AI products, gone within two months. This isn't coincidence — it's strategy.
The Integration Thesis
OpenAI's bet is that creative capabilities are more valuable as features of a general-purpose AI system than as standalone products. The reasoning:
- Context matters. An image generation model that understands your conversation, your project, and your preferences produces better results than one that sees each prompt in isolation.
- Maintenance cost. Running separate models for text, images, video, code, and other modalities is expensive and complex. Consolidating into a single multimodal architecture is more efficient.
- User experience. Users don't want to context-switch between tools. They want one interface that handles everything. The popularity of "GPT, make me an image" within ChatGPT versus opening a separate DALL-E tool proves this.
- Competitive positioning. The standalone image generation market is crowded (Midjourney, Flux, Ideogram, Stable Diffusion). The integrated multimodal AI market is less contested and harder to replicate.
What This Means for the Industry
OpenAI's move signals a broader trend that will affect the entire AI industry:
- Standalone creative AI tools face consolidation pressure. If the largest AI company in the world decided that standalone image and video generation models aren't worth maintaining separately, smaller companies building similar standalone products should take notice.
- Multimodal is the new baseline. Expect Google (Gemini), Anthropic (Claude), and other major AI labs to accelerate their own multimodal capabilities. The expectation is shifting from "can your AI generate images?" to "can your AI generate images, video, audio, and code within a single conversation?"
- API stability becomes a real concern. Developers who built on DALL-E are now forced to migrate. This experience will make teams more cautious about deep integration with any single model, and more interested in abstraction layers that insulate them from upstream model changes.
- The open-source advantage grows. One thing that Flux and Stable Diffusion can offer that OpenAI cannot: they won't be retired by a corporate product decision. For organizations that need long-term stability, self-hosted open-source models become more attractive after seeing DALL-E and Sora shut down.
- Abstraction layers become essential infrastructure. The DALL-E retirement is a case study in why direct model coupling is risky. Expect more demand for middleware and orchestration platforms that decouple applications from specific model providers.
Genra's Perspective
We'll keep this brief because this article is about DALL-E and OpenAI's strategy, not about us. But the DALL-E retirement does illustrate something we've built our platform around.
At Genra, we integrate multiple image and video generation models behind the scenes. When you create content through Genra, our multi-model orchestration layer selects the best available model for your specific request — considering factors like image type, style requirements, resolution needs, and speed. When DALL-E retires on May 12, Genra users won't notice anything. The orchestration layer will simply stop routing to DALL-E endpoints and continue routing to GPT Image 1.5, GPT-Image-2 (when available), and other models in our stack.
This is the advantage of working at the platform level rather than directly with individual model APIs. Models come and go. Products get retired. The platforms that abstract over multiple models provide continuity that single-model integrations cannot.
Key Takeaways
- DALL-E 2 and DALL-E 3 APIs shut down on May 12, 2026. Both endpoints will stop accepting requests. If you have production integrations, migration is mandatory, not optional.
- ChatGPT users are already on GPT Image 1.5. The consumer-facing transition happened in December 2025. May 12 primarily affects API users and Azure OpenAI deployments.
- GPT Image 1.5 is the immediate replacement. It's live, it's available through the API, and it's a genuine upgrade in terms of conversational context and iterative refinement.
- GPT-Image-2 is coming imminently. Expected late April to mid-May 2026, with 99% text rendering, 4K resolution, and resolved color cast issues. This is the real successor to DALL-E.
- The architectural shift is from standalone to integrated. OpenAI is moving image generation from a separate model to a native capability of its LLMs. This is the same path Google is taking with Gemini/Imagen.
- Sora + DALL-E retirements show a clear strategy. OpenAI is pulling back from standalone creative tools in favor of capabilities integrated within ChatGPT and the API. Expect this trend to continue.
- The competitive landscape benefits everyone else. Midjourney, Flux, Ideogram, Nano Banana, and Stable Diffusion all gain market share as DALL-E exits the standalone image generation space.
- API stability is a growing concern. Two major model retirements in two months will push developers toward abstraction layers and multi-model platforms that insulate against upstream changes.
Frequently Asked Questions
When exactly does DALL-E shut down?
Both DALL-E 2 and DALL-E 3 APIs will stop accepting requests on May 12, 2026. After that date, any API call specifying a DALL-E model will return an error. ChatGPT image generation is not affected, as it already transitioned to GPT Image 1.5 in December 2025.
Will my existing DALL-E generated images be deleted?
No. Images you've already generated with DALL-E are yours and will not be removed. The retirement only affects the ability to generate new images through DALL-E endpoints. Any images stored in your OpenAI account history or downloaded locally remain accessible.
What is the direct replacement for the DALL-E 3 API?
GPT Image 1.5 is the current replacement, available through OpenAI's API. GPT-Image-2 is expected to launch in late April to mid-May 2026 as a further upgrade. The API structure is similar but not identical to DALL-E 3 — you'll need to update model names, review parameter changes, and adjust for token-based pricing.
Is GPT Image 1.5 better than DALL-E 3?
For most use cases, yes. GPT Image 1.5 offers better conversational context awareness, faster generation, improved text rendering, and stronger adherence to complex prompts. Some users miss DALL-E 3's particular illustration aesthetic and the predictability of its outputs. The editing endpoints (inpainting, outpainting, variations) from DALL-E 2 don't have direct equivalents yet.
What happened to Sora, and is it related to the DALL-E shutdown?
OpenAI shut down Sora, its text-to-video model, in March 2026. While OpenAI hasn't explicitly linked the two decisions, they follow the same pattern: retiring standalone creative AI products and folding those capabilities into integrated multimodal systems within ChatGPT and the API. Both decisions reflect OpenAI's strategic shift away from maintaining separate models for each creative modality.
Are Azure OpenAI DALL-E deployments also affected?
Yes. Azure OpenAI customers who deployed DALL-E 2 or DALL-E 3 through Azure OpenAI Service are affected by the same May 12, 2026 shutdown date. Microsoft has issued migration guidance for Azure customers. Check the Azure OpenAI Service documentation for Azure-specific migration paths and alternative model deployments.
What should I use if I need inpainting or outpainting, since those DALL-E 2 endpoints are being retired?
You have several options: use GPT Image 1.5's conversational editing (describe the edit you want in natural language), integrate an alternative like Flux Fill or Stable Diffusion inpainting for programmatic use, or wait for GPT-Image-2 which is expected to include enhanced editing capabilities. The approach depends on whether you need API-level programmatic access or can work within a conversational interface.
How does this affect platforms like Genra that use multiple AI models?
Multi-model platforms are the least affected by individual model retirements. Platforms like Genra that integrate multiple image generation models behind the scenes can automatically reroute requests when a model is retired, ensuring users experience no disruption. This is one of the practical benefits of using a platform layer rather than integrating directly with a single model's API.