Twenty-five years in advertising, and I've never seen a tool shift the creative brief conversation the way Midjourney has. Clients used to come in with moodboards torn from magazines. Now they come in with Midjourney grids and say "something like this, but more blue."
That's either the beginning of a beautiful workflow or the death of creative direction, depending on your mood.
After three months using Midjourney v7 as a primary tool on real client projects -- brand identity work, editorial illustrations, social content -- here's what I've actually learned.
## What Changed in v7
Midjourney has been iterating fast. v7 landed in early 2026 and the differences from v6 are real, not just marketing.
The biggest change is what I'd call coherence. Complex scenes -- multiple people, busy backgrounds, specific spatial relationships -- used to be where Midjourney fell apart. v7 handles them better. Not perfectly, but noticeably better. I tested this on a brief for a client who needed editorial-style images of small business teams. v6 kept generating weirdly posed people who didn't seem to be in the same physical space. v7 got it right about 60% of the time without extra prompting. That matters when you're billing hourly.
Text rendering improved. Not to Ideogram levels -- we'll get to that -- but you can now get legible short phrases on signs, buttons, and simple layouts without running to a different tool. "Open" on a storefront door. A product label with a short name. One or two words on a poster. That used to be a nightmare.
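One prompting habit that helps here -- a community convention, not an official guarantee: put the exact text you need inside double quotes, keep it to one or two words, and describe the surface it appears on. Something like:

```
a hand-painted wooden sign hanging in a cafe window that says "Open",
soft morning light, shallow depth of field --ar 3:2
```

Even with quoting, check every output. v7 still drops or mangles letters often enough that you can't skip the verification pass.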
The personalization system is genuinely interesting. After a few hundred image ratings, Midjourney learns your preferences and starts generating images that lean into your aesthetic without explicit prompting. I found it useful but a little eerie. It works.
## The Interface Situation
For most of Midjourney's existence, you had to use it through Discord. Typing commands into a chat interface to generate images. Watching your prompts get shuffled between other users' generations in a shared bot channel. Clicking a tiny upscale button before your image scrolled off the screen.
It was ridiculous. We all used it anyway because the output was that good.
The web interface -- midjourney.com -- finally exists and actually works. Clean grid view, project folders, image history, and a prompt input that doesn't require Discord syntax. You can remix images directly, adjust aspect ratios with sliders, and see your generation history without scrolling through Discord chat.
I switched to the web interface immediately and haven't gone back to Discord. Some power users still prefer Discord for the speed of keyboard shortcuts. But for any new user, the web interface is the way to go. It's what Midjourney should have launched with two years ago.
## Where Midjourney Still Wins
### The Aesthetic Eye
This is the thing that's hardest to quantify and most important to understand. Midjourney makes images that look like they were meant to look that way. There's a compositional intention to them -- a sense of light, mood, and visual weight -- that DALL-E and even the best open-source models don't consistently match.
I ran a test during a brand identity project. Same prompt, same aspect ratio, same subject: "a founder working late at night in a minimalist loft office, warm lamp light, laptop glow, city lights through floor-to-ceiling windows, editorial photography style."
Midjourney: the light sources interacted naturally. The atmosphere was specific. I could show that image to a creative director without explanation.
DALL-E 3.5: technically accurate, good prompt adherence, but flatter. It looked like a stock photo. Not a bad stock photo, but it didn't have a feeling.
For mood boards, editorial work, brand concepts, and anything where emotional resonance matters -- Midjourney is still the one.
### Character Reference and Style Reference
These features arrived in late v6 and v7 has improved them further. Character reference (--cref) lets you maintain visual consistency for a specific character across different scenes and styles. Style reference (--sref) lets you lock in an aesthetic -- a particular illustrative style, a lighting approach, a color palette -- and apply it consistently across a project.
I used both on a children's book concept for a client. Fed Midjourney an initial character sketch, then generated 30 different scenes with that same character using --cref. The consistency was remarkable -- maybe 80% required no additional prompting to maintain the character's visual identity. The remaining 20% needed a couple of prompt tweaks, but the time savings versus manually prompting for consistency on every image were substantial.
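For reference, the syntax is just a URL appended to the prompt. The URLs below are placeholders, not real assets:

```
/imagine prompt: the same fox character building a sandcastle at the beach,
warm storybook illustration --cref https://example.com/fox-character.png
--sref https://example.com/style-board.png --ar 3:2
```

There's also a companion --cw (character weight) parameter worth experimenting with -- lower values carry over roughly just the face, higher values also pull in hair and clothing from the reference.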
For agencies and brand teams, this is the feature that justifies the Pro subscription.
### Community and Inspiration
Midjourney's public explore feed and community galleries are genuinely useful. When you're stuck on how to prompt for a particular aesthetic, searching the community for similar results -- and then inspecting the prompts that generated them -- is faster than any tutorial.
This isn't unique to v7, but it's worth mentioning: the Midjourney community has produced an enormous library of prompting knowledge that you can directly benefit from. The /describe command, which generates a prompt from any image you upload, is a shortcut I use constantly when clients bring in reference images.
## Where Midjourney Falls Short
### Text in Images Is Still Not Reliable
Better than v6. Still not good enough to trust without checking every output. For any design that requires accurate text -- product packaging, signage, logos, anything with a specific word that has to be spelled correctly -- you need to verify every single generation.
For anything beyond a few words, use Ideogram 2.0. The comparison isn't close. Ideogram built text rendering as a core competency. Midjourney added it as an improvement. The difference shows.
### Pricing Gets Complicated Fast
Basic plan at $10/month gives you 200 image generations. That sounds like a lot. It isn't, for real work. I burn through 200 generations testing a single campaign concept.
Standard at $30/month is where actual users live. You get 15 hours of "fast GPU time" per month -- roughly 900-1,000 image generations if you're using standard quality settings. On a busy project week, I've hit that limit in four days. Then you're on "relax mode," which queues your generations behind other users. Waits of 5-10 minutes per image. Brutal during a deadline.
Pro at $60/month doubles the fast GPU hours and adds stealth mode (private generations). For client work where you need to keep creative concepts confidential, stealth mode matters. At $30 more per month, it's justified if you have the client volume.
You can also buy additional fast GPU hours at $4/hour. I've done this. It adds up.
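To make the math concrete, here's a back-of-the-envelope sketch. The one-minute-per-generation figure is my assumption, not Midjourney's published rate, but it lines up with the ~200 and ~900 image counts above:

```python
# Rough cost-per-image math for Midjourney's plans.
# Assumption (mine): ~1 minute of fast GPU time per generation.

SECONDS_PER_IMAGE = 60  # assumed average fast-mode cost per generation

def images_per_month(fast_gpu_hours: float) -> int:
    """Approximate fast-mode generations a plan's GPU allowance buys."""
    return int(fast_gpu_hours * 3600 / SECONDS_PER_IMAGE)

def cost_per_image(monthly_price: float, fast_gpu_hours: float) -> float:
    """Effective dollars per generation if you use the full allowance."""
    return monthly_price / images_per_month(fast_gpu_hours)

plans = {
    "Basic": (10, 3.3),
    "Standard": (30, 15),
    "Pro": (60, 30),
    "Mega": (120, 60),
}

for name, (price, hours) in plans.items():
    print(f"{name}: ~{images_per_month(hours)} images, "
          f"~${cost_per_image(price, hours):.3f}/image")

# Top-up hours at $4/hour work out to roughly $0.067 per image --
# about double the effective rate on Standard and up, which is why
# "it adds up."
print(f"Top-up: ~${4 / images_per_month(1):.3f}/image")
```

The takeaway: every paid tier from Standard up costs about the same per image; what you're really buying at higher tiers is headroom before you hit relax mode.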
### Prompt Adherence vs. DALL-E
Here's the honest trade-off. Midjourney interprets your prompt with its aesthetic sensibility. That produces beautiful images. It also means it sometimes ignores what you specifically asked for.
You asked for a red car. Midjourney liked the composition better in burgundy. You asked for the product in the foreground. Midjourney put it tastefully off to the side.
DALL-E 3.5 follows instructions more literally. That's less artistically interesting but more professionally reliable when a client's brief has specific requirements. I've switched to DALL-E mid-project when a client was getting frustrated with Midjourney's creative liberties. Not my preferred outcome, but sometimes the right call.
If you need control, DALL-E. If you need inspiration, Midjourney.
### Editing and Inpainting Lag the Competition
Adobe Firefly lets you paint directly on an image and regenerate specific regions. DALL-E has its own edit features. Midjourney's equivalent -- the Vary (Region) tool for regenerating selected sections of a generated image -- is clunkier than the competition.
For concepts and ideation, this doesn't matter much. For delivering final assets that need post-generation cleanup, you're going to be exporting to Photoshop or another tool. Factor that into your workflow.
## Midjourney vs. The Competition in 2026
The image generation space moved fast in 2025-2026. Midjourney's lead has narrowed. Here's where things actually stand:
vs. DALL-E 3.5: DALL-E closes the quality gap on photorealistic images and wins on prompt accuracy and text. Midjourney still wins on artistic/editorial work where mood matters more than accuracy. For professional creative work, I use both -- Midjourney for concept exploration, DALL-E when I need to nail a specific specification.
vs. Ideogram 2.0: Ideogram's text-in-image capability is the best in the business. If typography is part of your design, Ideogram is the tool. On pure image quality outside of text, Midjourney still has an edge, but Ideogram has gotten competitive.
vs. Leonardo AI: Leonardo's strength is its fine-tuning and custom model training features. If you're generating large volumes of content in a consistent style -- product images, game assets, brand content -- Leonardo's workflow is worth the extra complexity. For general creative use, Midjourney is easier and more consistently beautiful.
vs. Stable Diffusion / Flux: Open-source tools win on price (free if you run locally) and customization (you can fine-tune on anything). They lose on ease of use and general-purpose image quality. For technical users with specific needs, worth exploring. For most creative professionals, the setup overhead isn't worth it. See our comparison of Midjourney, DALL-E, and Ideogram for a more detailed breakdown.
## Pricing Breakdown
| Plan | Price | Fast GPU Hours | Key Feature |
|---|---|---|---|
| Basic | $10/month | 3.3 hrs (~200 imgs) | Web access, standard quality |
| Standard | $30/month | 15 hrs (~900 imgs) | Unlimited relax-mode generations |
| Pro | $60/month | 30 hrs (~1,800 imgs) | Stealth mode, 12 concurrent jobs |
| Mega | $120/month | 60 hrs (~3,600 imgs) | For studios and heavy production |
Annual billing gives you a 20% discount across all plans.
For most individual creative professionals, Standard at $30/month is the right tier. If you're regularly working on client projects with specific confidentiality requirements, Pro's stealth mode justifies the extra $30. Basic is honestly too limited for production use -- treat it as an extended trial.
## Who Should Use Midjourney v7
Midjourney is the right tool if you:
- Do editorial, advertising, or brand creative work where aesthetic quality is the primary metric
- Need to develop campaign concepts and mood boards quickly
- Work with character or brand consistency across multiple images (character reference)
- Don't need precise text rendering in your images
- Are willing to pay $30-60/month for consistently beautiful output
Consider an alternative if you:
- Need accurate text in your images (use Ideogram 2.0 instead)
- Have very specific compositional or instructional requirements (DALL-E is more reliable)
- Are primarily generating product photography for e-commerce (DALL-E or specialized tools)
- Have a tight budget (Ideogram has a generous free tier; Midjourney's is genuinely limited)
For a full guide on using these tools for actual content work, check out how to write better AI image prompts -- the prompting techniques that work best for Midjourney are specific and learnable.
## The Verdict: 4.5 out of 5
Midjourney v7 is still the best AI image generator for creative work. Not because the competition is bad -- it isn't -- but because Midjourney's aesthetic sensibility is a genuine moat that isn't easily replicated.
That 0.5 deduction is for the prompt adherence trade-off, the text rendering that still needs a babysitter, and the pricing that climbs fast once you're in production.
The web interface was long overdue and finally good. The character and style reference features are legitimately powerful for professional workflows. v7's improvements to scene coherence are real.
Twenty-five years in creative work, and I'd rather spend an hour exploring Midjourney outputs than two days searching stock libraries. The economics of creative exploration have changed. Whether that's good or bad depends on what you do next with what it gives you.