DEV Community

Tongyi Lab

Posted on Nov 21, 2025 | The Tongyi Weekly: Your weekly dose of cutting-edge AI from Tongyi Lab

Hello, creators, engineers, and visionaries,

Before we dive in this week, we have a milestone to share — and it belongs to you.

10 million users are now creating with Qwen Chat! Not just asking questions, but writing code, designing images, uncovering insights, and bringing invisible visions to life.

This week wasn’t just about releases. It was about awakening new possibilities.

From an agent system that evolves itself, to video models climbing the global leaderboards — we’re witnessing AI innovation and creativity, powered by your ingenuity.

👉 Subscribe to The Tongyi Weekly and never miss a release:
Subscribe Now: https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7392460924453945345


📣 Model Release & Updates

Introducing AgentEvolver: An Open-Source Self-Evolving Agent System
We’re thrilled to open-source AgentEvolver, an end-to-end, self-evolving training framework that unifies self-questioning, self-navigating, and self-attributing into a cohesive system. It empowers agents to improve their capabilities autonomously, aiming for efficient, cost-effective, and continuous capability evolution.
AgentEvolver provides three Self-Evolving Mechanisms from Environment to Policy:

  • Automatic Task Generation (Self-Questioning) – Explore the environment and autonomously create diverse tasks, eliminating costly manual dataset construction.
  • Experience-guided Exploration (Self-Navigating) – Summarize and reuse cross-task experience, guiding higher-quality rollouts and improving exploration efficiency.
  • Attribution-based Credit Assignment (Self-Attributing) – Process long trajectories to uncover the causal contribution of intermediate steps, enabling fine-grained and efficient policy optimization.

Built on a service-oriented dataflow architecture, AgentEvolver seamlessly integrates environment sandboxes, LLMs, and experience management into modular services.
On the AppWorld and BFCL-v3 benchmarks, AgentEvolver achieves superior results while using substantially fewer parameters than larger baseline models.
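The post doesn't publish AgentEvolver's internals, but the third mechanism is easy to picture. Here is a minimal sketch, assuming a scalar trajectory reward and hypothetical per-step attribution scores (both names are ours, not the framework's):

```python
# Illustrative sketch of attribution-based credit assignment: split one
# scalar trajectory reward across intermediate steps in proportion to
# attribution scores, instead of giving every step the same return.

def assign_step_credit(trajectory_reward, attribution_scores):
    """Return per-step credits that sum to the trajectory reward."""
    total = sum(attribution_scores)
    if total == 0:
        # No attribution signal: fall back to uniform credit.
        n = len(attribution_scores)
        return [trajectory_reward / n] * n
    return [trajectory_reward * s / total for s in attribution_scores]

# A 4-step rollout that earned reward 1.0; step 2 (0-indexed) mattered most.
credits = assign_step_credit(1.0, [0.1, 0.2, 0.6, 0.1])
print(credits)
```

The per-step credits give the optimizer a fine-grained signal over a long trajectory, which is the point of self-attributing.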

Qwen Code v0.2.1 Released: Smarter, Faster, Cleaner
We shipped 8 versions (v0.1.0 → v0.2.1) in 17 days. Here's what's new:

  • Free Web Search: Support for multiple providers; Qwen OAuth users get 2000 free searches per day!
  • Smarter Code Editing: New fuzzy matching pipeline reduces errors and saves tokens—fewer retries needed.
  • More Control: Fine-tune AI behavior with temperature, top_p, and max tokens settings.
  • Better IDE Integration: Enhanced Zed IDE support with todo and task management tools.
  • Cleaner Output: Tool responses now use plain text instead of complex JSON—easier for AI to understand.
  • Improved Search: Better file filtering (respects .gitignore), smarter search tools, and standardized naming.
  • Faster Performance: Multi-stage normalization pipeline for zero-overhead matching, better Unicode handling, and optimized output limits.
  • Bug Fixes: Fixed token limits for multiple models, improved cross-platform support (macOS & Windows), and better stability.
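Qwen Code's actual matching pipeline isn't shown in this post, but the idea behind fuzzy edit matching can be sketched with Python's standard difflib: tolerate small drift between the model's edit target and the file instead of failing and retrying. The function name and cutoff below are illustrative assumptions, not Qwen Code's code:

```python
import difflib

def find_edit_target(file_lines, target):
    """Locate the line an edit refers to, tolerating small drift.

    Try an exact match on whitespace-stripped lines first; otherwise
    fall back to the closest fuzzy match, so a slightly stale edit
    target still lands without a costly retry.
    """
    stripped = [line.strip() for line in file_lines]
    want = target.strip()
    if want in stripped:
        return stripped.index(want)
    matches = difflib.get_close_matches(want, stripped, n=1, cutoff=0.8)
    return stripped.index(matches[0]) if matches else -1

lines = [
    "def greet(name):",
    "    msg = 'hello, ' + name",
    "    return msg",
]
# The model's edit target drifted by one extra space; fuzzy match saves it.
print(find_edit_target(lines, "msg = 'hello,  ' + name"))
```

Fewer failed matches means fewer retries, which is where the token savings come from.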

Try it now


🧩 Ecosystem Highlights

Model Milestone: Wan2.5-Preview landed in the Top 5 on LMArena leaderboards
This week, Wan2.5-Preview hit a new milestone: two of its models, i2v and t2i, landed in the Top 5 on the Image-to-Video and Text-to-Image LMArena leaderboards.

  • Wan2.5-i2v-preview → #3 on Image-to-Video Leaderboard
  • Wan2.5-t2i-preview → #5 on Text-to-Image Leaderboard

Try it now

Wan Powers ElevenLabs’ New Image & Video Platform
We’re proud to see Wan among the leading models powering ElevenLabs’ new creative platform — ElevenLabs Image & Video (Beta).
Try it on ElevenLabs

SGLang Diffusion Joins the Ecosystem — With Wan & Qwen Support!
SGLang Diffusion brings SGLang’s state-of-the-art performance to image and video generation. And yes, it now supports Wan, Qwen-Image, Qwen-Image-Edit, and other major open-source video and image generation models.
We love seeing this kind of ecosystem synergy — this is how AI grows.
SGLang Diffusion


✨ Community Spotlights

Multi-Angle Relighting LoRA: Qwen-Edit-2509-Multi-Angle-Lighting from dx8152
Introducing Qwen-Edit-2509-Multi-Angle-Lighting from dx8152, a LoRA that lets you paint with light.
The idea is simple: use a control map + text prompt to change the lighting. It's still in the early stages (V1), but the potential here is huge.
Try it here

Manga Coloring LoRA: PanelPainter V2
"PanelPainter V2" just dropped, and it's a total glow-up. It's not just a helper anymore; this LoRA is trained to handle the coloring on its own. It's not perfect (consistency is still tricky), but it's a massive step in the right direction.
Try it here

The Nunchaku-quantized versions of Qwen-Image-Edit-2509: nunchaku-qwen-image-edit-2509 from nunchaku-tech
nunchaku-tech dropped quantized versions of the 2509 model, and the big news is the pre-fused Lightning models. We're talking 4-step and 8-step edits.
This is a must-grab for anyone who wants high-speed, low-VRAM image editing.
Try it here

Realistic Photography LoRA: boreal-qwen-image from kudzueye
This experimental LoRA from kudzueye is designed for realistic photography.
There's a ComfyUI workflow included to get you started.

Try it here

Preserving Subjects While Editing Images: Qwen-Image-Edit-InSubject from peteromallet
This LoRA from peteromallet is a fine-tune for QwenEdit that significantly improves its ability to preserve subjects while editing images. It works effectively with both single and multiple subjects in the same image.
Try it here

Book Flatten and Crop LoRA: book_flatten_and_crop_qwen_image_edit_2509 from tarn59
Need to fix those split-page book scans?
Tarn59 just solved that with a new LoRA for Qwen-Image-Edit-2509. It flattens the page, crops the image, and magically removes the middle crease. Works best if you play around with the aspect ratio to match your book.
Try it here

FLAT/LOG Style Images: QwenEdit2509-FlatLogColor from tlennon-ie
AI images usually come "pre-cooked" with too much contrast, which is a nightmare for color grading.

tlennon-ie created a brilliant fix with Qwen-Image-Edit-2509. It converts generations into a flat, LOG-style profile—basically a digital negative that preserves shadow and highlight details.
Perfect if you need to match AI assets with professional video footage.
Try it here
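To see why a flat LOG profile helps, here is a toy log transfer curve in plain Python. This is not the LoRA's method, just an illustration of how log encoding lifts shadows and compresses highlights:

```python
import math

def to_log_profile(v, black=0.01):
    """Map a linear pixel value in [0, 1] to a flat, LOG-style value.

    A log curve lifts shadows and compresses highlights so that detail
    at both ends survives later color grading. `black` is a small
    offset keeping log() defined at v = 0. Illustrative constants only;
    real LOG profiles (S-Log, C-Log) use carefully tuned curves.
    """
    lo, hi = math.log(black), math.log(1.0 + black)
    return (math.log(v + black) - lo) / (hi - lo)

# Midtones get lifted hard: a 0.5 linear value lands around 0.85,
# which is why LOG footage looks washed out before grading.
print(round(to_log_profile(0.5), 3))
```

A colorist then applies a grade (or LUT) on top of the flat image, with shadow and highlight detail still intact.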


🔥 Upcoming Events

Meet Qwen in Seoul (Dec 10): AMD’s AI Developer Meetup
AMD’s AI Developer Meetup in Seoul (Dec 10) is filling FAST. As a key partner, we’re bringing you the future of generative AI — live, hands-on, and free.

  • Dec 10 | 📍 Seoul, Aloft Gangnam
  • Free limited-edition swag for all attendees
  • Register now — spots are limited: https://luma.com/0yxjboie

What You’ll Experience:

  • Qwen-Image Technology Deep Dive
  • Korean Enterprise AI & Cloud Case Studies
  • 🎨 Hands-On Workshop: Qwen-Image × LoRA

→ Fine-tune your own LoRA with Qwen-Image

→ Train & infer using DiffSynth-Studio on AMD MI300x GPUs

→ Build custom visual models — from zero to masterpiece

Wan Muse “Heartbeat” Creative Challenge — The Shortlist Is Here
The Professional Category Shortlist for Wan Muse Season 2: “Heartbeat” is now live.
📌 Public Review Period: November 18–21, 2025
👉 View All Shortlisted Works
🔍 Found an issue? We take fairness seriously. Report violations (real-name required):

  • Not AI-generated by Wan
  • Plagiarism or copyright breach
  • Content policy violation

📩 Email: tongyiwanxiang@service.aliyun.com


📬 Want More? Stay Updated.
Every week, we bring you:

  • New model releases & upgrades
  • AI research breakthroughs
  • Open-source tools you can use today
  • Community highlights that inspire

👉 Subscribe to The Tongyi Weekly and never miss a release: https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7392460924453945345

Thank you for being part of this journey.

Tongyi Lab is a research institution under Alibaba Group dedicated to artificial intelligence and foundation models, focusing on the research, development, and innovative applications of AI models across diverse domains. Its research spans large language models (LLMs), multimodal understanding and generation, visual AIGC, speech technologies, and more.
