
Tongyi Lab

Posted on

Nov 28, 2025 | The Tongyi Weekly: Your weekly dose of cutting-edge AI from Tongyi Lab

Hello, community,

This week, research and community converged in perfect harmony.
On the global stage, our work on Gated Attention was honored with the NeurIPS 2025 Best Paper Award. And right here, in the open, we launched Z-Image: an open-source, 6-billion-parameter model that delivers top-tier image generation for everyone, everywhere.

But as always, the real magic came from you.

This week reminded us of a simple truth: Great AI isn’t built in isolation — it’s co-created.

You read our papers. You fine-tune our models. You build tools we never imagined. And you push us to be better.

👉 Subscribe to The Tongyi Weekly and never miss a release:
Subscribe Now: https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7392460924453945345


📣 Model Release & Updates

Introducing Z-Image: A High-Performance, Open, and Accessible Image Generation Model
We are pleased to introduce Z-Image, an efficient 6-billion-parameter foundation model for image generation.
Through systematic optimization, it proves that top-tier performance is achievable without relying on enormous model sizes, delivering strong results in photorealistic generation and bilingual text rendering that are comparable to leading commercial models.
We are publicly releasing two specialized models built on Z-Image: Z-Image-Turbo for generation and Z-Image-Edit for editing. The model code, weights, and an online demo are now publicly available to encourage community exploration and use. With this release, we aim to promote the development of generative models that are accessible, low-cost, and high-performance.
📄 Blog
📌 GitHub
📌 ModelScope
📌 HuggingFace
📌 Z-Image gallery
P.S. Z-Image Turbo is already #1 on Hugging Face’s trending models and Spaces. Thank you, community — you’re moving faster than we are!


📚 Research Breakthroughs

NeurIPS 2025 Best Paper Award
We are deeply honored to announce that our paper “Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free” has been awarded the NeurIPS 2025 Best Paper Award.
Reflections from the Selection Committee: “This paper represents a substantial amount of work that is possible only with access to industrial scale computing resources, and the authors’ sharing of the results of their work, which will advance the community’s understanding of attention in large language models, is highly commendable, especially in an environment where there has been a move away from open sharing of scientific results around LLMs.”
📖 Read the announcement

Qwen3-VL Technical Report Now on arXiv
The full story behind Qwen3-VL is now out on arXiv.
From pretraining to post-training, architecture to infra, data to evaluation, we’ve packed in the details for anyone building on vision-language models.

  • 3 models with >1M downloads in just over a month
  • Qwen3-VL-8B leads with 2M+ downloads
  • Built on the shoulders of Qwen2.5-VL (2,800+ citations in under 10 months!)

Whether you’re fine-tuning, deploying, or researching VLMs, this is your playbook.
📄 Read the full paper on arXiv

🧩 Ecosystem Highlights

Turn Portraits Into Cartoons: Qwen-Image-Edit-2509-Caricature-LoRA from drbaph
This LoRA from drbaph transforms input photos into sketched caricature art with exaggerated features. It’s an image-to-image model: give it a photo and it produces humorous, artistic caricatures of people and animals, with their facial features and distinguishing characteristics emphasized.
👉 Try it here

Light Restoration V2: Qwen-Image-Edit-2509-Light_restoration from dx8152
dx8152 is moving at lightning speed! The V2 update of their Light Restoration LoRA now lets you scrub lighting from any reference image to build better training pairs.
👉 Try it here

Day/Night Shift: Qwen-Edit-Loras from lividtm
Need a clean Day/Night shift? lividtm has you covered. This LoRA for Qwen-Image-Edit-2509 handles 2K resolution while keeping scene details locked. Simple trigger words, high fidelity.
👉 Try it here


📬 Want More? Stay Updated.

Every week, we bring you:
● New model releases & upgrades
● AI research breakthroughs
● Open-source tools you can use today
● Community highlights that inspire

👉 Subscribe to The Tongyi Weekly and never miss a release.
Subscribe Now: https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7392460924453945345

Thank you for being part of this journey.

Tongyi Lab is a research institution under Alibaba Group dedicated to artificial intelligence and foundation models, focusing on the research, development, and innovative applications of AI models across diverse domains. Its research spans large language models (LLMs), multimodal understanding and generation, visual AIGC, speech technologies, and more.
