
Tongyi Lab

Posted on Nov 7, 2025

The Tongyi Weekly: Your weekly dose of cutting-edge AI from Tongyi Lab

Hello, community!

We’re Tongyi Lab — the AI research institute under Alibaba Group, and the team behind Qwen, Wan, Tongyi Fun, and a growing ecosystem of models and frameworks loved by millions of developers worldwide.

From this week forward, we'll be sharing the latest updates and breakthroughs from Tongyi, bringing them directly from our lab to your desk every week.

👉 Subscribe to The Tongyi Weekly and never miss a release:

Subscribe Now


Welcome to this week's update. Over the past week, our open-source projects, including Qwen and AgentScope, have shipped a series of exciting releases.

📣 Model Release & Updates

Introducing Qwen3-Max-Thinking-Preview: An Early Preview of Qwen3-Max-Thinking

We're excited to announce that Qwen3-Max-Thinking-Preview is now available on Qwen Chat! This is an early preview of Qwen3-Max-Thinking.

Even at this intermediate stage, the model demonstrates remarkable potential, scoring 100% on challenging reasoning benchmarks like AIME 2025 and HMMT when augmented with tool use and scaled test-time compute.

Try it in Qwen Chat or via the Alibaba Cloud API:

Qwen Chat
Alibaba Cloud API (enable_thinking=True)
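
For reference, here's a minimal sketch of calling a thinking model through Alibaba Cloud's OpenAI-compatible endpoint with enable_thinking turned on. The model identifier below is a placeholder and the reasoning_content field may vary; check the Alibaba Cloud Model Studio docs for the exact names.

```python
# Minimal sketch (not official documentation): streaming a reply from a Qwen
# thinking model via Alibaba Cloud's OpenAI-compatible endpoint.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

stream = client.chat.completions.create(
    model="qwen3-max-thinking-preview",  # placeholder model id (assumption)
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
    extra_body={"enable_thinking": True},  # switch the thinking mode on
    stream=True,  # thinking output is typically consumed as a stream
)

for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    # Reasoning tokens (if exposed) arrive separately from the final answer.
    if getattr(delta, "reasoning_content", None):
        print(delta.reasoning_content, end="", flush=True)
    elif delta.content:
        print(delta.content, end="", flush=True)
```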

AgentScope Updates: New Agents, Enhanced Features, and More

This week, we've upgraded AgentScope - our open-source framework for building agentic applications - with exciting new samples and features, making it easier than ever to build, deploy, and scale intelligent agent systems:

**New Agent Implementations:** we open-sourced two new, powerful agent applications built on AgentScope.

**Core Capabilities Expansion:**

**AgentScope-Samples:**

We introduced a curated collection of ready-to-use agent implementations and full-stack applications built with AgentScope: https://github.com/agentscope-ai/agentscope-samples

**Runtime Upgrades:**

We've upgraded the AgentScope Runtime to make it easier to deploy and interact with agents: App-like Agent Deployment, Python SDK, and GUI & Desktop-enabled Sandboxes: https://github.com/agentscope-ai/agentscope-runtime

🧩 Ecosystem Highlights

Qwen3-VL Lands on llama.cpp
Qwen3-VL, our state-of-the-art vision-language model, is now available in llama.cpp! You can run this powerful model entirely on your personal device, with native support for CPU, CUDA, Metal, Vulkan, and other backends.

We’ve also released GGUF weights for all variants—from 2B up to 235B.

Download & explore:

Hugging Face: https://huggingface.co/collections/Qwen/qwen3-vl
ModelScope: https://modelscope.cn/collections/Qwen3-VL-5c7a94c8cb144b
PR: https://github.com/ggml-org/llama.cpp/pull/16780
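
If you'd rather script the download than click through, here's a minimal sketch using huggingface_hub. The repo id and file name are assumptions for illustration; browse the collection above for the exact variant and quantization you want.

```python
# Minimal sketch: fetch one GGUF variant of Qwen3-VL from Hugging Face.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="Qwen/Qwen3-VL-2B-Instruct-GGUF",     # assumed repo id
    filename="Qwen3-VL-2B-Instruct-Q4_K_M.gguf",  # assumed quantized file
)
print(f"GGUF weights downloaded to: {gguf_path}")
# The file can then be loaded with llama.cpp's multimodal tooling, together
# with the matching mmproj file for the vision encoder.
```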

Qwen3-Max-Preview Enters the Top Tier of the Arena Expert Leaderboard

Qwen3-Max-Preview continues to rank near the top of the new Arena Expert Leaderboard, showcasing its ability to handle challenging prompts from real users.

Arena Expert is a new LMArena evaluation framework that identifies the toughest, most expert-level prompts from real users and powers a dedicated Expert leaderboard.

Check out the Arena Expert Leaderboard: https://lmarena.ai/leaderboard

✨ Community Spotlights

Qwen-Edit LoRA Model Hits Top 5 on Hugging Face - from Developer @dx8152

Shoutout to developer @dx8152! The LoRA model Qwen-Edit-2509-Multiple-angles, built atop Qwen-Image-Edit-2509, surged to #5 on Hugging Face’s download chart—an inspiring example of what’s possible when foundational models empower creators.

Download Link: https://huggingface.co/dx8152/Qwen-Edit-2509-Multiple-angles

Demo: Qwen-Edit-2509-Multiple-angles
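
For anyone who wants to try the LoRA programmatically, here's a minimal sketch assuming the base model is usable through diffusers' DiffusionPipeline and that the LoRA loads via load_lora_weights; the prompt, image, and settings are illustrative only.

```python
# Minimal sketch: applying the community "multiple angles" LoRA on top of
# Qwen-Image-Edit-2509 (class resolution, kwargs, and settings may differ).
import torch
from diffusers import DiffusionPipeline
from PIL import Image

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509", torch_dtype=torch.bfloat16
).to("cuda")

# Load the LoRA released by @dx8152.
pipe.load_lora_weights("dx8152/Qwen-Edit-2509-Multiple-angles")

source = Image.open("product_photo.png").convert("RGB")  # your input image
result = pipe(
    image=source,
    prompt="Rotate the camera 45 degrees to the left",  # example instruction
    num_inference_steps=40,
).images[0]
result.save("rotated_view.png")
```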

📬 Want More? Stay Updated.

This is just one week of what’s coming.

Every week, we bring you:

New model releases & upgrades
AI research breakthroughs
Open-source tools you can use today
Community highlights that inspire

👉 Subscribe to The Tongyi Weekly and never miss a release:

Subscribe Now

Thank you for being part of this journey.

Tongyi Lab is a research institution under Alibaba Group dedicated to artificial intelligence and foundation models, focusing on the research, development, and innovative applications of AI models across diverse domains. Its research spans large language models (LLMs), multimodal understanding and generation, visual AIGC, speech technologies, and more.
