Three stories defined the past week: Anthropic shipped Claude Opus 4.7, Moonshot open-sourced Kimi K2.6 with 300-agent swarms, and Amazon committed another $25 billion to Anthropic, which in turn pledged more than $100 billion in AWS spending. Here is what you need to know.
AI Coding Tools: Opus 4.7 Ships With a 1M Context Window
Anthropic released Claude Opus 4.7 on April 16, a new flagship model focused on agentic coding and long-horizon work. The model scores 87.6% on SWE-bench Verified, up from Opus 4.6's 80.8%, and 64.3% on SWE-bench Pro. It ships with a full 1-million-token context window and high-resolution image support for charts and dense documents.
The model landed across major platforms the same week. Claude Opus 4.7 arrived on Amazon Bedrock on launch day in four regions, with up to 10,000 requests per minute per account. GitHub Copilot began rolling out Opus 4.7 to Copilot Pro+ users with a 7.5x premium request multiplier until April 30. The model is replacing both Opus 4.5 and Opus 4.6 in the Copilot model picker.
Claude Code shipped Opus 4.7 the same day with new controls. The update added an "xhigh" effort level between high and max, a /ultrareview multi-agent code review command, and Auto mode for Max subscribers. Anthropic also launched Claude Design, a new Anthropic Labs product for building prototypes, slides, and one-pagers collaboratively with the model. Pricing stays at $5 per million input tokens and $25 per million output tokens.
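At those rates, per-request cost is easy to estimate. A minimal sketch, using the published prices; the token counts in the example are illustrative:

```python
# Cost estimate at Opus 4.7's stated rates: $5 per million input
# tokens, $25 per million output tokens.
INPUT_RATE = 5.00 / 1_000_000
OUTPUT_RATE = 25.00 / 1_000_000

def opus47_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the stated rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Filling the full 1M-token context and generating a 50K-token reply:
print(f"${opus47_cost(1_000_000, 50_000):.2f}")  # $6.25
```

Long-context agentic runs are dominated by the input side here: the full context window costs $5.00 before a single output token is generated.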
AI Models: Kimi K2.6 Opens the Door to 12-Hour Agent Runs
Moonshot AI released Kimi K2.6 on April 20 as an open-source agentic model built for long-horizon coding. The model has 1 trillion total parameters in a Mixture-of-Experts architecture, with 32 billion parameters active per forward pass. It supports text, image, and video input, a 256K context window, and thinking and non-thinking modes behind an OpenAI-compatible API.
The headline claim is stamina. Kimi K2.6 targets 12-hour autonomous coding sessions and agent swarms that scale to 300 sub-agents across 4,000 coordinated steps. On benchmarks, Moonshot claims SWE-bench Pro at 58.6, SWE-bench Multilingual at 76.7, and BrowseComp at 83.2. The model matches or beats GPT-5.4 and Claude Opus 4.6 on several open-source leaderboards.
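The swarm pattern itself is a familiar fan-out/fan-in: a coordinator slices the work across sub-agents and aggregates their results. A toy sketch of the shape, using Moonshot's stated numbers; this illustrates the pattern only and does not reflect Moonshot's actual orchestration internals, which are not described here:

```python
from concurrent.futures import ThreadPoolExecutor

TOTAL_STEPS = 4000    # coordinated steps, per Moonshot's claim
SWARM_WIDTH = 300     # sub-agent count, per Moonshot's claim

def run_subagent(step_batch: list) -> int:
    # Stand-in for one sub-agent working through its slice of steps.
    return len(step_batch)

# Stride the step list so each sub-agent gets a roughly even share.
steps = list(range(TOTAL_STEPS))
batches = [steps[i::SWARM_WIDTH] for i in range(SWARM_WIDTH)]

# Fan out, then fan back in by summing completed step counts.
with ThreadPoolExecutor(max_workers=32) as pool:
    completed = sum(pool.map(run_subagent, batches))

print(completed)  # 4000
```

The hard part in a real swarm is not the fan-out but the coordination: keeping 300 sub-agents consistent over thousands of steps, which is exactly what the 12-hour-session claim is about.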
K2.6 is available immediately on Kimi.com, the developer API, Kimi Code CLI, Ollama, and Hugging Face. Day-one integrations cover Kilo Code, VS Code and JetBrains extensions, OpenClaw, Tencent CodeBuddy, and Genspark. The MIT-derived license allows commercial use and redistribution, a direct challenge to closed-source frontier labs.
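Because the API is OpenAI-compatible, calling K2.6 looks like any chat-completions request. A minimal sketch of the request body; the model id and temperature value are assumptions for illustration, so check Moonshot's API docs for the actual names:

```python
import json

# OpenAI-compatible chat request body for Kimi K2.6.
payload = {
    "model": "kimi-k2.6",  # assumed model id; verify against the docs
    "messages": [
        {"role": "user", "content": "Refactor this module for clarity."}
    ],
    "temperature": 0.2,
}

# This is what would be POSTed to the /chat/completions endpoint.
body = json.dumps(payload)
print(body)
```

Compatibility at this layer is why the day-one integration list is so long: any tool that already speaks the OpenAI request shape can swap in K2.6 by changing the base URL and model id.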
AI Infrastructure: AWS Interconnect Reaches GA and Amazon Adds $25B to Anthropic
AWS Interconnect reached general availability on April 20 with two new capabilities. AWS Interconnect Multicloud provides Layer 3 private connections between AWS VPCs and other clouds, starting with Google Cloud, with Azure and OCI coming later in 2026. Traffic flows over the AWS global backbone with built-in MACsec encryption, never crossing the public internet. AWS also published the Interconnect specification on GitHub under Apache 2.0, so any cloud provider can become a partner.
Amazon announced a $25 billion investment in Anthropic on April 20, on top of the $8 billion already committed. The deal includes $5 billion immediately, with up to $20 billion tied to commercial milestones. Anthropic committed to spending more than $100 billion on AWS over 10 years, securing up to 5 gigawatts of Trainium chip capacity. One gigawatt is scheduled to come online this year using Trainium2 and Trainium3.
The structure mirrors the $50 billion Amazon-OpenAI deal from February. Anthropic is now valued at $380 billion, with annualized revenue climbing from $9 billion at the end of 2025 to more than $30 billion. Enterprise customers spending at least $1 million annually have doubled since February, crossing 1,000 accounts.
Standards and Protocols: Interconnect Spec Goes Open
The AWS Interconnect specification going to GitHub under Apache 2.0 is the standards story of the week. The move gives any cloud provider a published path to join the private connectivity mesh without negotiating bilateral deals. For AI workloads moving data between model training clusters in one cloud and inference infrastructure in another, the alternative has been either the public internet or expensive dedicated circuits.
The broader pattern is that hyperscale cloud providers are open-sourcing infrastructure specs to lock in network effects. Trainium chip access is exclusive, but the connectivity layer is open. This is the same playbook the Linux Foundation's Agentic AI Foundation uses for MCP and A2A: open standards for the plumbing, proprietary value on top.
MCP and A2A also saw continued adoption this week. Claude Opus 4.7 ships with both protocols built in, and Kimi K2.6 supports tool calls and OpenAI-compatible APIs that slot into MCP-aware agent stacks. The layered architecture is holding: MCP handles agent-to-tool connections, A2A handles agent-to-agent coordination, and new releases from both open and frontier labs are landing with support for each by default.
Resources to Go Further
The AI landscape changes fast. Here are tools and resources to help you keep pace.
Try Dremio Free — Experience agentic analytics and an Apache Iceberg-powered lakehouse. Start your free trial
Learn Agentic AI with Data — Dremio's agentic analytics features let your AI agents query and act on live data. Explore Dremio Agentic AI
Join the Community — Connect with data engineers and AI practitioners building on open standards. Join the Dremio Developer Community
Book: The 2026 Guide to AI-Assisted Development — Covers prompt engineering, agent workflows, MCP, evaluation, security, and career paths. Get it on Amazon
Book: Using AI Agents for Data Engineering and Data Analysis — A practical guide to Claude Code, Google Antigravity, OpenAI Codex, and more. Get it on Amazon