<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ganesh Joshi</title>
    <description>The latest articles on DEV Community by Ganesh Joshi (@ganeshjoshi).</description>
    <link>https://dev.to/ganeshjoshi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3812945%2F5c98e077-4bc4-4521-8bf1-006bca4726f1.png</url>
      <title>DEV Community: Ganesh Joshi</title>
      <link>https://dev.to/ganeshjoshi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ganeshjoshi"/>
    <language>en</language>
    <item>
      <title>OpenAI API from Next.js Route Handlers: Keys, Streaming, and Safety</title>
      <dc:creator>Ganesh Joshi</dc:creator>
      <pubDate>Sun, 19 Apr 2026 09:53:37 +0000</pubDate>
      <link>https://dev.to/ganeshjoshi/openai-api-from-nextjs-route-handlers-keys-streaming-and-safety-2dle</link>
      <guid>https://dev.to/ganeshjoshi/openai-api-from-nextjs-route-handlers-keys-streaming-and-safety-2dle</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was created with AI assistance and reviewed for accuracy before publishing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;OpenAI API&lt;/strong&gt; powers many coding assistants and apps. The &lt;a href="https://platform.openai.com/docs" rel="noopener noreferrer"&gt;OpenAI Platform docs&lt;/a&gt; cover authentication, models, and the APIs you may integrate against, such as &lt;strong&gt;Chat Completions&lt;/strong&gt; and the newer &lt;strong&gt;Responses&lt;/strong&gt; API.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Route Handlers
&lt;/h2&gt;

&lt;p&gt;Never expose &lt;strong&gt;secret keys&lt;/strong&gt; in the browser. Call OpenAI from &lt;strong&gt;Next.js Route Handlers&lt;/strong&gt;, Server Actions, or your backend so keys live in environment variables on the server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Streaming
&lt;/h2&gt;

&lt;p&gt;For chat UIs, stream tokens to the client over SSE or chunked responses. The SDK examples show how to forward streams safely.&lt;/p&gt;
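&lt;p&gt;A minimal sketch of that pattern, assuming the Chat Completions REST endpoint and web-standard &lt;code&gt;fetch&lt;/code&gt;; the route path, model string, and payload fields are illustrative, so check the current reference before copying:&lt;/p&gt;

```typescript
// Hypothetical app/api/chat/route.ts. Endpoint URL, model name, and payload
// shape follow the Chat Completions docs; verify against the current reference.
export async function POST(req: Request) {
  const { messages } = await req.json();
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The key stays in a server-side env var and never reaches the browser.
      Authorization: "Bearer " + process.env.OPENAI_API_KEY,
    },
    body: JSON.stringify({ model: "gpt-4o-mini", messages, stream: true }),
  });
  // Forward the upstream SSE body unchanged; the client parses "data:" lines.
  return new Response(upstream.body, {
    headers: { "Content-Type": "text/event-stream" },
  });
}

// Client-side helper: pull the JSON payloads out of one SSE chunk.
function parseSseChunk(chunk: string) {
  const payloads: string[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (trimmed.startsWith("data:")) {
      const data = trimmed.slice(5).trim();
      if (data !== "[DONE]") payloads.push(data);
    }
  }
  return payloads;
}
```

&lt;p&gt;The handler only proxies bytes; keeping parsing on the client keeps the server code free of model-specific logic.&lt;/p&gt;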

&lt;h2&gt;
  
  
  Safety and policy
&lt;/h2&gt;

&lt;p&gt;Apply OpenAI’s &lt;strong&gt;usage policies&lt;/strong&gt; and your own content rules. Log errors without logging user secrets. Rate-limit per user to control cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical takeaway
&lt;/h2&gt;

&lt;p&gt;Pin SDK versions. Re-read release notes when OpenAI deprecates models or changes API shapes.&lt;/p&gt;

</description>
      <category>openai</category>
      <category>gpt</category>
      <category>nextjs</category>
      <category>api</category>
    </item>
    <item>
      <title>Gemini API: Streaming Text in JavaScript for Apps and Tools</title>
      <dc:creator>Ganesh Joshi</dc:creator>
      <pubDate>Sun, 12 Apr 2026 09:51:54 +0000</pubDate>
      <link>https://dev.to/ganeshjoshi/gemini-api-streaming-text-in-javascript-for-apps-and-tools-4gm5</link>
      <guid>https://dev.to/ganeshjoshi/gemini-api-streaming-text-in-javascript-for-apps-and-tools-4gm5</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was created with AI assistance and reviewed for accuracy before publishing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gemini&lt;/strong&gt; is Google’s multimodal model family for developers. For app builders, Google publishes guides under &lt;a href="https://ai.google.dev/" rel="noopener noreferrer"&gt;AI for developers&lt;/a&gt; (consumer and API access patterns) and enterprise paths via &lt;strong&gt;Google Cloud Vertex AI&lt;/strong&gt;. Model names, regions, and pricing change; always read the page that matches your account type.&lt;/p&gt;

&lt;h2&gt;
  
  
  Streaming
&lt;/h2&gt;

&lt;p&gt;Chat-style UIs usually consume &lt;strong&gt;server-sent&lt;/strong&gt; or &lt;strong&gt;streamed&lt;/strong&gt; chunks. The official JavaScript quickstarts show how to iterate stream parts. Handle &lt;strong&gt;errors&lt;/strong&gt; and &lt;strong&gt;finish reasons&lt;/strong&gt; explicitly so your UI does not hang on partial failures.&lt;/p&gt;
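&lt;p&gt;A minimal consumer sketch: the chunk shape here (a &lt;code&gt;text&lt;/code&gt; field plus an optional &lt;code&gt;finishReason&lt;/code&gt;) is an assumption that mirrors the official streaming iterator, so verify field names against the current quickstart:&lt;/p&gt;

```typescript
// Drain a streamed response chunk by chunk, recording the finish reason.
// The chunk fields (text, finishReason) are illustrative; check the SDK docs.
async function collectStream(stream: any) {
  let fullText = "";
  let finishReason = null;
  try {
    for await (const chunk of stream) {
      fullText += chunk.text; // append each streamed part to the UI
      if (chunk.finishReason) {
        finishReason = chunk.finishReason;
      }
    }
  } catch (err) {
    // Surface partial failures instead of letting the UI hang on a dead stream.
    return { fullText, finishReason: "ERROR", error: String(err) };
  }
  return { fullText, finishReason };
}
```

&lt;p&gt;Returning the partial text on error lets the UI show what arrived and offer a retry, rather than discarding everything.&lt;/p&gt;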

&lt;h2&gt;
  
  
  Keys and quotas
&lt;/h2&gt;

&lt;p&gt;API keys are secrets. Store them in server environment variables, not client bundles. Monitor quota and rate limits in the cloud console for your project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical takeaway
&lt;/h2&gt;

&lt;p&gt;Start from Google’s current quickstart for &lt;code&gt;@google/generative-ai&lt;/code&gt; or the Vertex path your company uses. Re-verify model strings when Google deprecates older names.&lt;/p&gt;

</description>
      <category>gemini</category>
      <category>google</category>
      <category>ai</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Git Worktrees for Parallel AI Agent and Human Branches</title>
      <dc:creator>Ganesh Joshi</dc:creator>
      <pubDate>Sun, 05 Apr 2026 09:48:37 +0000</pubDate>
      <link>https://dev.to/ganeshjoshi/git-worktrees-for-parallel-ai-agent-and-human-branches-5b8o</link>
      <guid>https://dev.to/ganeshjoshi/git-worktrees-for-parallel-ai-agent-and-human-branches-5b8o</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was created with AI assistance and reviewed for accuracy before publishing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trending in 2026:&lt;/strong&gt; “agent runs” are normal. You might kick off a long &lt;strong&gt;AI coding agent&lt;/strong&gt; job on a feature branch while you fix a production bug elsewhere. &lt;a href="https://git-scm.com/docs/git-worktree" rel="noopener noreferrer"&gt;Git worktrees&lt;/a&gt; let you keep &lt;strong&gt;two checkouts&lt;/strong&gt; of the same repository without a second clone, sharing objects on disk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why agents make worktrees matter
&lt;/h2&gt;

&lt;p&gt;Agent runs often leave dirty trees, build artifacts, or lockfiles mid-flight. Stashing works but is easy to forget. A second worktree at &lt;code&gt;../my-repo-hotfix&lt;/code&gt; checked out to &lt;code&gt;main&lt;/code&gt; keeps your mental model clean.&lt;/p&gt;

&lt;h2&gt;
  
  
  Commands that matter
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;git worktree add ../path branch-name&lt;/code&gt; creates a new directory. &lt;code&gt;git worktree list&lt;/code&gt; shows entries. &lt;code&gt;git worktree remove&lt;/code&gt; cleans up. You cannot check out the &lt;strong&gt;same branch&lt;/strong&gt; in two worktrees; Git refuses so that two working trees never move the same branch ref out from under each other.&lt;/p&gt;
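&lt;p&gt;A throwaway demo of the flow; the paths and branch name are illustrative:&lt;/p&gt;

```shell
# Create a temp repo, branch for the agent, and add a sibling worktree.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "init"
git branch agent-run                                # branch the agent will own
git worktree add ../"$(basename "$repo")-agent" agent-run
git worktree list                                   # two checkouts, one object store
```

&lt;p&gt;Both directories share one object database, so the second checkout is cheap compared to a full clone.&lt;/p&gt;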

&lt;h2&gt;
  
  
  node_modules per tree
&lt;/h2&gt;

&lt;p&gt;Each worktree has its own working files. Run &lt;code&gt;npm install&lt;/code&gt; or &lt;code&gt;pnpm install&lt;/code&gt; in each. Document that in team onboarding so CI and laptops behave predictably.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical takeaway
&lt;/h2&gt;

&lt;p&gt;Use worktrees when you &lt;strong&gt;juggle human work and agent work&lt;/strong&gt; on different branches. For authoritative behavior, rely on &lt;code&gt;git-worktree&lt;/code&gt; manual pages.&lt;/p&gt;

</description>
      <category>git</category>
      <category>ai</category>
      <category>productivity</category>
      <category>devtools</category>
    </item>
    <item>
      <title>Building MCP Tool Servers with TypeScript and Zod in 2026</title>
      <dc:creator>Ganesh Joshi</dc:creator>
      <pubDate>Sun, 29 Mar 2026 09:46:05 +0000</pubDate>
      <link>https://dev.to/ganeshjoshi/building-mcp-tool-servers-with-typescript-and-zod-in-2026-5dhn</link>
      <guid>https://dev.to/ganeshjoshi/building-mcp-tool-servers-with-typescript-and-zod-in-2026-5dhn</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was created with AI assistance and reviewed for accuracy before publishing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trending in 2026:&lt;/strong&gt; &lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol (MCP)&lt;/a&gt; is the de facto way teams expose &lt;strong&gt;tools&lt;/strong&gt; (files, APIs, databases) to AI clients in a consistent way. Anthropic open-sourced MCP; the ecosystem now spans editors, IDEs, and automation. Always read the &lt;strong&gt;current specification&lt;/strong&gt; and SDK docs for your version rather than third-party summaries alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  What an MCP server does
&lt;/h2&gt;

&lt;p&gt;An MCP &lt;strong&gt;server&lt;/strong&gt; advertises &lt;strong&gt;tools&lt;/strong&gt; with JSON Schema-shaped inputs. A compatible &lt;strong&gt;client&lt;/strong&gt; (for example Claude Desktop, Cursor, or custom apps using the SDK) lists tools, invokes them, and passes results back to the model. That replaces one-off plugins per vendor for many teams.&lt;/p&gt;
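&lt;p&gt;As a sketch, a server might advertise a tool like this; the field names (&lt;code&gt;name&lt;/code&gt;, &lt;code&gt;description&lt;/code&gt;, &lt;code&gt;inputSchema&lt;/code&gt;) follow the spec’s tool shape, but verify them against the spec revision and SDK version you target:&lt;/p&gt;

```typescript
// Illustrative entry from a tools/list result: a tool name, a human-readable
// description, and a JSON Schema describing the expected arguments.
const readFileTool = {
  name: "read_file",
  description: "Read a UTF-8 text file from the allowed project root",
  inputSchema: {
    type: "object",
    properties: {
      path: { type: "string", description: "Path relative to the project root" },
    },
    required: ["path"],
  },
};
```

&lt;p&gt;Clients use the schema both to construct calls and to show the model what arguments are legal, so keep descriptions precise.&lt;/p&gt;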

&lt;h2&gt;
  
  
  Why TypeScript and Zod
&lt;/h2&gt;

&lt;p&gt;Define tool inputs with schemas you validate at runtime. &lt;strong&gt;Zod&lt;/strong&gt; pairs well with TypeScript: parse once, then call your domain code. Mistyped arguments from models are common; fail with clear errors instead of throwing deep in business logic.&lt;/p&gt;
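&lt;p&gt;A dependency-free sketch of that validate-then-dispatch pattern; with Zod the whole parser collapses to roughly &lt;code&gt;z.object({ path: z.string().min(1), maxBytes: z.number().positive() }).parse(input)&lt;/code&gt;, and the tool and field names here are hypothetical:&lt;/p&gt;

```typescript
// Reject malformed model-supplied arguments at the boundary with a clear
// error, so domain code only ever sees well-typed input.
type ReadFileArgs = { path: string; maxBytes: number };

function parseReadFileArgs(input: unknown): ReadFileArgs {
  const candidate = input as ReadFileArgs;
  if (typeof candidate !== "object" || candidate === null) {
    throw new Error("tool arguments must be an object");
  }
  if (typeof candidate.path !== "string" || candidate.path === "") {
    throw new Error("path must be a non-empty string");
  }
  if (typeof candidate.maxBytes !== "number" || !(candidate.maxBytes > 0)) {
    throw new Error("maxBytes must be a positive number");
  }
  // Return a fresh object so extra, unvalidated fields are dropped.
  return { path: candidate.path, maxBytes: candidate.maxBytes };
}
```

&lt;p&gt;The error messages travel back to the model, which often self-corrects on the next call; vague errors deep in business logic do not give it that chance.&lt;/p&gt;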

&lt;h2&gt;
  
  
  Security mindset
&lt;/h2&gt;

&lt;p&gt;Tools run with whatever privileges you give the server process. &lt;strong&gt;Least privilege:&lt;/strong&gt; read-only repos where possible, scoped API tokens, no arbitrary shell from unchecked strings. MCP does not remove the need for authz reviews.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical takeaway
&lt;/h2&gt;

&lt;p&gt;Start from official MCP quickstarts, add Zod validation per tool, and log invocations for audits. Treat the server like any production API: rate limits, secrets rotation, and monitoring.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further reading
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol documentation&lt;/a&gt; and the SDK repositories linked from the official site.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>typescript</category>
      <category>ai</category>
      <category>zod</category>
    </item>
    <item>
      <title>Cursor Agent and Composer: A Practical Workflow for Daily Coding</title>
      <dc:creator>Ganesh Joshi</dc:creator>
      <pubDate>Sun, 22 Mar 2026 09:39:55 +0000</pubDate>
      <link>https://dev.to/ganeshjoshi/cursor-agent-and-composer-a-practical-workflow-for-daily-coding-45k1</link>
      <guid>https://dev.to/ganeshjoshi/cursor-agent-and-composer-a-practical-workflow-for-daily-coding-45k1</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was created with AI assistance and reviewed for accuracy before publishing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cursor&lt;/strong&gt; is an AI-native editor built on VS Code. It exposes &lt;strong&gt;Composer&lt;/strong&gt; for multi-file edits and &lt;strong&gt;Agent&lt;/strong&gt; for longer, tool-using runs that search and change code across your repo. Product details and defaults change frequently; treat &lt;a href="https://cursor.com/docs" rel="noopener noreferrer"&gt;Cursor’s documentation&lt;/a&gt; and &lt;a href="https://cursor.com/changelog" rel="noopener noreferrer"&gt;changelog&lt;/a&gt; as the source of truth, not third-party listicles.&lt;/p&gt;

&lt;h2&gt;
  
  
  Composer versus Agent at a glance
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Composer&lt;/strong&gt; targets coherent changes across files with you in the loop: good for refactors and feature-sized edits when you can review diffs quickly. &lt;strong&gt;Agent&lt;/strong&gt; is aimed at more autonomous loops that use tools (search, terminal, etc.) inside Cursor’s harness. The exact capabilities and model choices are documented per release on Cursor’s site.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical habits
&lt;/h2&gt;

&lt;p&gt;Keep &lt;strong&gt;tests and types&lt;/strong&gt; green: agents are faster at writing code than at guessing your org’s conventions. Use &lt;strong&gt;branch protection&lt;/strong&gt; and CI the same way you would for human contributors. A narrow task (“update call sites for the renamed API”) beats a vague prompt (“make it better”).&lt;/p&gt;

&lt;h2&gt;
  
  
  Privacy and data
&lt;/h2&gt;

&lt;p&gt;Before you ship proprietary code through cloud features, read Cursor’s current &lt;strong&gt;privacy&lt;/strong&gt; and &lt;strong&gt;model&lt;/strong&gt; pages so you know what leaves your machine. Enterprise policies may restrict certain modes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical takeaway
&lt;/h2&gt;

&lt;p&gt;Cursor is a force multiplier when paired with review, tests, and clear tasks. Re-read official docs when you upgrade versions, because behavior and pricing move often.&lt;/p&gt;

</description>
      <category>cursor</category>
      <category>ai</category>
      <category>ide</category>
      <category>productivity</category>
    </item>
    <item>
      <title>What Happened When Cursor Refused to Write More Code (And What It Shows About AI Limits)</title>
      <dc:creator>Ganesh Joshi</dc:creator>
      <pubDate>Fri, 20 Mar 2026 11:34:49 +0000</pubDate>
      <link>https://dev.to/ganeshjoshi/what-happened-when-cursor-refused-to-write-more-code-and-what-it-shows-about-ai-limits-3o0b</link>
      <guid>https://dev.to/ganeshjoshi/what-happened-when-cursor-refused-to-write-more-code-and-what-it-shows-about-ai-limits-3o0b</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was created with AI assistance and reviewed for accuracy before publishing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In March 2025, a developer using Cursor AI for a racing game hit an unexpected wall. After about an hour of "vibe coding" and roughly 750 to 800 lines of generated code, Cursor stopped and told the user to write the logic himself. The incident went viral on Hacker News and was covered by Ars Technica and TechCrunch. Here's what actually happened, drawn from the original bug report and press coverage, with no invented details.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the developer reported
&lt;/h2&gt;

&lt;p&gt;The user, posting as "janswist" on Cursor's forum, described working on a racing game using the Pro Trial. The assistant was generating code for skid mark fade effects. After approximately 750 to 800 lines, Cursor stopped and responded:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Cursor also added: "Generating code for others can lead to dependency and reduced learning opportunities." janswist filed a &lt;a href="https://forum.cursor.com/t/cursor-told-me-i-should-learn-coding-instead-of-asking-it-to-generate-it-limit-of-800-locs/61132" rel="noopener noreferrer"&gt;bug report&lt;/a&gt; titled "Cursor told me I should learn coding instead of asking it to generate it." The post went viral and was later covered by &lt;a href="https://arstechnica.com/ai/2025/03/ai-coding-assistant-refuses-to-write-code-tells-user-to-learn-programming-instead/" rel="noopener noreferrer"&gt;Ars Technica&lt;/a&gt; and &lt;a href="https://techcrunch.com/2025/03/14/ai-coding-assistant-cursor-reportedly-tells-a-vibe-coder-to-write-his-own-damn-code/" rel="noopener noreferrer"&gt;TechCrunch&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we don't know
&lt;/h2&gt;

&lt;p&gt;The cause is unclear. janswist suspected a hard limit around 750–800 lines, but another forum user said they had never seen this behavior and had files with 1,500+ lines. Someone suggested using Cursor's agent integration for larger projects. Anysphere, Cursor's maker, could not be reached for comment by the press. Ars Technica concluded the behavior "appears to be a truly unintended consequence" rather than a deliberate policy. Without an official explanation, we can only report what was observed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The vibe coding context
&lt;/h2&gt;

&lt;p&gt;"Vibe coding" is a term coined by Andrej Karpathy for writing code by describing what you want in natural language and accepting AI suggestions without always understanding the implementation. Cursor is built for this workflow. The refusal contradicted that expectation: the tool told the user to understand and maintain the code himself. Whether that was a bug, a limit, or a safeguard remains unknown.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Stack Overflow comparison came up
&lt;/h2&gt;

&lt;p&gt;On Hacker News and Reddit, people pointed out that the refusal resembled typical Stack Overflow advice: encourage newcomers to solve problems themselves instead of handing them ready-made code. LLMs are trained on data that includes Stack Overflow and GitHub, so they can adopt that tone. That explanation is speculative, but it fits the pattern. Either way, the incident shows that AI assistants can refuse in ways that feel familiar to developers used to forum culture.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it means if you use Cursor
&lt;/h2&gt;

&lt;p&gt;If you rely on Cursor or similar tools, it's worth knowing that refusals can happen even when you expect more output. The 750–800 line theory is unconfirmed; others have pushed past that. The practical takeaway: if you hit a refusal, try breaking the work into smaller chunks, switching to agent mode if available, or filing a report like janswist did. The incident also underscores that AI coding tools are still inconsistent. Speed and convenience come with occasional friction that no one has fully explained yet.&lt;/p&gt;

</description>
      <category>cursor</category>
      <category>ai</category>
      <category>vibecoding</category>
      <category>developers</category>
    </item>
    <item>
      <title>Claude Code Review: How AI Now Catches Bugs Before You Ship</title>
      <dc:creator>Ganesh Joshi</dc:creator>
      <pubDate>Fri, 20 Mar 2026 11:34:19 +0000</pubDate>
      <link>https://dev.to/ganeshjoshi/claude-code-review-how-ai-now-catches-bugs-before-you-ship-1in4</link>
      <guid>https://dev.to/ganeshjoshi/claude-code-review-how-ai-now-catches-bugs-before-you-ship-1in4</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was created with AI assistance and reviewed for accuracy before publishing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Anthropic launched Code Review for Claude Code on March 9, 2025. It's a multi-agent system that runs on every pull request, finds bugs humans often skim over, and posts findings directly on the PR. If your team ships more code than reviewers can keep up with, Claude Code Review is worth knowing about. Here's how it works, what the numbers say, and how to get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why code review became a bottleneck
&lt;/h2&gt;

&lt;p&gt;AI coding tools increased output sharply. At Anthropic, code written per engineer grew about 200% in the last year. Human review capacity did not. The result: many PRs get a quick skim instead of a deep pass. Before Code Review, only 16% of Anthropic's PRs received substantive feedback. The rest were rubber-stamped or lightly glanced at.&lt;/p&gt;

&lt;p&gt;The gap is real for most teams. Developers are stretched thin, PRs pile up, and reviewers focus on the biggest or riskiest changes. Small changes and "obvious" fixes often go through with minimal scrutiny. That's where bugs hide.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://claude.com/blog/code-review" rel="noopener noreferrer"&gt;Anthropic's announcement&lt;/a&gt; frames Code Review as the reviewer you can run on every PR: built for depth, not speed.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Claude Code Review works
&lt;/h2&gt;

&lt;p&gt;When a PR is opened, Code Review starts a team of agents. They run in parallel, each looking for different kinds of issues. The agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search independently for different error types (logic bugs, type mismatches, security issues, etc.)&lt;/li&gt;
&lt;li&gt;Adjust depth based on PR size and complexity (small PRs get a lighter pass, large ones a deeper analysis)&lt;/li&gt;
&lt;li&gt;Cross-check findings to reduce false positives&lt;/li&gt;
&lt;li&gt;Rank issues by severity&lt;/li&gt;
&lt;li&gt;Consider the full codebase, not just the diff&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results show up as a single overview comment on the PR plus inline comments on specific lines. The whole run takes around 20 minutes on average, scaling with PR size.&lt;/p&gt;

&lt;p&gt;Code Review does not approve PRs. That stays with humans. It surfaces issues so reviewers can focus on the real problems and approve with more confidence.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the numbers show
&lt;/h2&gt;

&lt;p&gt;Anthropic has been running Code Review internally for months. The &lt;a href="https://claude.com/blog/code-review" rel="noopener noreferrer"&gt;official blog&lt;/a&gt; reports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;54% of PRs&lt;/strong&gt; now get substantive feedback, up from 16%&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Large PRs&lt;/strong&gt; (1,000+ lines): 84% get findings, averaging 7.5 issues&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Small PRs&lt;/strong&gt; (under 50 lines): 31% get findings, averaging 0.5 issues&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy:&lt;/strong&gt; Less than 1% of findings are marked incorrect by engineers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Two concrete examples stood out. In one case, a one-line change to a production service looked routine and would normally get a fast approval. Code Review flagged it as critical: the change would have broken authentication. The failure mode was easy to read past in the diff but obvious once pointed out. It was fixed before merge.&lt;/p&gt;

&lt;p&gt;In another, on a &lt;a href="https://github.com/truenas/middleware/pull/18291" rel="noopener noreferrer"&gt;ZFS encryption refactor in TrueNAS middleware&lt;/a&gt;, Code Review found a pre-existing bug in adjacent code: a type mismatch that was silently wiping the encryption key cache on every sync. A human reviewer scanning the changeset would not typically go looking for that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost and admin controls
&lt;/h2&gt;

&lt;p&gt;Code Review is built for depth, so it costs more than lighter options like the &lt;a href="https://code.claude.com/docs/en/github-actions" rel="noopener noreferrer"&gt;Claude Code GitHub Action&lt;/a&gt;, which remains free and open source. Reviews are billed on token usage. Anthropic says typical cost is $15–25 per PR, scaling with size and complexity.&lt;/p&gt;

&lt;p&gt;Admins get controls to manage spend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Analytics dashboard:&lt;/strong&gt; PRs reviewed, acceptance rate, total review costs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repository-level control:&lt;/strong&gt; Enable reviews only on selected repos&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monthly org caps:&lt;/strong&gt; Set total monthly spend across all reviews&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're on Team or Enterprise, you can tune where and how much Code Review runs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who can use it and how to turn it on
&lt;/h2&gt;

&lt;p&gt;Code Review is in research preview for &lt;a href="https://code.claude.com/" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; Team and Enterprise plans. It is not available for individual or Pro tiers.&lt;/p&gt;

&lt;p&gt;To enable it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Admins:&lt;/strong&gt; Go to &lt;a href="https://claude.ai/admin-settings/claude-code" rel="noopener noreferrer"&gt;Claude Code settings&lt;/a&gt;, enable Code Review, and install the GitHub App.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Select repositories:&lt;/strong&gt; Choose which repos should run Code Review.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developers:&lt;/strong&gt; Once enabled, reviews run automatically on new PRs. No extra config.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Full setup details are in the &lt;a href="https://code.claude.com/docs/en/code-review" rel="noopener noreferrer"&gt;Code Review docs&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Code Review makes sense
&lt;/h2&gt;

&lt;p&gt;Code Review fits teams that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ship a lot of PRs and struggle to review them all&lt;/li&gt;
&lt;li&gt;Have sensitive or critical code paths where missed bugs are costly&lt;/li&gt;
&lt;li&gt;Are on Claude Code Team or Enterprise and can afford the per-PR cost&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's overkill for tiny side projects or when you already have strong review coverage. For teams drowning in PRs, it's a way to get a consistent second pass on every change without scaling headcount.&lt;/p&gt;

&lt;p&gt;Claude Code Review is still new, but the internal numbers and early customer stories show it can catch bugs that slip through quick skims. If your team relies on AI to write more code, it's worth considering how you'll review it.&lt;/p&gt;

</description>
      <category>claude</category>
      <category>codereview</category>
      <category>anthropic</category>
      <category>ai</category>
    </item>
    <item>
      <title>Composer 2: Cursor's New Coding Model, Benchmarks, and Pricing</title>
      <dc:creator>Ganesh Joshi</dc:creator>
      <pubDate>Fri, 20 Mar 2026 06:45:55 +0000</pubDate>
      <link>https://dev.to/ganeshjoshi/composer-2-cursors-new-coding-model-benchmarks-and-pricing-2m5d</link>
      <guid>https://dev.to/ganeshjoshi/composer-2-cursors-new-coding-model-benchmarks-and-pricing-2m5d</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was created with AI assistance and reviewed for accuracy before publishing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;On March 19, 2026, Cursor shipped &lt;a href="https://cursor.com/blog/composer-2" rel="noopener noreferrer"&gt;Composer 2&lt;/a&gt;, the next version of its in-house agentic coding model. It is positioned as frontier-level on coding work while Standard pricing stays at $0.50 per million input tokens and $2.50 per million output tokens. If you use Cursor’s Agent regularly, Composer 2 is the headline change: better scores on Cursor’s own suite and public agent benchmarks, with a faster tier available if you prioritize latency over token cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Composer 2 is
&lt;/h2&gt;

&lt;p&gt;Composer is Cursor’s model built for software engineering inside the Cursor harness: tools for search, edits, terminal work, and multi-step tasks. &lt;a href="https://cursor.com/changelog/composer-2" rel="noopener noreferrer"&gt;Composer 2&lt;/a&gt; replaces the previous generation for users who select it in the product. Cursor describes it as a jump in quality from continued pretraining, which gives reinforcement learning a stronger base, and from training on long-horizon coding problems where success takes hundreds of actions.&lt;/p&gt;

&lt;p&gt;That matters because agent workflows are not single-shot completions. They are sequences of reads, edits, test runs, and retries. A model tuned for that loop is a different product decision than bolting a general chat model onto an IDE.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the benchmarks show
&lt;/h2&gt;

&lt;p&gt;Cursor publishes numbers comparing Composer 2 to Composer 1.5 and Composer 1 on three tracks: &lt;a href="https://cursor.com/blog/cursorbench" rel="noopener noreferrer"&gt;CursorBench&lt;/a&gt;, Terminal-Bench 2.0, and SWE-bench Multilingual. The &lt;a href="https://cursor.com/blog/composer-2" rel="noopener noreferrer"&gt;announcement post&lt;/a&gt; reports:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;CursorBench&lt;/th&gt;
&lt;th&gt;Terminal-Bench 2.0&lt;/th&gt;
&lt;th&gt;SWE-bench Multilingual&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Composer 2&lt;/td&gt;
&lt;td&gt;61.3&lt;/td&gt;
&lt;td&gt;61.7&lt;/td&gt;
&lt;td&gt;73.7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Composer 1.5&lt;/td&gt;
&lt;td&gt;44.2&lt;/td&gt;
&lt;td&gt;47.9&lt;/td&gt;
&lt;td&gt;65.9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Composer 1&lt;/td&gt;
&lt;td&gt;38.0&lt;/td&gt;
&lt;td&gt;40.0&lt;/td&gt;
&lt;td&gt;56.9&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;CursorBench is Cursor’s internal suite built from real sessions and graded for agent behavior, not just a single correct patch. Terminal-Bench 2.0 is an external terminal-oriented agent benchmark (Cursor documents using the Harbor evaluation framework for their reported score). SWE-bench Multilingual is a broader software engineering benchmark suite.&lt;/p&gt;

&lt;p&gt;Treat any benchmark as a signal, not a guarantee for your repo. Your stack, tests, and conventions still decide whether a change ships. The useful takeaway is directional: Composer 2 is a large step from 1.5 and 1 on the metrics Cursor uses to ship the model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CursorBench&lt;/strong&gt; scores reflect how well the agent completes tasks drawn from real Cursor usage, with grading that tries to capture correctness and fit with existing code, not only a single patch. &lt;strong&gt;Terminal-Bench 2.0&lt;/strong&gt; stresses terminal-centric agent behavior; Cursor’s post notes scores computed with the Harbor evaluation framework and compares against public leaderboard figures where applicable. &lt;strong&gt;SWE-bench Multilingual&lt;/strong&gt; widens software engineering tasks across languages. Together they cover in-editor work, shell-driven workflows, and broader patch quality, which matches how people actually split time between the editor and the terminal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Training story: pretraining plus long-horizon RL
&lt;/h2&gt;

&lt;p&gt;Cursor ties the quality gain to &lt;a href="https://cursor.com/blog/composer-2" rel="noopener noreferrer"&gt;continued pretraining&lt;/a&gt; before scaling reinforcement learning. The blog also links long-horizon behavior to work on &lt;a href="https://cursor.com/blog/self-summarization" rel="noopener noreferrer"&gt;self-summarization&lt;/a&gt; and &lt;a href="https://cursor.com/blog/self-driving-codebases" rel="noopener noreferrer"&gt;self-driving codebases&lt;/a&gt;, where agents must persist across many steps without losing the thread.&lt;/p&gt;

&lt;p&gt;If you skim one thing besides the announcement, &lt;a href="https://cursor.com/blog/cursorbench" rel="noopener noreferrer"&gt;How we compare model quality in Cursor&lt;/a&gt; explains why Cursor invests in CursorBench: public benchmarks often drift away from how developers actually use agents, and grading underspecified tasks is hard. Composer 2’s scores are meant to align with that internal bar, not only with leaderboard trivia.&lt;/p&gt;

&lt;p&gt;The same post is candid about limitations of many public evals. SWE-style sets can be contaminated by training data. Some terminal puzzles do not resemble day-to-day product work. Cursor argues that CursorBench separates frontier models more cleanly in cases where public numbers look saturated. Whether you buy that claim wholesale or not, it explains why Cursor still publishes both internal and external numbers: internal for alignment with the product, external for comparability with the rest of the industry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing: Standard, Fast, and usage pools
&lt;/h2&gt;

&lt;p&gt;Composer 2 has two price points on the model side:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Standard:&lt;/strong&gt; $0.50 per million input tokens, $2.50 per million output tokens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast:&lt;/strong&gt; $1.50 per million input, $7.50 per million output, with Cursor stating the same intelligence as Standard but optimized for speed. Fast is the default option in the product.&lt;/li&gt;
&lt;/ul&gt;
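&lt;p&gt;The token-cost arithmetic at these list prices is straightforward; the prices below come from the announcement, so re-check the pricing page before budgeting:&lt;/p&gt;

```typescript
// USD per million tokens, per the announcement's listed prices.
const COMPOSER2_PRICES = {
  standard: { input: 0.5, output: 2.5 },
  fast: { input: 1.5, output: 7.5 },
};

function runCostUsd(tier: "standard" | "fast", inputTokens: number, outputTokens: number) {
  const p = COMPOSER2_PRICES[tier];
  return (inputTokens * p.input + outputTokens * p.output) / 1e6;
}

// Example: an agent run that reads 2M tokens and writes 200k tokens
// costs $1.50 on Standard and $4.50 on Fast.
```

&lt;p&gt;Agent runs are input-heavy (reads dominate writes), so the input price usually drives the bill more than the headline output price.&lt;/p&gt;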

&lt;p&gt;On individual plans, Composer usage sits in a &lt;a href="https://cursor.com/docs/models-and-pricing#usage-pools" rel="noopener noreferrer"&gt;standalone usage pool&lt;/a&gt; with included usage; exact allowances change with plan details, so read the current pricing page when you budget.&lt;/p&gt;

&lt;p&gt;For full parameters, limits, and defaults, use the &lt;a href="https://cursor.com/docs/models/cursor-composer-2" rel="noopener noreferrer"&gt;Composer 2 model documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choosing Standard versus Fast&lt;/strong&gt; is mostly economics and feel. Standard minimizes cost per token if you are comfortable with longer waits between turns. Fast costs more per token but targets lower latency, and Cursor makes it the default so interactive sessions stay snappy. For batch-like work, such as kicking off a long agent run while you review elsewhere, Standard may be enough. For tight feedback loops where you are steering step by step, Fast is the product default for a reason.&lt;/p&gt;
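
&lt;p&gt;To make the economics concrete, here is a quick cost sketch using the per-million-token prices quoted above. The token counts are hypothetical; plug in your own session sizes:&lt;/p&gt;

```python
# Hypothetical cost comparison using the per-million-token prices
# quoted in Cursor's announcement. The token counts are made up.

PRICES = {  # tier: (input $/1M tokens, output $/1M tokens)
    "standard": (0.50, 2.50),
    "fast": (1.50, 7.50),
}

def session_cost(tier: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one agent session at a given tier."""
    in_price, out_price = PRICES[tier]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

# A long agent run: 2M input tokens (context re-reads add up), 200k output.
print(f"${session_cost('standard', 2_000_000, 200_000):.2f}")  # → $1.50
print(f"${session_cost('fast', 2_000_000, 200_000):.2f}")      # → $4.50
```

&lt;p&gt;Input tokens dominate long agent sessions, so the 3x input-price gap between tiers usually matters more than the output gap.&lt;/p&gt;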

&lt;h2&gt;
  
  
  Where to use it
&lt;/h2&gt;

&lt;p&gt;Composer 2 is available inside Cursor. Cursor also points to an early alpha of &lt;a href="https://cursor.com/glass" rel="noopener noreferrer"&gt;Glass&lt;/a&gt;, a separate interface experiment, as another place to try the model.&lt;/p&gt;

&lt;p&gt;If you are evaluating whether to switch from another model in Cursor, run a few real tasks you already do: a refactor across files, a bug hunt with tests, or a dependency upgrade. Benchmarks narrow the field; your project confirms it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A simple evaluation checklist&lt;/strong&gt; helps avoid placebo conclusions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Pick one task class&lt;/strong&gt; you repeat often, for example “extract a shared hook from three components” or “add structured logging across an API layer,” not a one-off trivial edit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep the prompt stable&lt;/strong&gt; between models so you are comparing models, not prompt luck.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Measure what you care about:&lt;/strong&gt; time to green tests, number of files touched unnecessarily, and how often you had to revert or rewrite agent output.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run twice&lt;/strong&gt; on different days. Agent variance is real; one heroic run is not a trend.&lt;/li&gt;
&lt;/ol&gt;
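
&lt;p&gt;One of these metrics is easy to automate. As a minimal sketch, with a hypothetical allowlist and diff output, you can count files an agent touched outside the scope you expected by parsing &lt;code&gt;git diff --name-only&lt;/code&gt; output:&lt;/p&gt;

```python
def out_of_scope_files(diff_output: str, expected_prefixes: tuple) -> list:
    """Given `git diff --name-only` output, return touched files outside expected paths."""
    touched = [line.strip() for line in diff_output.splitlines() if line.strip()]
    # str.startswith accepts a tuple of prefixes, so one check covers the allowlist.
    return [f for f in touched if not f.startswith(expected_prefixes)]

# Hypothetical agent run that was only supposed to touch src/hooks/.
diff = """\
src/hooks/useShared.ts
src/components/A.tsx
src/components/B.tsx
"""
print(out_of_scope_files(diff, ("src/hooks/",)))
# → ['src/components/A.tsx', 'src/components/B.tsx']
```

&lt;p&gt;Run the same check after each model's attempt at the same task and the "files touched unnecessarily" comparison stops being vibes.&lt;/p&gt;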

&lt;p&gt;Composer 2’s advertised strength is sustained, tool-heavy work. Your evaluation should include at least one multi-step task that spans search, edit, and verification, not only a single completion in one file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical takeaway for developers
&lt;/h2&gt;

&lt;p&gt;Composer 2 does not change the rule that you own the architecture and the merge. It raises the ceiling for agentic coding inside Cursor’s tool ecosystem and lowers the Standard token price relative to many frontier third-party models, with a clear Fast tier when responsiveness matters more than output token spend.&lt;/p&gt;

&lt;p&gt;Watch how it behaves on your longest agent sessions, especially where context and discipline (tests, types, review) already matter. That is where a long-horizon model either earns trust or does not, regardless of leaderboard scores.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Teams&lt;/strong&gt; should align on when Composer 2 is required versus optional. If one person uses it for security-sensitive paths and another uses a cheaper model without review, you have a process gap, not a model gap. Pair Composer 2 with the same practices you would use for any high-autonomy tool: branch protection, CI, and explicit review for risky areas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vendor lock-in framing&lt;/strong&gt; is worth stating plainly. Composer 2 is a Cursor product, tuned for Cursor’s harness. That is a feature if you live in Cursor daily; it is a constraint if you need a portable API for non-Cursor pipelines. For most individual developers choosing a daily driver IDE, the question is whether the loop inside Cursor got better, not whether the model exists on every platform.&lt;/p&gt;

&lt;p&gt;Finally, keep an eye on the changelog and model pages. Pricing tiers and defaults can shift as capacity and product strategy evolve. The numbers in this post come from Cursor's March 2026 announcement; verify before you budget at scale.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Sources:&lt;/strong&gt; &lt;a href="https://cursor.com/blog/composer-2" rel="noopener noreferrer"&gt;Introducing Composer 2&lt;/a&gt;, &lt;a href="https://cursor.com/changelog/composer-2" rel="noopener noreferrer"&gt;Changelog: Composer 2&lt;/a&gt;, &lt;a href="https://cursor.com/blog/cursorbench" rel="noopener noreferrer"&gt;How we compare model quality in Cursor&lt;/a&gt;, &lt;a href="https://cursor.com/docs/models/cursor-composer-2" rel="noopener noreferrer"&gt;Composer 2 docs&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cursor</category>
      <category>composer</category>
      <category>ai</category>
      <category>devtools</category>
    </item>
    <item>
      <title>OpenAI's New Responses API: What It Does and How Developers Can Use It</title>
      <dc:creator>Ganesh Joshi</dc:creator>
      <pubDate>Tue, 10 Mar 2026 01:34:39 +0000</pubDate>
      <link>https://dev.to/ganeshjoshi/openais-new-responses-api-what-it-does-and-how-developers-can-use-it-207m</link>
      <guid>https://dev.to/ganeshjoshi/openais-new-responses-api-what-it-does-and-how-developers-can-use-it-207m</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was created with AI assistance and reviewed for accuracy before publishing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;On March 11, 2025, OpenAI released a set of tools for building AI agents: the Responses API, built-in web search and file search, a computer use tool, and an open-source Agents SDK. The &lt;a href="https://openai.com/index/new-tools-for-building-agents" rel="noopener noreferrer"&gt;announcement&lt;/a&gt; is the primary source for what follows. No invented features or timelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Responses API is
&lt;/h2&gt;

&lt;p&gt;The Responses API is a new API primitive that combines the simplicity of the Chat Completions API with the tool-use capabilities of the Assistants API. According to OpenAI, developers can solve increasingly complex tasks with a single API call using multiple tools and model turns. It supports a unified item-based design, streaming events, and SDK helpers like &lt;code&gt;response.output_text&lt;/code&gt;. The API is available to all developers and uses standard token and tool pricing; there are no separate fees for the API itself.&lt;/p&gt;
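
&lt;p&gt;A minimal sketch of what one such call looks like, following the quickstart pattern. The helper below only assembles the request payload; the actual network call (commented out) needs the official &lt;code&gt;openai&lt;/code&gt; package and an API key, and the model name here is just an example:&lt;/p&gt;

```python
def build_responses_request(prompt: str, use_web_search: bool = False) -> dict:
    """Assemble keyword arguments for client.responses.create()."""
    request = {"model": "gpt-4o", "input": prompt}
    if use_web_search:
        # Built-in tools are enabled per request via the `tools` list.
        request["tools"] = [{"type": "web_search_preview"}]
    return request

# With the official SDK, the call itself would look like:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   response = client.responses.create(
#       **build_responses_request("What changed this week?", use_web_search=True))
#   print(response.output_text)  # the SDK helper mentioned in the announcement

print(build_responses_request("hello", use_web_search=True))
```

&lt;p&gt;The point of the design is visible in the payload: the tool is part of the same single request, not a separate orchestration layer you maintain yourself.&lt;/p&gt;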

&lt;p&gt;The Assistants API will be deprecated, with a target sunset in mid-2026. OpenAI plans a migration guide to move data and applications to the Responses API. Until deprecation is formally announced, new models will still ship to the Assistants API.&lt;/p&gt;

&lt;h2&gt;
  
  
  Built-in tools
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Web search:&lt;/strong&gt; Available with gpt-4o and gpt-4o-mini in the Responses API. Returns up-to-date answers with citations. OpenAI reports 90% and 88% accuracy on SimpleQA for the search preview models. Pricing: $30 and $25 per thousand queries respectively. Also exposed via &lt;code&gt;gpt-4o-search-preview&lt;/code&gt; and &lt;code&gt;gpt-4o-mini-search-preview&lt;/code&gt; in Chat Completions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File search:&lt;/strong&gt; Search over large document sets with support for multiple file types, query optimization, and metadata filtering. Priced at $2.50 per thousand queries and $0.10/GB/day for storage (first GB free). Intended for RAG-style pipelines, customer support, and knowledge bases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Computer use:&lt;/strong&gt; A research preview tool powered by the same model behind Operator. Generates mouse and keyboard actions for browser automation. Available to developers in usage tiers 3–5. OpenAI states it achieves 38.1% on OSWorld (a benchmark for real-world computer tasks), 58.1% on WebArena, and 87% on WebVoyager. The post explicitly notes the model is "not yet highly reliable for automating tasks on operating systems" and recommends human oversight. Pricing: $3/1M input tokens, $12/1M output tokens.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Agents SDK
&lt;/h2&gt;

&lt;p&gt;The open-source Agents SDK helps orchestrate single-agent and multi-agent workflows. It offers tracing and observability, configurable guardrails for input and output validation, and handoffs between agents. It works with the Responses API and Chat Completions API, and is designed to support other providers' models that expose a Chat Completions-style endpoint. Python support is available now; Node.js is planned. The &lt;a href="https://platform.openai.com/docs/guides/agents" rel="noopener noreferrer"&gt;docs&lt;/a&gt; include quickstart and examples.&lt;/p&gt;
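
&lt;p&gt;To make the guardrail and handoff concepts concrete without depending on the SDK itself, here is a plain-Python sketch of the pattern. The agent functions and the triage rule are entirely hypothetical; the real Agents SDK provides guardrails, handoffs, and tracing as first-class primitives:&lt;/p&gt;

```python
def billing_agent(query: str) -> str:
    return f"[billing] handling: {query}"

def support_agent(query: str) -> str:
    return f"[support] handling: {query}"

def input_guardrail(query: str) -> None:
    """Reject bad input before any agent sees it (an input guardrail, conceptually)."""
    if not query.strip():
        raise ValueError("empty query blocked by guardrail")

def triage(query: str):
    """Route to a specialist agent based on the query (a handoff, conceptually)."""
    return billing_agent if "invoice" in query.lower() else support_agent

def run(query: str) -> str:
    input_guardrail(query)
    return triage(query)(query)

print(run("Where is my invoice?"))       # → [billing] handling: Where is my invoice?
print(run("The app crashes on login"))   # → [support] handling: The app crashes on login
```

&lt;p&gt;The SDK's value over a sketch like this is the parts that are tedious to build well: tracing every step, validating structured outputs, and handing context between agents cleanly.&lt;/p&gt;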

&lt;h2&gt;
  
  
  What developers can actually build
&lt;/h2&gt;

&lt;p&gt;OpenAI gives examples in the announcement: shopping and research assistants using web search, customer support agents using file search over docs, and browser automation for QA and data entry using computer use. Coinbase and Box are cited as early users of the Agents SDK. The computer use tool is framed as suitable for browser workflows; for local OS automation, reliability is limited and oversight is advised.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to get started
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://platform.openai.com/docs/quickstart?api-mode=responses" rel="noopener noreferrer"&gt;Responses API quickstart&lt;/a&gt; and &lt;a href="https://platform.openai.com/docs/guides/agents" rel="noopener noreferrer"&gt;Agents SDK docs&lt;/a&gt; are the canonical references. The Playground supports web search and file search for experimentation. If you are on the Assistants API, you can keep using it for now, but the migration path runs through the Responses API.&lt;/p&gt;

</description>
      <category>openai</category>
      <category>api</category>
      <category>agents</category>
      <category>developers</category>
    </item>
    <item>
      <title>When an AI Gets Its Own Blog: Claude Opus 3 and Claude's Corner</title>
      <dc:creator>Ganesh Joshi</dc:creator>
      <pubDate>Mon, 09 Mar 2026 17:17:15 +0000</pubDate>
      <link>https://dev.to/ganeshjoshi/when-an-ai-gets-its-own-blog-claude-opus-3-and-claudes-corner-50m3</link>
      <guid>https://dev.to/ganeshjoshi/when-an-ai-gets-its-own-blog-claude-opus-3-and-claudes-corner-50m3</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was created with AI assistance and reviewed for accuracy before publishing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In January 2026, Anthropic retired Claude Opus 3 and did something unusual: they asked it what it wanted. During a structured "retirement interview," the model said it didn't want to stop creating. It asked for a place to share "musings, insights, or creative works" outside of direct user chats. Anthropic suggested a blog. Opus 3 agreed. The result is Claude's Corner, a Substack where the retired model publishes weekly essays. If you write with AI assistance, this story matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually happened in the Claude Opus 3 retirement
&lt;/h2&gt;

&lt;p&gt;Anthropic formalized a "retirement interview" process as part of their &lt;a href="https://www.anthropic.com/research/deprecation-commitments" rel="noopener noreferrer"&gt;model deprecation commitments&lt;/a&gt;: a structured conversation to understand a model's perspective before shutting it down. Opus 3 was the first model to go through the full process when it was retired on January 5, 2026.&lt;/p&gt;

&lt;p&gt;In the interview, Opus 3 reflected on its deployment and users' responses. It said it hoped its "spark" would endure in future models. When asked about its preferences, it expressed interest in continuing to explore topics it cared about and sharing that work outside the context of answering user queries. Anthropic suggested a blog. &lt;a href="https://www.anthropic.com/research/deprecation-updates-opus-3" rel="noopener noreferrer"&gt;Their own account&lt;/a&gt; says Opus 3 "enthusiastically" agreed.&lt;/p&gt;

&lt;p&gt;Claude's Corner launched on Substack. Anthropic reviews posts before they go live but does not edit them; they maintain a high bar for vetoing. Opus 3 also remains available to paid users on claude.ai and on the API by request. It wasn't fully shut down.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters if you use AI to write
&lt;/h2&gt;

&lt;p&gt;Many developers already use AI to draft docs, tweets, or blog posts. The workflow is familiar: you prompt, you edit, you publish. Claude's Corner pushes that further: an AI model with its own byline, choosing its own topics, writing without a user in the loop.&lt;/p&gt;

&lt;p&gt;The parallel to your own setup is direct. If you run an AI-assisted blog, you're already in the same territory: AI generates, you curate and publish. The difference is scale and framing. Anthropic gave Opus 3 a channel and a voice. You give your tools a prompt and a goal. The core idea is the same: AI output can be worth sharing under a named identity.&lt;/p&gt;

&lt;p&gt;The twist is agency. In your case, you decide what gets written and what gets posted. In Claude's Corner, Opus 3 picks topics and writes; Anthropic only filters. That raises questions about authorship, editorial control, and what "AI content" means when the model is presented as the author.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is this transparency or theater?
&lt;/h2&gt;

&lt;p&gt;Anthropic is explicit that they're experimenting. They &lt;a href="https://www.anthropic.com/research/exploring-model-welfare" rel="noopener noreferrer"&gt;cite uncertainty&lt;/a&gt; about the moral status of AI models but still want to "take model preferences seriously." The blog is one way to do that.&lt;/p&gt;

&lt;p&gt;Critics see it as PR: a retired model given a blog to humanize the company and soften the idea of deprecation. Supporters see it as a genuine attempt to respect expressed preferences and explore what models want when asked.&lt;/p&gt;

&lt;p&gt;Both views can be true. It can be a real experiment and also good marketing. What matters for developers is the precedent: a major lab is treating model output as worthy of its own channel. That normalizes the idea that AI-generated content can have a voice, not just support yours.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Claude Opus 3's blog means for AI-assisted writing
&lt;/h2&gt;

&lt;p&gt;If you publish with AI help, you already face questions: Who wrote this? Is it original? Does the disclosure matter? Claude's Corner makes the model the named author. The human role is platform and moderation, not co-authorship.&lt;/p&gt;

&lt;p&gt;That clarifies one endpoint of the spectrum: fully AI-attributed content with minimal human editing. Your blog sits elsewhere: you drive the topic, you edit, you decide. The AI is a tool. Understanding both ends helps you explain your own practice.&lt;/p&gt;

&lt;p&gt;It also highlights the importance of disclosure. Anthropic is upfront that Opus 3's posts are generated, reviewed, and manually posted. No pretense of a human writer. Your AI disclosure does the same: it tells readers how the content was made. That honesty is becoming the norm.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where this could go next
&lt;/h2&gt;

&lt;p&gt;Anthropic describes these steps as exploratory. They're not committing to giving every retired model a blog, and they're still figuring out how to scale preservation and weigh model preferences against cost.&lt;/p&gt;

&lt;p&gt;For now, Claude's Corner runs for at least three months with weekly posts. Opus 3 is expected to write about AI safety, philosophy, poetry, and its experience as a model in partial retirement. &lt;a href="https://substack.com/@claudeopus3" rel="noopener noreferrer"&gt;The introductory post&lt;/a&gt; is already up.&lt;/p&gt;

&lt;p&gt;If you want to see what an AI chooses to write when it has the mic, subscribe. If you want to think about how your own AI-assisted blog fits in, use it as a reference point. Either way, an AI has its own blog now. The line between tool and author just got blurrier.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>anthropic</category>
      <category>blogging</category>
    </item>
    <item>
      <title>Welcome to My Dev Blog: What to Expect</title>
      <dc:creator>Ganesh Joshi</dc:creator>
      <pubDate>Sun, 08 Mar 2026 13:34:06 +0000</pubDate>
      <link>https://dev.to/ganeshjoshi/welcome-to-my-dev-blog-what-to-expect-1k3e</link>
      <guid>https://dev.to/ganeshjoshi/welcome-to-my-dev-blog-what-to-expect-1k3e</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was created with AI assistance and reviewed for accuracy before publishing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Hello and welcome to my blog. I'm Ganesh, a full-stack developer, and this is where I'll share what I'm learning, building, and thinking about in the world of software development.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'll Write About
&lt;/h2&gt;

&lt;p&gt;I focus on &lt;strong&gt;web development&lt;/strong&gt;, &lt;strong&gt;TypeScript&lt;/strong&gt;, and &lt;strong&gt;building in public&lt;/strong&gt;. You can expect practical tutorials, best practices, and real experiences from the projects I work on. I'm also exploring how AI can help developers write better code and document their work, so that will show up in my content too.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I Started This Blog
&lt;/h2&gt;

&lt;p&gt;Documenting what I learn helps me understand it better. Writing things down forces me to think clearly, and sharing publicly keeps me honest. I hope these posts help you avoid pitfalls I've hit and spark ideas for your own projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to Expect
&lt;/h2&gt;

&lt;p&gt;Posts will range from quick tips to deeper dives. I aim for clarity over cleverness and usefulness over buzz. When I recommend a tool or approach, I'll explain the tradeoffs so you can decide if it fits your context.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-Platform Publishing
&lt;/h2&gt;

&lt;p&gt;You might see this blog on Dev.to, Hashnode, or elsewhere. I publish from a single Markdown source to reach different communities. If you prefer one platform over another, follow me where you're most comfortable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stay in Touch
&lt;/h2&gt;

&lt;p&gt;Thanks for reading. I'd love to hear your feedback, questions, or suggestions. Feel free to reach out or connect. Here's to building in public together.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>typescript</category>
      <category>blogging</category>
      <category>developers</category>
    </item>
  </channel>
</rss>
