Scarlett Attensil for LaunchDarkly

Posted on • Originally published at launchdarkly.com

Build AI Configs with Agent Skills in Claude Code, Cursor, or Windsurf

LaunchDarkly Agent Skills let you build AI Configs by describing what you want. Tell your coding assistant to create an agent, and it handles the API calls, targeting rules, and tool definitions for you.

In this quickstart, you'll create AI Configs using natural language, then run a sample LangGraph app that consumes them. You'll build a "Side Project Launcher"—a three-agent pipeline that validates ideas, writes landing pages, and recommends tech stacks.

Prefer video? Watch Build a multi-agent system with LaunchDarkly Agent Skills for a walkthrough of this tutorial.

What you'll build

A three-agent pipeline called "Side Project Launcher":

  • Idea Validator: researches competitors, analyzes market gaps, scores viability
  • Landing Page Writer: generates headlines, copy, and CTAs based on your value prop
  • Tech Stack Advisor: recommends frameworks, databases, and hosting based on your requirements

By the end, you'll have working AI Configs in LaunchDarkly and a sample app that fetches them at runtime.

Prerequisites

  • LaunchDarkly account (free trial works)
  • Claude Code, Cursor, or Windsurf installed
  • LaunchDarkly API access token (LD_API_KEY): used by Agent Skills to create projects and AI Configs. Get it from Authorization settings. Requires the writer role or a custom role with createProject and createAIConfig permissions.
  • LaunchDarkly SDK key (LAUNCHDARKLY_SDK_KEY): used by your app at runtime to fetch AI Configs. Found in your project's SDK settings after creation.
  • Model provider API key (e.g., ANTHROPIC_API_KEY): used by the sample app to call the model. Get it from your provider (Anthropic, OpenAI, etc.).

Store all keys in .env and never commit them to version control.
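A .env file for this tutorial might look like the following (all values are placeholders):

```
LD_API_KEY=api-xxxxx
LAUNCHDARKLY_SDK_KEY=sdk-xxxxx
ANTHROPIC_API_KEY=sk-ant-xxxxx
```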

Want to follow along? Start your 14-day free trial of LaunchDarkly. No credit card required.

30-second quickstart

If you just want to get started, here's the fastest path:

1. Install skills:

npx skills add launchdarkly/agent-skills

Or ask your editor: "Download and install skills from https://github.com/launchdarkly/agent-skills"

Restart your editor after installing.

2. Set your token:

export LD_API_KEY="api-xxxxx"

3. Build something:

Use the prompt in Build a multi-agent project below, or describe your own agents. The assistant creates everything and gives you links to view them in LaunchDarkly.

Install Agent Skills in Claude Code, Cursor, or Windsurf

Agent Skills work with any editor that supports the Agent Skills specification.

Step 1: Install the skills

You have two options:

Option A: Use skills.sh (recommended)

skills.sh is an open directory for agent skills. Install LaunchDarkly skills with one command:

npx skills add launchdarkly/agent-skills

Option B: Ask your AI assistant

Open your editor and ask:

Download and install skills from https://github.com/launchdarkly/agent-skills

Both methods install the same skills.

Step 2: Restart your editor

Close and reopen your editor. The skills load on startup.

How to verify: Type /aiconfig in Claude Code. You should see autocomplete suggestions. In Cursor, ask "what LaunchDarkly skills do you have?" and the assistant should list them.

Step 3: Set your API token

export LD_API_KEY="api-xxxxx"

Get your token from LaunchDarkly Authorization settings. The writer role works, or use a custom role with createProject and createAIConfig permissions.

Build a multi-agent project

Now let's build something real: a Side Project Launcher that helps you validate ideas, write landing pages, and pick the right tech stack. Tell the assistant:

Create AI Configs for a "Side Project Launcher" with three configs.
Use Anthropic Claude models for all configs.

1. idea-validator: Analyzes startup ideas by researching competitors, estimating
   market size, and scoring viability. Use variables for {{idea}}, {{target_audience}},
   and {{problem_statement}}. Give it tools for web search and competitor analysis.

2. landing-page-writer: Generates compelling headlines, value props, and CTAs
   based on {{idea}}, {{target_audience}}, and {{unique_value_prop}}.
   Give it tools for copy generation and A/B test suggestions.

3. tech-stack-advisor: Recommends frameworks, databases, and hosting based on
   {{expected_users}}, {{budget}}, and {{team_expertise}}. Give it a tool for
   stack recommendations.

Put them in a new project called side-project-launcher.

What the assistant creates

The assistant uses several skills automatically:

  1. aiconfig-projects: creates the LaunchDarkly project
  2. aiconfig-create: builds each agent configuration with variables
  3. aiconfig-tools: defines tools for function calling

Expected output:

Creating project: side-project-launcher
Creating AI Config: idea-validator
  - Model: anthropic.claude-sonnet-4-20250514
  - Variables: idea, target_audience, problem_statement
  - Instructions: "Validate the idea: {{idea}}. Research competitors targeting
    {{target_audience}} who have {{problem_statement}}..."
  - Tools: web_search, competitor_analysis
Creating AI Config: landing-page-writer
  - Model: anthropic.claude-sonnet-4-20250514
  - Variables: idea, target_audience, unique_value_prop
  - Instructions: "Write landing page copy for {{idea}}. The target audience is
    {{target_audience}}. Lead with: {{unique_value_prop}}..."
  - Tools: generate_copy, suggest_ab_tests
Creating AI Config: tech-stack-advisor
  - Model: anthropic.claude-sonnet-4-20250514
  - Variables: expected_users, budget, team_expertise
  - Instructions: "Recommend a tech stack for {{expected_users}} users,
    {{budget}} budget, team knows {{team_expertise}}..."
  - Tools: recommend_stack

Done! View your project:
https://app.launchdarkly.com/side-project-launcher/production/ai-configs


Claude Code showing created AI Configs with models, tools, variables, and SDK keys

The variables ({{idea}}, {{target_audience}}, etc.) get filled in at runtime when you call the SDK. That's how each user gets personalized output.
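Conceptually, the substitution is simple mustache-style templating. A rough Python sketch of what the SDK does for you when you pass a variables dict (illustration only; the real SDK's implementation may differ):

```python
def render(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with runtime values (illustration only)."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", str(value))
    return template

instructions = "Validate the idea: {{idea}}. Research competitors targeting {{target_audience}}."
print(render(instructions, {"idea": "AI recipe planner", "target_audience": "busy parents"}))
# Validate the idea: AI recipe planner. Research competitors targeting busy parents.
```

Because the template lives in LaunchDarkly and the values arrive at runtime, you can rewrite the prompt without touching application code.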

What it looks like in LaunchDarkly


AI Configs list in LaunchDarkly showing the three agents: idea-validator, landing-page-writer, and tech-stack-advisor

After creation, your LaunchDarkly project contains:

  • 3 AI Configs with instructions, model settings, and variables
  • 3 tools with parameter definitions ready for function calling
  • Default targeting serving the configuration to all users


Default targeting settings showing the configuration served to all users

Each agent has its own configuration with instructions, variables, and tools. Here's the idea-validator:


Idea validator AI Config showing instructions, model settings, and variables

The landing-page-writer and tech-stack-advisor follow the same pattern with their own instructions and tools.

Run the Side Project Launcher

The full working code is available on GitHub: launchdarkly-labs/side-project-researcher

Clone it and run:

git clone https://github.com/launchdarkly-labs/side-project-researcher.git
cd side-project-researcher
pip install -r requirements.txt
cp .env.example .env
# Edit .env with your SDK key and Anthropic API key
python side_project_launcher_langgraph.py

You'll need both the LaunchDarkly SDK key (from your project's SDK settings) and your Anthropic API key in the .env file. The assistant can surface the SDK key from your project details, but store it in .env rather than hardcoding it.

The app prompts you for your idea details:


Terminal prompts asking for idea, target audience, problem statement, and tech requirements

Then each agent runs in sequence, fetching its config from LaunchDarkly and generating output:


Idea validator agent output with market analysis and viability score


Tech stack advisor output recommending frameworks and infrastructure

Connect to your framework

The AI Config stores your model, instructions, and tools. The SDK fetches the config and handles variable substitution automatically.

The snippets below show the integration pattern. They omit imports, error handling, and tool wiring for brevity. For complete, runnable code, use the sample repo.

Initialize the SDK

import os

import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient, AIAgentConfigDefault

# Initialize once at startup
SDK_KEY = os.environ.get('LAUNCHDARKLY_SDK_KEY')
ldclient.set_config(Config(SDK_KEY))
ld_client = ldclient.get()
ai_client = LDAIClient(ld_client)

Fetch agent configs

def build_context(user_id: str, **attributes):
    """Build LaunchDarkly context for targeting."""
    builder = Context.builder(user_id)
    for key, value in attributes.items():
        builder.set(key, value)
    return builder.build()

def get_agent_config(config_key: str, context: Context, variables: dict = None):
    """Get agent-mode AI Config from LaunchDarkly."""
    fallback = AIAgentConfigDefault(enabled=False)
    return ai_client.agent_config(config_key, context, fallback, variables or {})

Wire it to LangGraph

LangGraph orchestrates multi-agent workflows as a graph of nodes, but you can use any orchestrator—CrewAI, LlamaIndex, Bedrock AgentCore, or custom code. To compare options, read Compare AI orchestrators.

By wiring AI Configs to each node, your agents fetch their model, instructions, and tools dynamically from LaunchDarkly. This lets you swap models within a provider (e.g., Sonnet to Haiku), update prompts, or disable agents without redeploying.

The AI Config defines tool schemas, but your code must implement the actual tool handlers. The sample repo shows how to bind config.tools to LangChain tool functions. For this tutorial, the tools are defined but not wired—the agents respond based on their instructions alone.
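If you do want to wire a tool up, the pattern is one handler per tool schema, bound to the model. A minimal sketch, where the handler name matches the recommend_stack schema created earlier but the logic is entirely hypothetical:

```python
def recommend_stack(expected_users: int, budget: str, team_expertise: str) -> str:
    """Hypothetical handler for the recommend_stack tool schema.

    In the sample repo, a function like this would be wrapped with
    LangChain's @tool decorator and passed to llm.bind_tools([...]).
    """
    if expected_users < 10_000:
        return (
            f"Start simple: a full-stack framework your team knows "
            f"({team_expertise}), SQLite, and a single VPS."
        )
    return (
        f"Plan for scale: a {team_expertise}-friendly framework, "
        f"Postgres, and managed hosting within your {budget} budget."
    )
```

The schema (name, description, parameters) stays in LaunchDarkly; only the implementation lives in your code, so editing the schema in the UI never requires a redeploy.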

Each agent becomes a node in your graph:

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage
from langgraph.graph import StateGraph, END

# SideProjectState is a TypedDict holding the pipeline state (defined in the sample repo)
def idea_validator_node(state: SideProjectState) -> SideProjectState:
    context = build_context(state["user_id"])
    config = get_agent_config("idea-validator", context, {
        "idea": state["idea"],
        "target_audience": state["target_audience"],
        "problem_statement": state["problem_statement"]
    })

    if config.enabled:
        llm = ChatAnthropic(model=config.model.name)
        messages = [
            SystemMessage(content=config.instructions),
            HumanMessage(content="Please validate this idea and provide your analysis.")
        ]
        response = llm.invoke(messages)
        state["idea_validation"] = response.content
        config.tracker.track_success()  # Track metrics

    return state

# Build the graph
workflow = StateGraph(SideProjectState)
workflow.add_node("validate_idea", idea_validator_node)
workflow.add_node("write_landing_page", landing_page_writer_node)
workflow.add_node("recommend_stack", tech_stack_advisor_node)

workflow.set_entry_point("validate_idea")
workflow.add_edge("validate_idea", "write_landing_page")
workflow.add_edge("write_landing_page", "recommend_stack")
workflow.add_edge("recommend_stack", END)

app = workflow.compile()

# Don't forget to flush before exiting
ld_client.flush()

To see a full example running across LangGraph, Strands, and OpenAI Swarm, read Compare AI orchestrators.

What you can do next

Once your agents are in LaunchDarkly:

  • A/B test variations: split traffic between prompt variations or model sizes (e.g., Sonnet vs Haiku) to see which performs better
  • Target by segment: premium users get one variation, free users get another
  • Kill switch: disable a misbehaving agent instantly from the UI
  • Track costs: monitor tokens and latency per variation

To learn more about targeting and experimentation, read AI Configs Best Practices.

Troubleshooting

Skills installed but not working: Restart your editor after installing skills. They load on startup.

"Permission denied" errors: Check that your API token has createProject and createAIConfig permissions. The writer role includes both.

Config comes back disabled: Your targeting rules may not match the context you're passing. Check that default targeting is enabled, or that your context attributes match your rules.

Tools defined but not executing: The AI Config defines tool schemas, but your code must implement handlers. See the sample repo for tool binding examples.

Can't find SDK key: After Agent Skills creates your project, find the SDK key in your project's Settings > Environments > SDK key. Copy it to your .env file.

FAQ

Do I need Claude Code, or does this work in Cursor/Windsurf?

Agent Skills work in any editor that supports the Agent Skills specification. This includes Claude Code, Cursor, and Windsurf. The installation process is the same.

What's the difference between Agent Skills and the MCP server?

Both give your AI assistant access to LaunchDarkly. Agent Skills are text-based playbooks that teach the assistant workflows. The MCP server exposes LaunchDarkly's API as tools. You can use either or both.

What permissions does my API token need?

The writer role works, or use a custom role with createProject and createAIConfig permissions.

Where do I see the created AI Configs?

In the LaunchDarkly UI: go to your project, then AI Configs in the left sidebar. Each config shows its instructions, model, tools, and targeting rules.

How do I delete or reset generated configs?

In the LaunchDarkly UI, open the AI Config and click Archive (or Delete if available). Or ask the assistant: "Delete the AI Config called researcher-agent in project valentines-day."

Can I use this with frameworks other than LangGraph?

Yes. The SDK returns model name, instructions, and tools as data. You wire that into whatever framework you use: CrewAI, LlamaIndex, Bedrock AgentCore, or custom code.
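For instance, a tiny adapter can turn the config into plain keyword arguments for whatever client or framework you use. The attribute names follow the snippets above; to_framework_kwargs is a hypothetical helper, not part of the SDK:

```python
from types import SimpleNamespace

def to_framework_kwargs(config) -> dict:
    """Map a LaunchDarkly agent config to generic framework inputs (hypothetical helper)."""
    return {
        "model": config.model.name,             # e.g., anthropic.claude-sonnet-4-20250514
        "system_prompt": config.instructions,   # variables already substituted by the SDK
        "tools": config.tools,                  # tool schemas; implement handlers yourself
    }

# Quick check with a stub standing in for a fetched config:
stub = SimpleNamespace(model=SimpleNamespace(name="claude-sonnet-4"), instructions="...", tools=[])
kwargs = to_framework_kwargs(stub)
```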

Does this work for completion mode (chat) or just agent mode?

Both. Use ai_client.completion_config() for completion mode (chat with message arrays) or ai_client.agent_config() for agent mode (instructions for multi-step workflows). To learn more, read Agent mode vs completion mode.
