Originally published in the LaunchDarkly Docs
Overview
When developers first encounter LaunchDarkly, they often see it as a feature flag management tool. Calling LaunchDarkly a feature flag tool is like calling a Swiss Army knife "a device for opening wine bottles": technically true, and still useful, but you're missing about 90% of the picture.
LaunchDarkly has quietly evolved into a full feature delivery platform that uses flags as the foundation for four interconnected pillars: Release Management, Observability & Monitoring, Analytics & Experimentation, and AI Configs.
Understanding how these pillars work together, including the backend infrastructure that powers them, reveals why LaunchDarkly has become mission-critical for modern software delivery.
The Foundation: Feature Flag Management
At the heart of LaunchDarkly lies its feature flag management system. Think of feature flags as the control switches for your application's behavior. But unlike traditional configuration management, LaunchDarkly's flags are dynamic, real-time, and incredibly sophisticated.
Feature flag management serves as the foundation layer because it enables everything else. Without the ability to control feature visibility and behavior at runtime, none of the other pillars could function. This foundation includes:
- Feature Flags: Binary or multi-variant toggles that control feature availability.
- AI Configs: Dynamic configuration for AI model parameters and behaviors.
- Targeting Rules: Sophisticated logic for determining who sees what features.
- Context Management: User, device, and organizational context for personalized experiences.
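To make the foundation concrete, here is a minimal sketch of how a targeting rule might evaluate a context. All names here (`Rule` clauses, `evaluate_flag`) are illustrative, not the actual LaunchDarkly SDK API:

```python
# Hypothetical model: a flag holds ordered rules, each with clauses that must
# all match the context; the first matching rule wins, else the fallthrough.

def matches(rule, context):
    """True if the context satisfies every clause in the rule."""
    return all(context.get(attr) in allowed
               for attr, allowed in rule["clauses"].items())

def evaluate_flag(flag, context):
    """Return the variation of the first matching rule, else the fallthrough."""
    for rule in flag["rules"]:
        if matches(rule, context):
            return rule["variation"]
    return flag["fallthrough"]

beta_flag = {
    "rules": [
        {"clauses": {"plan": {"enterprise"}, "region": {"us", "eu"}},
         "variation": "on"},
    ],
    "fallthrough": "off",
}

print(evaluate_flag(beta_flag, {"plan": "enterprise", "region": "eu"}))  # on
print(evaluate_flag(beta_flag, {"plan": "free", "region": "us"}))        # off
```

The point of the model: targeting decisions happen at runtime against a context, which is what makes everything in the later pillars possible.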
I spent an embarrassing amount of time in my hammock thinking about why this is the foundation layer. The answer is simple: without runtime control over features, you're back to deploying code every time you want to change something. And if you've ever been on-call during a Friday deployment that went sideways, you know that's its own level of trauma.
The Four Pillars (Or how to sleep through deployments)
1. Release Management (Yellow)
The Release Management pillar focuses on safely delivering features to production. This includes:
Releases: Traditional feature rollouts with full control over timing and audience.
Guarded Rollouts: Progressive rollouts combined with real-time monitoring and automatic rollback capabilities. This is the feature that will single-handedly help you get more sleep. When you enable a guarded rollout, LaunchDarkly monitors metrics like error rates, latency, and custom business metrics. If it detects a regression, it can automatically roll back the change before users are impacted.
Progressive Rollouts: Automated gradual rollouts that increase traffic to a new feature over time (e.g., 10% -> 25% -> 50% -> 100%).
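A key property of progressive rollouts is that users don't flicker in and out of the treatment as the percentage grows. One common way to get that (a sketch of the general technique, not LaunchDarkly's actual algorithm) is deterministic bucketing: hash each user into a stable position on a 0–100 scale, and enable the feature for everyone below the current rollout percentage.

```python
import hashlib

def bucket(user_key: str, flag_key: str) -> float:
    """Hash user + flag into a stable value in [0, 100)."""
    digest = hashlib.sha256(f"{flag_key}:{user_key}".encode()).hexdigest()
    return int(digest[:8], 16) % 10000 / 100.0

def is_enabled(user_key: str, flag_key: str, rollout_pct: float) -> bool:
    """A user is in the treatment when their bucket falls below the rollout %."""
    return bucket(user_key, flag_key) < rollout_pct

# As the rollout steps through 10% -> 25% -> 50% -> 100%, users already in
# the treatment stay in it, because their bucket value never changes.
users = [f"user-{i}" for i in range(1000)]
for pct in (10, 25, 50, 100):
    enabled = sum(is_enabled(u, "new-checkout", pct) for u in users)
    print(f"{pct}%: {enabled} users enabled")
```

Hashing on both the user key and the flag key means a user who lands early in one rollout isn't automatically early in every rollout.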
The key insight here is that Release Management isn't about deploying code anymore. It's about deploying business value while your code sits safely in production, waiting for permission to run.
2. Observability & Monitoring (Blue)
This pillar answers life's most important production question: "Wait, what's happening right now?" This includes:
Session Replay: Record and replay user sessions to understand exactly what users experienced. If a user says a button didn't work, you can literally watch what they did.
Feature Monitoring: Track feature health, performance, and adoption in real-time.
Alerts: Proactive notifications when metrics breach thresholds.
Errors, Logs, Traces: The ultimate trio of debugging, all in one place, all correlated with which flags were active when things went sideways.
Dashboards: Customizable visualizations of all observability data.
What makes LaunchDarkly's observability unique is the feature-level granularity. Traditional monitoring says "error rate increased at 2:47pm." LaunchDarkly says "error rate increased at 2:47pm when you toggled the new-payment-processor flag to 30% rollout." One of these lets you fix the problem from your hammock. The other leads you down a git rabbit hole.
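That feature-level granularity boils down to one idea: annotate every error event with the flag variations that were active for that user, so spikes can be grouped by variation. A minimal sketch (the function names are illustrative, not the LaunchDarkly API):

```python
from collections import Counter

def record_error(events, error, active_flags):
    """Store an error event tagged with the flag variations active at the time."""
    events.append({"error": error, "flags": dict(active_flags)})

def errors_by_variation(events, flag_key):
    """Count errors grouped by the variation of a single flag."""
    return Counter(e["flags"].get(flag_key, "unset") for e in events)

events = []
record_error(events, "timeout", {"new-payment-processor": "on"})
record_error(events, "timeout", {"new-payment-processor": "on"})
record_error(events, "500",     {"new-payment-processor": "off"})

print(errors_by_variation(events, "new-payment-processor"))
# Counter({'on': 2, 'off': 1})
```

With this shape of data, "error rate increased at 2:47pm" becomes "errors are concentrated in the `on` variation of `new-payment-processor`" in one group-by.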
3. Analytics & Experimentation (Green)
The Analytics & Experimentation pillar helps teams make data-driven decisions:
Experimentation: Full-featured A/B testing and multivariate experiments. Run controlled experiments to measure the impact of features on business metrics.
Product Analytics: Warehouse-native analytics that integrates with your data infrastructure (like Snowflake) to provide deep insights into user behavior.
Metrics: Track both engineering metrics (error rates, latency) and business metrics (conversion, revenue, engagement).
Guarded Rollouts: These also appear here because, while primarily a release mechanism, Guarded Rollouts use experimentation methodology to automatically detect regressions during rollouts.
The Experimentation pillar transforms feature flags from simple on/off switches into scientific instruments for measuring impact.
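The statistics behind a basic A/B test are worth seeing once. This is a generic two-proportion z-test for comparing conversion rates between control and treatment, not LaunchDarkly's actual analysis engine (which handles more than this sketch does):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control converts 120/1000, treatment 150/1000.
z = two_proportion_z(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(round(z, 2))
```

For this sample, z comes out just under 1.96, right at the edge of significance at the 95% level — a good reminder of why experiments need adequate sample sizes before you call a winner.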
4. AI Configs (Purple)
The introduction of AI Configs marks a shift from simply creating and storing feature flag values to managing configuration for Large Language Models (LLMs). This opens up pretty neat opportunities like customizing, testing, and rolling out new LLMs without redeploying.
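One way to picture an AI Config is as a versioned bundle of model settings served per variation, just like a flag. The structure and names below are a hypothetical sketch, not LaunchDarkly's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIConfig:
    """Illustrative bundle of LLM settings that can be swapped at runtime."""
    model: str
    prompt_template: str
    temperature: float

variations = {
    "control":   AIConfig("model-v1", "Summarize: {text}", 0.2),
    "candidate": AIConfig("model-v2", "Summarize briefly: {text}", 0.7),
}

def get_ai_config(user_in_rollout: bool) -> AIConfig:
    """Serve the candidate model only to users in the rollout cohort."""
    return variations["candidate" if user_in_rollout else "control"]

cfg = get_ai_config(user_in_rollout=False)
print(cfg.model)  # model-v1
```

Because the model, prompt, and parameters live in configuration rather than code, swapping them becomes a targeting decision instead of a deployment.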
How the Pillars are Interconnected
The four pillars aren't just sitting next to each other making awkward small talk. They're in a deeply committed relationship with constant communication. Here's how:
- Release Management -> Observability: When you toggle a flag or start a rollout, observability tools immediately begin tracking the impact. Error rates, traces, and logs are automatically correlated with the flag change.
- Observability -> Analytics: The data collected through monitoring feeds directly into Experimentation and analytics. You're not just watching for errors; you're measuring business impact.
- Analytics -> Release Management: Experiment results inform which variations to roll out. Metrics from guarded rollouts trigger automatic decisions (rollback or continue).
- AI Configs -> All Pillars: AI configurations add a dynamic layer across the ecosystem:
- To Release Management: Model versions, prompts, and parameters can be toggled like features, enabling safe AI deployments.
- To Observability: Track model performance, latency, token usage, and output quality in real-time.
- To Analytics: A/B test different prompts, models, or parameters to optimize AI outcomes and measure business impact.
- Feature Flags Enable Everything: Without the foundational flag management system, none of these capabilities would work. Flags are the control point that makes progressive delivery, real-time monitoring, and controlled Experimentation possible.
The Infrastructure: Flag Delivery Network (FDN)