Anderson Leite

Migrating from New Relic Drop Rules to Pipeline Cloud Rules: A Terraform Guide

The Deprecation Notice

If you're using New Relic's newrelic_nrql_drop_rule resource in Terraform, you've likely seen the deprecation warning on the resource's documentation page: drop rules will be deprecated on January 6th, 2026. That still leaves a few months to migrate to the new newrelic_pipeline_cloud_rule resource.

New Relic's official announcement covers the deprecation timeline in more detail if you want to read up on it.

In this quick article (written while multitasking with some burnt ends for lunch!), I'll walk you through a real-world migration scenario, showing how we moved from the legacy drop rules to the new pipeline cloud rules while keeping the same data-filtering behavior.

Table of Contents

  1. Understanding the Change
  2. Our Legacy Setup
  3. The New Pipeline Cloud Rules
  4. Migration Strategy
  5. Implementation Examples
  6. Testing and Validation
  7. Best Practices

Understanding the Change

Why the Change?

New Relic is consolidating its data management capabilities under the Pipeline Control umbrella. The new system offers:

  • Unified data management: Both cloud rules and gateway rules in one place
  • Better scalability: Designed for modern, high-volume telemetry data
  • Enhanced flexibility: More granular control over data processing
  • Future-proof architecture: Built to support upcoming features

Key Differences

Aspect by aspect, here's how the legacy drop rules compare to the new pipeline cloud rules:

  • Resource name: newrelic_nrql_drop_rule → newrelic_pipeline_cloud_rule
  • NRQL format: SELECT * FROM ... WHERE ... → DELETE FROM ... WHERE ... (in the UI you still write SELECT, but it's converted to DELETE when saved)
  • Management: standalone rules → part of Pipeline Control
  • Scope: account-level only → account and organization level
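
The differences above focus on rules that drop whole events. If you also use action = "drop_attributes" in legacy rules, the mapping is similar: the attribute list moves from the SELECT clause into the DELETE clause. A minimal sketch (the attribute names are made up for illustration):

# Legacy: drop only selected attributes from matching events
resource "newrelic_nrql_drop_rule" "scrub_pii_legacy" {
  action      = "drop_attributes"
  description = "Remove PII attributes from production logs"
  nrql        = "SELECT email, ssn FROM Log WHERE environment = 'production'"
}

# New: the same intent expressed as DELETE <attributes> FROM ...
resource "newrelic_pipeline_cloud_rule" "scrub_pii" {
  account_id  = var.new_relic_account_id
  name        = "remove-pii-from-production-logs"
  description = "Remove PII attributes from production logs"
  nrql        = "DELETE email, ssn FROM Log WHERE environment = 'production'"
}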

Our Legacy Setup

At my previous company, we had a Terraform configuration to drop data from our QA Kubernetes cluster. It was set up like this (I removed any sensitive data before publishing, just in case):

# Define Clusters and Namespaces with a Single Enable/Disable Flag
locals {
  namespaces = {
    analytics-qa                   = true,
    azure-workload-identity-system = true,
    cert-manager                   = true,
    etl                            = true,
    kafka                          = true,
    keda                           = true,
    keycloak                       = true,
    kube-system                    = true,
    newrelic                       = true,
    rabbitmq-cluster               = true,
    kyverno                        = true,
    tools                          = true
  }

  drop_rules_enabled = [
    for namespace, enabled in local.namespaces : namespace if enabled
  ]
}

# Single Drop Rule for all types
resource "newrelic_nrql_drop_rule" "drop_data_by_namespace" {
  for_each = toset(local.drop_rules_enabled)

  action      = "drop_data"
  description = "Drop data from QA cluster in ${each.key} namespace"
  nrql        = "SELECT * FROM Log, CustomEvent, Span, Metric WHERE namespace_name = '${each.key}' and cluster_name = 'my-aks-cluster'"
}

# Drop additional data regardless of namespace
resource "newrelic_nrql_drop_rule" "drop_from_all_namespaces" {
  action      = "drop_data"
  description = "Drop additional data from QA cluster, regardless of namespace"
  nrql        = "SELECT * FROM NginxSample, K8sPodSample, K8sNodeSample, K8sContainerSample, NFSSample, WowzaSample, K8sHpaSample, K8sJobSample, NetworkSample, ProcessSample, StorageSample, ContainerSample, K8sVolumeSample, flexStatusSample, K8sClusterSample, K8sCronjobSample, K8sServiceSample, K8sEndpointSample, KafkaBrokerSample, KafkaOffsetSample, K8sDaemonsetSample where clusterName like '%qa%' or entityName LIKE '%qa%' or env LIKE '%non-prod%' or env like '%qa%'"
}

The New Pipeline Cloud Rules

The new resource uses a different NRQL syntax and structure. Here's the equivalent configuration:

# Define Clusters and Namespaces with a Single Enable/Disable Flag
locals {
  namespaces = {
    analytics-qa                   = true,
    azure-workload-identity-system = true,
    cert-manager                   = true,
    etl-qa                         = true,
    kafka                          = true,
    keda                           = true,
    keycloak                       = true,
    kube-system                    = true,
    newrelic                       = true,
    rabbitmq-cluster               = true,
    kyverno                        = true,
    tools                          = true
  }

  drop_rules_enabled = [
    for namespace, enabled in local.namespaces : namespace if enabled
  ]
}

# New Pipeline Cloud Rule for namespace-based filtering
resource "newrelic_pipeline_cloud_rule" "drop_data_by_namespace" {
  for_each = toset(local.drop_rules_enabled)

  name        = "drop-qa-cluster-${each.key}-namespace"
  description = "Drop data from QA cluster in ${each.key} namespace"
  nrql        = "DELETE FROM Log, CustomEvent, Span, Metric WHERE namespace_name = '${each.key}' AND cluster_name = 'my-aks-cluster-name'"
  account_id  = var.new_relic_account_id
}

# New Pipeline Cloud Rule for infrastructure samples
resource "newrelic_pipeline_cloud_rule" "drop_from_all_namespaces" {
  name        = "drop-qa-cluster-infrastructure"
  description = "Drop additional data from QA cluster, regardless of namespace"
  nrql        = "DELETE FROM NginxSample, K8sPodSample, K8sNodeSample, K8sContainerSample, NFSSample, WowzaSample, K8sHpaSample, K8sJobSample, NetworkSample, ProcessSample, StorageSample, ContainerSample, K8sVolumeSample, flexStatusSample, K8sClusterSample, K8sCronjobSample, K8sServiceSample, K8sEndpointSample, KafkaBrokerSample, KafkaOffsetSample, K8sDaemonsetSample WHERE clusterName LIKE '%qa1%' OR entityName LIKE '%qa%' OR env LIKE '%non-prod%' OR env LIKE '%qa%'"
  account_id  = var.new_relic_account_id
}

Key Changes in the Migration

  1. Resource Type: Changed from newrelic_nrql_drop_rule to newrelic_pipeline_cloud_rule
  2. NRQL Syntax:
    • Old: SELECT * FROM ... WHERE ...
    • New: DELETE FROM ... WHERE ...
  3. Required Parameters:
    • Added name parameter (unique identifier)
    • Added account_id parameter
    • Removed action parameter (always "drop_data" in new system)
  4. Naming Convention: Used kebab-case for rule names to improve readability
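
Because the resource type itself changes, there's no moved block you can use here: once you delete the legacy newrelic_nrql_drop_rule blocks, terraform plan will propose destroying those rules. That's usually fine once the new rules are verified, but if you'd rather keep the legacy rules alive for a while (or delete them later via the UI), a removed block (Terraform 1.7+) is one way to forget them from state without destroying them. A minimal sketch:

# Forget the legacy rules from state without deleting them in New Relic
removed {
  from = newrelic_nrql_drop_rule.drop_data_by_namespace

  lifecycle {
    destroy = false
  }
}

removed {
  from = newrelic_nrql_drop_rule.drop_from_all_namespaces

  lifecycle {
    destroy = false
  }
}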

Migration Strategy

Step 1: Plan the Migration

Before starting the migration, ensure you have:

  1. Inventory of existing rules: Document all current drop rules
  2. Testing environment: Set up a test account if possible
  3. Rollback plan: Keep backup of old configuration
  4. Monitoring setup: Track data ingestion before and after migration

Step 2: Create Migration Configuration

Create a new Terraform configuration file alongside your existing one:

# variables.tf
variable "new_relic_account_id" {
  description = "New Relic Account ID"
  type        = string
}

variable "enable_pipeline_rules" {
  description = "Feature flag to enable new pipeline rules"
  type        = bool
  default     = false
}

# versions.tf
terraform {
  required_version = ">= 1.10"
  required_providers {
    newrelic = {
      source  = "newrelic/newrelic"
      version = ">= 3.74.0"  # Ensure you have the latest version
    }
  }
}
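
All of the examples reference var.new_relic_account_id, so the provider itself needs credentials too. A minimal provider block could look like this (var.new_relic_api_key is not part of the original setup, it's an assumption here; the provider can also pick up NEW_RELIC_API_KEY and NEW_RELIC_ACCOUNT_ID from environment variables):

# provider.tf
provider "newrelic" {
  account_id = var.new_relic_account_id
  api_key    = var.new_relic_api_key # User API key (NRAK-...)
  region     = "US"                  # or "EU", depending on your account
}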

Step 3: Implement Feature Toggle

Use a feature flag to switch between old and new rules:

# main.tf
locals {
  use_pipeline_rules = var.enable_pipeline_rules
}

# Old rules (will be removed after migration)
resource "newrelic_nrql_drop_rule" "drop_data_by_namespace" {
  for_each = local.use_pipeline_rules ? toset([]) : toset(local.drop_rules_enabled)

  action      = "drop_data"
  description = "Drop data from QA cluster in ${each.key} namespace"
  nrql        = "SELECT * FROM Log, CustomEvent, Span, Metric WHERE namespace_name = '${each.key}' and cluster_name = 'my-aks-cluster-name'"
}

# New rules
resource "newrelic_pipeline_cloud_rule" "drop_data_by_namespace" {
  for_each = local.use_pipeline_rules ? toset(local.drop_rules_enabled) : toset([])

  name        = "drop-qa-cluster-${each.key}-namespace"
  description = "Drop data from QA cluster in ${each.key} namespace"
  nrql        = "DELETE FROM Log, CustomEvent, Span, Metric WHERE namespace_name = '${each.key}' AND cluster_name = 'my-aks-cluster-name'"
  account_id  = var.new_relic_account_id
}
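
With the toggle in place, the cutover itself is just a variable flip, for example via a tfvars file (the filename is only a suggestion):

# cutover.tfvars
enable_pipeline_rules = true

Running terraform apply -var-file=cutover.tfvars creates the pipeline rules and destroys the legacy drop rules in the same run; setting the flag back to false reverses the change.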

Implementation Examples

Example 1: Drop Attributes from Specific Data Types

If you need to drop specific attributes instead of entire events and on specific environments (in this example, from production):

resource "newrelic_pipeline_cloud_rule" "drop_sensitive_attributes" {
  name        = "drop-sensitive-data"
  description = "Remove PII from logs in production"
  nrql        = "DELETE email, ssn, credit_card FROM Log WHERE environment = 'production'"
  account_id  = var.new_relic_account_id
}

Example 2: Complex Filtering Rules

For more complex scenarios with multiple conditions:

resource "newrelic_pipeline_cloud_rule" "complex_drop_rule" {
  name        = "drop-complex-conditions"
  description = "Drop data based on multiple conditions"
  nrql        = <<-EOT
    DELETE FROM Metric, Span 
    WHERE (service_name IN ('test-service', 'debug-service') 
           AND environment = 'development')
       OR (error_rate > 0.95 AND environment = 'qa')
  EOT
  account_id  = var.new_relic_account_id
}

Example 3: Using for_each for Multiple Rules

locals {
  drop_rules = {
    "drop-test-logs" = {
      description = "Drop test environment logs"
      nrql       = "DELETE FROM Log WHERE environment = 'test'"
    }
    "drop-debug-metrics" = {
      description = "Drop debug metrics"
      nrql       = "DELETE FROM Metric WHERE debug = true"
    }
    "drop-staging-traces" = {
      description = "Drop staging traces older than 1 hour"
      nrql       = "DELETE FROM Span WHERE environment = 'staging' AND timestamp < (now() - 3600000)"
    }
  }
}

resource "newrelic_pipeline_cloud_rule" "dynamic_rules" {
  for_each = local.drop_rules

  name        = each.key
  description = each.value.description
  nrql        = each.value.nrql
  account_id  = var.new_relic_account_id
}

Testing and Validation

Verify Rules Are Working

After creating pipeline cloud rules, validate they're working correctly:

-- Check if data is being dropped (should see count drop to 0)
SELECT count(*) FROM Log 
WHERE namespace_name = 'analytics-qa' 
  AND cluster_name = 'my-aks-cluster-name'
TIMESERIES SINCE 30 minutes ago

-- Verify other data is not affected
SELECT count(*) FROM Log 
WHERE namespace_name != 'analytics-qa'
TIMESERIES SINCE 30 minutes ago

Monitor Data Ingestion

Create a dashboard to monitor the impact:

resource "newrelic_one_dashboard" "pipeline_rules_monitor" {
  name        = "Pipeline Rules Monitoring"
  description = "Monitor the impact of pipeline cloud rules"

  page {
    name = "Data Ingestion Overview"

    widget_line {
      title  = "Data Ingestion by Namespace"
      row    = 1
      column = 1
      width  = 6
      height = 3

      nrql_query {
        query = <<-EOT
          SELECT count(*) 
          FROM Log, Metric, Span 
          FACET namespace_name 
          TIMESERIES AUTO
        EOT
      }
    }

    widget_billboard {
      title  = "Data Still Arriving From Dropped Namespaces (should approach 0)"
      row    = 1
      column = 7
      width  = 3
      height = 3

      nrql_query {
        query = <<-EOT
          SELECT filter(count(*), WHERE namespace_name IN (${join(",", [for ns in keys(local.namespaces) : "'${ns}'"])}))
          FROM Log, Metric, Span 
          SINCE 1 hour ago
        EOT
      }
    }
  }
}

Best Practices

1. Use Descriptive Names

Pipeline cloud rules require unique names. Use a consistent naming convention:

# Good naming examples
name = "drop-${environment}-${data_type}-${condition}"
name = "drop-qa-logs-by-namespace"
name = "remove-pii-from-production-logs"

# Avoid generic names
name = "rule1"  # Bad
name = "drop-data"  # Too generic

2. Document Your Rules

Always include meaningful descriptions:

resource "newrelic_pipeline_cloud_rule" "example" {
  name        = "drop-high-volume-debug-logs"
  description = "Reduces data ingestion costs by dropping verbose debug logs from non-production environments. This rule saves approximately 500GB/day."
  nrql        = "DELETE FROM Log WHERE log_level = 'DEBUG' AND environment != 'production'"
  account_id  = var.new_relic_account_id
}

3. Test in Non-Production First

Always test your rules in a non-production environment:

# Use environment-specific configurations
resource "newrelic_pipeline_cloud_rule" "test_rule" {
  count = var.environment == "test" ? 1 : 0

  name        = "test-drop-rule-${var.environment}"
  description = "Testing pipeline rule in ${var.environment}"
  nrql        = "DELETE FROM Log WHERE test_flag = true"
  account_id  = var.new_relic_account_id
}

4. Monitor Rule Performance

Track the impact of your rules on data ingestion:

-- Query to monitor data dropped by rules
SELECT bytecountestimate() 
FROM Log, Metric, Span 
WHERE namespace_name IN ('analytics-qa', 'etl-qa', 'tools')
FACET namespace_name
SINCE 1 day ago
COMPARE WITH 1 day ago

5. Gradual Migration

Migrate rules gradually to minimize risk:

# Phase 1: Migrate low-impact rules
# Phase 2: Migrate medium-impact rules
# Phase 3: Migrate critical rules

locals {
  migration_phase = 1

  rules_by_phase = {
    "1" = ["test", "development"]
    "2" = ["staging", "qa"]
    "3" = ["production"]
  }

  active_environments = flatten([
    for phase, envs in local.rules_by_phase :
    envs if tonumber(phase) <= local.migration_phase
  ])
}
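
The locals above compute active_environments but stop short of using it. One way to wire it in is to drive for_each from that list, so bumping migration_phase creates the next batch of rules on the following apply (the rule body below is only illustrative):

resource "newrelic_pipeline_cloud_rule" "phased" {
  for_each = toset(local.active_environments)

  account_id  = var.new_relic_account_id
  name        = "drop-${each.key}-debug-logs"
  description = "Drop debug logs from the ${each.key} environment (phase-gated rollout)"
  nrql        = "DELETE FROM Log WHERE log_level = 'DEBUG' AND environment = '${each.key}'"
}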

Common Pitfalls and Solutions

Pitfall 1: NRQL Syntax Differences

Problem: Confusion between SELECT and DELETE syntax

# Wrong - using old drop rule syntax
nrql = "SELECT * FROM Log WHERE environment = 'test'"

# Correct - pipeline cloud rules use DELETE
nrql = "DELETE FROM Log WHERE environment = 'test'"

Note: If you're testing in the New Relic UI, you'll write SELECT queries which are automatically converted to DELETE when saved. However, in Terraform, you must use DELETE syntax directly.

Pitfall 2: Case Sensitivity

Problem: NRQL keywords must be uppercase

# Wrong
nrql = "delete from Log where environment = 'test'"

# Correct
nrql = "DELETE FROM Log WHERE environment = 'test'"

Pitfall 3: Missing Account ID

Problem: Forgetting to specify account_id

# This will fail
resource "newrelic_pipeline_cloud_rule" "example" {
  name        = "drop-test-logs"
  description = "Drop test logs"
  nrql        = "DELETE FROM Log WHERE environment = 'test'"
  # Missing: account_id = var.new_relic_account_id
}
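
And the corrected version with the account ID in place:

# Works: account_id provided
resource "newrelic_pipeline_cloud_rule" "example" {
  name        = "drop-test-logs"
  description = "Drop test logs"
  nrql        = "DELETE FROM Log WHERE environment = 'test'"
  account_id  = var.new_relic_account_id
}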

Rollback Plan

If issues arise during migration, have a rollback plan ready:

# Keep old rules in a separate file: legacy_drop_rules.tf.backup
# Quick rollback procedure:
# 1. Set enable_pipeline_rules = false
# 2. Run terraform apply
# 3. Verify old rules are active
# 4. Monitor data ingestion

# Emergency rollback script
#!/bin/bash
terraform apply -var="enable_pipeline_rules=false" -auto-approve
echo "Rollback completed. Monitoring data ingestion..."

Timeline for Migration

Here's a suggested timeline for your migration:

  1. Now - Q1 2025: Test new pipeline cloud rules in development
  2. Q2 2025: Begin migration in staging environments
  3. Q3 2025: Migrate production rules with careful monitoring
  4. Q4 2025: Complete migration and remove old resources
  5. Before Jan 6, 2026: Final verification and cleanup

Conclusion

Migrating from newrelic_nrql_drop_rule to newrelic_pipeline_cloud_rule is straightforward but requires careful planning. The new Pipeline Cloud Rules offer better integration with New Relic's evolving data management platform and will ensure your infrastructure remains compatible with future updates.

Key takeaways:

  • Start migration early to allow time for testing
  • Use feature flags for gradual rollout
  • Monitor the impact on your data ingestion
  • Keep detailed documentation of your rules
  • Have a rollback plan ready


Have you started your migration yet? Share your experiences and tips in the comments below!

#newrelic #terraform #devops #observability #monitoring #infrastructure #cloudnative #kubernetes
