Matheus Bernardes Spilari

How to Send Email Alerts for Failures in Spring Boot Using Prometheus and Alertmanager

Monitoring resilience patterns like Circuit Breakers is critical to detect downstream failures early. In this tutorial, we’ll walk through a real-world setup using Prometheus and Alertmanager to send email alerts when a Circuit Breaker is frequently triggered in a Spring Boot application.


Stack Overview

  • Spring Boot: exposes metrics via /actuator/prometheus (see the snippet after this list)
  • Resilience4j: manages the Circuit Breaker
  • Prometheus: collects metrics
  • Alertmanager: triggers email alerts
  • Mailtrap: email sandbox (SMTP server)
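If you are building the project from scratch: with spring-boot-starter-actuator and micrometer-registry-prometheus on the classpath, exposing the metrics endpoint only takes a few application.yaml properties. A minimal sketch (your project from the earlier posts may already include this):

# application.yaml (sketch): expose the Prometheus endpoint via the actuator
management:
  endpoints:
    web:
      exposure:
        include: health,prometheus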

Configuration

  • The circuit breaker configuration is available in this post
  • The Prometheus configuration is available in this post

1. Create the Alertmanager container

In our docker-compose.yaml file, add this:

prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    networks:
      - app_network
    volumes:
      - ./prometheus/prometheus.yaml:/etc/prometheus/prometheus.yml
      - ./alertManager/alerts.yaml:/etc/prometheus/alerts.yaml
      - prometheus_data:/prometheus

alertmanager:
    image: prom/alertmanager:latest
    container_name: alertmanager
    networks:
      - app_network
    ports:
      - "9093:9093"
    volumes:
      - ./alertManager/alertManager.yaml:/etc/alertmanager/alertmanager.yml

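Both services reference the app_network network and the prometheus_data volume. If your docker-compose.yaml does not already declare them, add the top-level blocks as well (a minimal sketch):

# Top-level declarations referenced by the services above
networks:
  app_network:
    driver: bridge

volumes:
  prometheus_data: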

2. Configuring Prometheus

In the prometheus.yaml file, add:


alerting:
  alertmanagers:
    - static_configs:
        - targets: 
          - alertmanager:9093

rule_files:
  - "alerts.yaml"
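The alert rules in the next step filter on job="spring-boot-app", so they assume prometheus.yaml already contains a scrape job with that name (it comes from the Prometheus setup linked above). For reference, such an entry looks roughly like this; the target host and port are assumptions and should match your own compose service:

scrape_configs:
  - job_name: "spring-boot-app"
    metrics_path: "/actuator/prometheus"
    static_configs:
      - targets: ["app:8080"]   # replace with your Spring Boot service name and port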

3. Create Alert Rules

Create the file alertManager/alerts.yaml (mounted into the Prometheus container as /etc/prometheus/alerts.yaml in the compose file above):

groups:
  - name: spring-boot-alerts
    rules:
      - alert: HighErrorRate
        expr: rate(http_server_requests_seconds_count{status=~"5.."}[1m]) > 0.5
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: "5xx error rate too high"
          description: "The application returned 5xx errors at a rate above 0.5 requests per second over the last minute."

      - alert: InstanceDown
        expr: up{job="spring-boot-app"} == 0
        for: 30s
        labels:
          severity: critical
        annotations:
          summary: "Spring Boot instance is down."
          description: "One of the instances of the application is not responding."

      - alert: CircuitBreakerTripped
        expr: rate(http_server_requests_seconds_count{status="503"}[1m]) > 0.2
        for: 30s
        labels:
          severity: warning
        annotations:
          summary: "Circuit Breaker frequently activated."
          description: "The application is frequently responding with status 503, which indicates the circuit breaker is rejecting calls because of downstream issues."

      - alert: RateLimitExceeded
        expr: rate(http_server_requests_seconds_count{status="429"}[1m]) > 0
        for: 30s
        labels:
          severity: info
        annotations:
          summary: "Rate limit exceeded."
          description: "NGINX returned 429 (Too Many Requests), which means the request rate limit was exceeded."

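The CircuitBreakerTripped rule above infers an open breaker from the rate of 503 responses. If your application also exports Resilience4j's own Micrometer metrics, you can alert on the breaker state directly. A sketch to append under the same rules: list; the gauge name follows the usual resilience4j-micrometer naming, so double-check it against your /actuator/prometheus output:

      - alert: CircuitBreakerOpen
        expr: resilience4j_circuitbreaker_state{state="open"} == 1
        for: 30s
        labels:
          severity: warning
        annotations:
          summary: "Circuit Breaker is open."
          description: "The circuit breaker {{ $labels.name }} is currently open."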

4. Configure Alertmanager with Mailtrap

In alertManager/alertManager.yaml (mounted into the container as /etc/alertmanager/alertmanager.yml):

global:
  smtp_smarthost: 'sandbox.smtp.mailtrap.io:2525'
  smtp_from: 'alertmanager@demo.com'
  smtp_auth_username: 'your_mailtrap_username'
  smtp_auth_password: 'your_mailtrap_password'

route:
  receiver: email-alert

receivers:
  - name: email-alert
    email_configs:
      - to: 'devs@empresa.com'
        send_resolved: true
        headers:
          subject: '[Application Alert] {{ .CommonAnnotations.summary }}'
        html: |
          <h2>[{{ .Status | toUpper }}] {{ .CommonAnnotations.summary }}</h2>
          <p>{{ .CommonAnnotations.description }}</p>
          <hr />
          <ul>
          {{ range .Alerts }}
            <li>
              <strong>Alert:</strong> {{ .Labels.alertname }}<br/>
              <strong>Instance:</strong> {{ .Labels.instance }}<br/>
              <strong>Severity:</strong> {{ .Labels.severity }}<br/>
              <strong>Starts at:</strong> {{ .StartsAt }}
            </li>
          {{ end }}
          </ul>
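The route above sends every alert to a single receiver. Since the rules already label alerts with a severity (info, warning, critical), you can also group notifications and re-notify critical alerts more aggressively. A sketch using the matchers syntax available in Alertmanager 0.22+; the intervals are just example values:

route:
  receiver: email-alert
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  routes:
    - matchers:
        - severity="critical"
      receiver: email-alert
      repeat_interval: 30m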

✅ Tip: You can test it with Mailtrap (https://mailtrap.io) before connecting to a real SMTP provider.


5. Testing the Alert

Simulate a downstream failure in your Spring Boot service and trip the circuit breaker.

Then check:

  • The Prometheus alerts page (http://localhost:9090/alerts), where the alert should move from pending to firing
  • The Alertmanager UI (http://localhost:9093), to confirm the alert was received
  • Your Mailtrap inbox, where the alert email should arrive

Conclusion

With this setup, you now have a powerful and flexible way to monitor Circuit Breaker activity in your Spring Boot applications using Prometheus and Alertmanager.

By combining metrics from resilience4j with alerting rules, you can detect fault patterns early and react quickly to downstream issues—before they escalate.

While we used Mailtrap as a safe sandbox to test email alerts, you can easily switch to real-world notification channels by updating your alertmanager.yml. For example:

  • Gmail: for production email notifications using app passwords

  • Slack: for team-wide real-time alerting

  • Microsoft Teams, Opsgenie, or PagerDuty: for advanced incident management

This flexibility makes the Prometheus + Alertmanager stack a reliable choice for both development and production observability.
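For instance, adding a Slack receiver only takes a slack_configs block in alertmanager.yml; the webhook URL and channel below are placeholders:

receivers:
  - name: slack-alert
    slack_configs:
      - api_url: 'https://hooks.slack.com/services/XXX/YYY/ZZZ'  # placeholder webhook URL
        channel: '#alerts'
        send_resolved: true
        title: '[{{ .Status | toUpper }}] {{ .CommonAnnotations.summary }}'
        text: '{{ .CommonAnnotations.description }}'

route:
  receiver: slack-alert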


📍 Reference

💻 Project Repository

👋 Talk to me
