Building a SIEM-Style Threat Detection Dashboard Using ELK Stack and Docker
Dipesh Kumar

In modern cybersecurity operations, centralized log collection and real-time visibility are essential for identifying suspicious behavior before it turns into a real incident. Security teams rely heavily on log analysis platforms to detect failed logins, brute-force attempts, abnormal DNS activity, and other indicators of compromise.

To better understand how this works in practice, I built a SIEM-style threat detection lab using the ELK Stack (Elasticsearch, Logstash, Kibana) deployed with Docker. The goal of this project was to ingest logs, simulate attack patterns, and visualize security events through a dashboard that could support threat hunting and incident response.

This hands-on project gave me practical exposure to:
log ingestion and parsing
dashboard creation in Kibana
basic detection engineering
attack simulation
security monitoring workflows

🎯 Objectives:
The main goals of this project were:

- Deploy the ELK Stack using Docker
- Configure Logstash pipelines for log ingestion
- Forward logs from a macOS host
- Simulate suspicious activity from Kali Linux
- Detect attack patterns such as:
  - credential stuffing
  - brute-force behavior
  - suspicious DNS activity
- Build interactive dashboards in Kibana

🖥️ Lab Environment:
This project was implemented in a small lab setup using the following environment:

| Component | Specification |
| --- | --- |
| Host System | macOS |
| ELK Deployment | Docker Desktop |
| ELK Version | 8.x |
| Attacker Machine | Kali Linux VM |
| Log Source | macOS system/application logs |
| Memory Allocation | 8 GB for Docker |
| Storage | 50 GB free space |

πŸ—οΈ Architecture Overview:
The overall setup was designed to simulate a basic security monitoring workflow.

Kali Linux (Attacker)
        |
        |  Simulated Attack Traffic
        v
macOS Host (Docker Desktop + Log Sources)
  ├── Elasticsearch
  ├── Logstash
  ├── Kibana
  └── Filebeat / Native Log Forwarding

This setup allowed me to generate both normal and malicious traffic, send logs into the ELK pipeline, and monitor the resulting events visually in Kibana.

πŸ› οΈ Tools and Technologies Used
Tool Purpose
Elasticsearch Centralized log storage and search
Logstash Log ingestion and processing
Kibana Visualization and dashboards
Docker Containerized ELK deployment
Filebeat Lightweight log shipper
Kali Linux Attack simulation
macOS Host system and log source
🐳 Deploying ELK Stack with Docker
1) Verify Docker Installation:
Before starting, I verified that Docker was installed correctly:
docker --version
docker compose version

2) Create the Project Directory:
mkdir elk-stack && cd elk-stack
nano docker-compose.yml

3) Docker Compose Configuration:
I used the following docker-compose.yml file to deploy Elasticsearch, Logstash, and Kibana.

version: '3.8'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms2g -Xmx2g
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - elk-network

  logstash:
    image: docker.elastic.co/logstash/logstash:8.11.0
    container_name: logstash
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
      - ./logstash/config:/usr/share/logstash/config
    ports:
      - "5000:5000"
      - "5044:5044"
      - "9600:9600"
    depends_on:
      - elasticsearch
    networks:
      - elk-network

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - elk-network

volumes:
  elasticsearch_data:
    driver: local

networks:
  elk-network:
    driver: bridge

Note: For lab simplicity, xpack.security.enabled=false was used. In a production environment, authentication, TLS, and role-based access control should always be enabled.
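
For reference, re-enabling security in this compose file mostly means flipping a few Elasticsearch environment settings. A minimal sketch of what that would look like (the password and keystore path are placeholders, and Kibana would also need matching credentials and CA configuration):

```yaml
# Sketch only: hardened Elasticsearch environment for production use
environment:
  - discovery.type=single-node
  - ES_JAVA_OPTS=-Xms2g -Xmx2g
  - xpack.security.enabled=true
  - ELASTIC_PASSWORD=change-me                             # placeholder; use a secret store
  - xpack.security.http.ssl.enabled=true
  - xpack.security.http.ssl.keystore.path=certs/http.p12   # placeholder path
```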

4) Create the Logstash Pipeline:
mkdir -p logstash/pipeline
nano logstash/pipeline/logstash.conf

I used the following Logstash configuration:

input {
  tcp {
    port => 5000
    codec => json
    type => "application_logs"
  }

  udp {
    port => 5001
    type => "syslog"
  }

  beats {
    port => 5044
  }
}

filter {
  date {
    match => ["timestamp", "ISO8601"]
    target => "@timestamp"
  }

  grok {
    match => { "message" => "%{IP:client_ip}" }
  }

  if [message] =~ /login.*failed/ {
    mutate {
      add_tag => ["credential_stuffing_candidate"]
    }
  }

  if [message] =~ /dns.*query/ {
    mutate {
      add_tag => ["dns_activity"]
    }
  }

  geoip {
    source => "client_ip"
    target => "geo"
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }

  stdout {
    codec => rubydebug
  }
}

This pipeline was used to:

parse timestamps
extract IP addresses
tag suspicious login failures
identify DNS-related activity
enrich IP addresses with GeoIP data
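
The filter stage can also be exercised outside Logstash, which makes it easier to iterate on detection patterns before redeploying the pipeline. This is a hypothetical Python mirror of the same logic (the regexes match the pipeline's; the function name is my own):

```python
import re

# Regexes mirroring the Logstash filter stage above
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
FAILED_LOGIN_RE = re.compile(r"login.*failed")
DNS_QUERY_RE = re.compile(r"dns.*query")

def enrich(event: dict) -> dict:
    """Extract a client IP and add tags, like the grok/mutate filters."""
    message = event.get("message", "")
    tags = []
    ip_match = IP_RE.search(message)
    if ip_match:
        event["client_ip"] = ip_match.group(0)
    if FAILED_LOGIN_RE.search(message):
        tags.append("credential_stuffing_candidate")
    if DNS_QUERY_RE.search(message):
        tags.append("dns_activity")
    event["tags"] = tags
    return event

print(enrich({"message": "login failed for user admin from 192.168.1.100"}))
```

Running sample log lines through this function is a quick sanity check that a pattern will tag the events you expect before it goes into `logstash.conf`.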

5) Start the ELK Stack:
docker compose up -d

To verify everything was running correctly:

docker ps

Expected containers:
Elasticsearch
Logstash
Kibana

6) Verify Services:
Check Elasticsearch:

curl http://localhost:9200

Then open Kibana in the browser:

http://localhost:5601

📥 Collecting Logs from macOS
1) Install Filebeat (Optional)

To forward logs more efficiently, Filebeat can be installed:

brew install elastic/tap/filebeat-full

2) Configure Filebeat

Edit the Filebeat configuration:

nano /usr/local/etc/filebeat/filebeat.yml

Use the following configuration:

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/system.log
      - /var/log/apache2/*.log
      - /var/log/nginx/*.log

output.logstash:
  hosts: ["localhost:5044"]

3) Start Filebeat
sudo filebeat -e -c /usr/local/etc/filebeat/filebeat.yml
4) Send a Test Log

To confirm that logs were reaching Logstash correctly:

echo '{"timestamp":"2024-01-15T10:30:00","message":"login failed for user admin from 192.168.1.100"}' | nc localhost 5000

If everything is working correctly, the event should appear in Elasticsearch and later in Kibana.
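
For repeated testing, the same event can be produced from a small script instead of hand-typed JSON. A sketch (the host, port, and field names simply match the TCP input configured earlier; actually sending requires the stack to be running):

```python
import json
import socket
from datetime import datetime, timezone

def make_event(message: str) -> str:
    """Build a JSON log line in the format the Logstash TCP input expects."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "message": message,
    })

def send_event(line: str, host: str = "localhost", port: int = 5000) -> None:
    """Send one newline-terminated event to the Logstash TCP input."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(line.encode() + b"\n")

# Example (uncomment when the stack is up):
# send_event(make_event("login failed for user admin from 192.168.1.100"))
print(make_event("login failed for user admin from 192.168.1.100"))
```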

🎯 Attack Simulation

To test the detection pipeline, I simulated suspicious behavior from a Kali Linux machine and from generated log events.

1) Credential Stuffing / Failed Login Simulation

A simple way to simulate repeated failed login events:

for i in {1..100}; do
  echo "{\"timestamp\":\"$(date -Is)\",\"message\":\"login failed for user test$i from 192.168.1.$((RANDOM % 255))\"}" | nc localhost 5000
done

This creates a spike of failed authentication events that can be detected and visualized.
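
On the detection side, the simplest version of this rule is a failed-login count per source IP over a sliding time window. A sketch of that logic (the 5-minute window and threshold of 10 are illustrative values I chose, not something the pipeline enforces):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def failed_login_spikes(events, window=timedelta(minutes=5), threshold=10):
    """Flag IPs with more than `threshold` failed logins inside any `window`.

    `events` is an iterable of (timestamp, client_ip) tuples for
    failed-login events, e.g. pulled from Elasticsearch."""
    by_ip = defaultdict(list)
    for ts, ip in events:
        by_ip[ip].append(ts)

    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # Advance the window start until the span fits inside `window`
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 > threshold:
                flagged.add(ip)
                break
    return flagged

# 20 failures in 20 seconds from one IP should trip the rule
base = datetime(2024, 1, 15, 10, 30)
burst = [(base + timedelta(seconds=i), "192.168.1.100") for i in range(20)]
print(failed_login_spikes(burst))  # → {'192.168.1.100'}
```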

2) Simulated Brute Force Testing

For brute-force style testing, tools such as Hydra can be used in a lab environment:

sudo apt install hydra -y

Example concept:

hydra -L users.txt -P passwords.txt <target-ip> http-post-form "/login:user=^USER^&pass=^PASS^:F=incorrect"

This was used only in a controlled lab environment for detection testing.

3) Suspicious DNS Activity Simulation

To simulate suspicious DNS-style logs:

for i in {1..50}; do
  # LC_ALL=C prevents "illegal byte sequence" errors from tr on macOS
  random_string=$(LC_ALL=C tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)
  echo "{\"timestamp\":\"$(date -Is)\",\"message\":\"dns query for $random_string.malicious.com\"}" | nc localhost 5000
done

This helped simulate basic indicators often associated with suspicious DNS activity.

Important: This is not full DNS tunneling detection, but rather a lab-based approximation using long/suspicious DNS query patterns.
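
A slightly stronger version of this heuristic scores the query label itself: long labels and high character entropy are both weak indicators of DGA or tunneling traffic. A sketch (the length and entropy cutoffs here are arbitrary lab values, not tuned thresholds):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character of the string's character distribution."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def suspicious_dns_label(hostname: str, max_len=20, min_entropy=3.5) -> bool:
    """Flag hostnames whose leftmost label is unusually long or random-looking."""
    label = hostname.split(".")[0]
    return len(label) > max_len or shannon_entropy(label) > min_entropy

print(suspicious_dns_label("x7kq9zplm2rt8wn4vb6yc1hd5jf3gs0a.malicious.com"))  # → True
print(suspicious_dns_label("mail.example.com"))                                # → False
```

Real DNS tunneling detection would also look at query volume, record types, and unique-subdomain counts per domain; this only scores individual names.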

🧠 Threat Detection Logic

This project focused on detecting suspicious activity using simple but useful logic.

Credential Stuffing Detection

Credential stuffing behavior was approximated by detecting a high volume of failed login attempts in a short period of time.

Detection idea:
repeated "login failed" events
clustering by source IP
spikes over time

Example detection query:

{
  "name": "Credential Stuffing Detection",
  "index": "logs-*",
  "query": {
    "bool": {
      "must": [
        { "match": { "message": "login failed" } }
      ],
      "filter": [
        { "range": { "@timestamp": { "gte": "now-5m" } } }
      ]
    }
  }
}

Suspicious DNS Activity Detection

Suspicious DNS activity was simulated by generating log entries containing long, random-looking subdomains.

Detection idea:
DNS query logs
unusually long domain strings
repeated unusual query behavior

Example detection logic:

{
  "name": "Suspicious DNS Activity",
  "index": "logs-*",
  "query": {
    "bool": {
      "must": [
        { "match": { "message": "dns query" } }
      ]
    }
  }
}

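
Both rule bodies share the same shape, so in practice it is easier to generate them than to hand-edit JSON. A hedged sketch of a builder for the inner query (the surrounding name/index wrapper above is this post's own convention, not an Elasticsearch API):

```python
import json

def detection_query(phrase, window=None):
    """Build a bool query matching `phrase` in the message field,
    optionally limited to an Elasticsearch date-math window like "now-5m"."""
    bool_clause = {"must": [{"match": {"message": phrase}}]}
    if window:
        bool_clause["filter"] = [{"range": {"@timestamp": {"gte": window}}}]
    return {"query": {"bool": bool_clause}}

print(json.dumps(detection_query("login failed", "now-5m"), indent=2))
```

The resulting body can be POSTed to the lab's `logs-*/_search` endpoint, e.g. with curl, to run the detection on demand.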
📊 Building the Kibana Dashboard

Once logs were being ingested successfully, I created a Security Monitoring Dashboard in Kibana.

1) Create an Index Pattern

In Kibana:

- Go to Stack Management
- Open Index Patterns / Data Views
- Create a data view matching logs-*
- Select @timestamp as the time field

2) Create Visualizations

I built the following visualizations:

- Failed Logins Over Time
  - Type: Line Chart
  - X-axis: @timestamp
  - Y-axis: count of failed login events
- Top Attack Sources
  - Type: Horizontal Bar Chart
  - Bucket: client_ip
- Event Types Distribution
  - Type: Pie Chart
  - Bucket: message.keyword
- Geographic Source Distribution
  - Type: Map / Coordinate Visualization
  - Based on GeoIP data

3) Dashboard Layout

The dashboard was designed like this:

┌───────────────────────────────────────────────────────────┐
│               Security Monitoring Dashboard               │
├─────────────────────────────┬─────────────────────────────┤
│ Failed Logins Over Time     │ Top Attack Sources          │
│ [Line Chart]                │ [Bar Chart]                 │
├─────────────────────────────┼─────────────────────────────┤
│ Event Types Distribution    │ Geographic Distribution     │
│ [Pie Chart]                 │ [Map]                       │
├─────────────────────────────┴─────────────────────────────┤
│ Recent Alerts / Log Table                                 │
└───────────────────────────────────────────────────────────┘

This dashboard made it much easier to quickly identify spikes, suspicious IPs, and recurring event types.

🧪 Testing and Validation

To validate the setup, I performed multiple tests.

1) Verify Elasticsearch Indices
curl http://localhost:9200/_cat/indices
2) Search Ingested Logs
curl -X GET "http://localhost:9200/logs-*/_search?pretty"
3) Confirm Detection Visibility in Kibana

Expected observations:

spike in failed logins
suspicious DNS-related events visible
top source IPs shown in charts
logs searchable through Kibana Discover

📈 Detection Results:
The dashboard successfully surfaced suspicious behavior during testing.

| Attack / Activity | Detection Method | Status |
| --- | --- | --- |
| Credential Stuffing | Failed login spike | ✅ Detected |
| Brute-Force Activity | Repeated authentication attempts | ✅ Detected |
| Suspicious DNS Activity | Long/random DNS query patterns | ✅ Detected |
| Suspicious IP Visibility | GeoIP enrichment + IP grouping | ✅ Identified |

πŸ›‘οΈ Incident Response Workflow:
Once suspicious activity was identified, the following investigation workflow was used.

1) Investigate in Kibana

Example query:

client_ip: "192.168.1.100"

Time-based filtering:

@timestamp: [now-1h TO now]

Specific failed login investigation:

message: "login failed" AND client_ip: "192.168.1.100"

2) Correlate with Other Logs
client_ip: "192.168.1.100" OR source_ip: "192.168.1.100"

This helped determine whether the same IP appeared across multiple suspicious events.
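
The same cross-log correlation can be scripted once matching events are exported from Elasticsearch. A sketch (the field and tag names match the pipeline above; the sample event list is made up):

```python
from collections import defaultdict

def ips_with_multiple_behaviors(events):
    """Return IPs whose events carry more than one distinct tag,
    e.g. both failed logins and DNS activity."""
    tags_by_ip = defaultdict(set)
    for event in events:
        ip = event.get("client_ip") or event.get("source_ip")
        if ip:
            tags_by_ip[ip].update(event.get("tags", []))
    return {ip: sorted(tags) for ip, tags in tags_by_ip.items() if len(tags) > 1}

events = [
    {"client_ip": "192.168.1.100", "tags": ["credential_stuffing_candidate"]},
    {"source_ip": "192.168.1.100", "tags": ["dns_activity"]},
    {"client_ip": "10.0.0.5", "tags": ["dns_activity"]},
]
print(ips_with_multiple_behaviors(events))
# → {'192.168.1.100': ['credential_stuffing_candidate', 'dns_activity']}
```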

3) Example Response Actions
block suspicious IPs
investigate repeated authentication failures
document findings
escalate as needed

Example ticket format:

{
  "title": "Potential Credential Stuffing Activity",
  "source_ip": "192.168.1.100",
  "timestamp": "2024-01-15T10:30:00",
  "severity": "HIGH",
  "recommendation": "Investigate source IP and implement rate limiting"
}
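
Tickets in this format are easy to generate directly from a detection hit. A sketch (the severity cutoff is an invented lab convention, not a standard):

```python
import json
from datetime import datetime, timezone

def make_ticket(source_ip, failed_count, high_watermark=50):
    """Turn a credential-stuffing hit into the ticket format shown above."""
    return {
        "title": "Potential Credential Stuffing Activity",
        "source_ip": source_ip,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "severity": "HIGH" if failed_count >= high_watermark else "MEDIUM",
        "recommendation": "Investigate source IP and implement rate limiting",
    }

print(json.dumps(make_ticket("192.168.1.100", failed_count=100), indent=2))
```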

⚠️ Challenges Faced:
A few practical challenges came up during the project:

| Challenge | Solution |
| --- | --- |
| Docker memory allocation issues | Increased Docker memory to 8 GB |
| Logstash pipeline errors | Validated pipeline configuration carefully |
| Kibana connectivity issues | Verified Docker networking and service dependencies |
| Noisy or weak detections | Tuned detection logic and thresholds |
| Inconsistent log formats | Used parsing and extraction rules |

These issues were useful because they reflected the kind of troubleshooting that often happens in real-world deployments.

📌 Limitations:
This lab setup was useful for learning, but it still has limitations:

i. single-node Elasticsearch only
ii. not production-scale
iii. basic rule logic
iv. no automated response
v. detection quality depends on available logs
vi. security disabled for lab simplicity

This project was built for hands-on learning and detection workflow understanding, not as a production-ready SIEM.

🚀 Future Improvements:
There are several ways this project could be improved:

i. integrate Elastic Agent
ii. enable TLS and authentication
iii. add Slack / email alerting
iv. build custom SOC dashboards
v. use Elastic Security detection rules
vi. add machine learning-based anomaly detection
vii. expand to a multi-node ELK cluster

📚 Key Learning Outcomes:
This project helped me build practical experience in:

i. deploying the ELK Stack with Docker
ii. configuring Logstash pipelines
iii. forwarding and parsing logs
iv. building Kibana dashboards
v. writing simple threat detection logic
vi. simulating suspicious security events
vii. performing basic log-based threat hunting

🎯 Conclusion:
This project helped me understand how centralized log collection and visualization can support security monitoring and threat detection in a practical way.

By building an ELK Stack lab using Docker, simulating suspicious activity, and creating a security-focused dashboard in Kibana, I was able to experience a simplified SIEM workflow end-to-end β€” from log ingestion to detection to investigation.

For anyone learning cybersecurity, blue teaming, SOC analysis, or SIEM fundamentals, this is a very valuable hands-on project.

