Series: SIEM Deployment
Alright, let's talk shop. After over a decade in the trenches, from building out SOCs from scratch to wrangling SIEMs like Splunk, QRadar, and Microsoft Sentinel in some seriously high-stakes environments, I've seen a lot of tools come and go. Some are brilliant, some are overhyped, and some just… work. Wazuh falls firmly into that last category, with a generous helping of "brilliant" thrown in, especially when you consider its open-source nature.
I've been in situations where the budget was tighter than a drum, but the need for deep host visibility, file integrity monitoring (FIM), and security configuration assessment (SCA) was absolutely critical. That's where Wazuh shines. It's not just a log aggregator; it's a full-blown host intrusion detection system (HIDS) that can give you insights into endpoint activity that even some commercial EDRs struggle to match without a hefty price tag.
Today, I want to walk you through deploying Wazuh using its all-in-one (AIO) model. Why AIO? Because it's the fastest, most straightforward way to get Wazuh up and running, especially if you're experimenting, running a small environment, or just need a proof-of-concept. Think of it as your express lane to understanding what this powerful platform can do. We're going to cut through the fluff, use actual commands, and I'll tell you why we're doing each step, not just what. This isn't some generic AI-generated guide; this is how I'd do it, and how I've advised countless junior engineers to do it.
Why Wazuh? And Why All-in-One for Starters?
Let's be clear: Wazuh isn't going to replace your Splunk Enterprise Security or your CrowdStrike Falcon. It's a different beast, but a foundational one. While those high-end platforms excel at enterprise-wide visibility, threat hunting across massive datasets, and automated response, Wazuh digs deep into the host. It gives you:
- Host Intrusion Detection (HIDS): Real-time monitoring for system calls, unauthorized access attempts, and suspicious processes.
- File Integrity Monitoring (FIM): Tracks changes to critical system files, configuration files, and registry entries. This is gold for detecting backdoor installations or unauthorized modifications.
- Security Configuration Assessment (SCA): Checks your hosts against known benchmarks (like CIS or NIST) to identify misconfigurations. Believe me, misconfigurations are often the easiest entry points for attackers.
- Vulnerability Detection: Scans for known vulnerabilities on your endpoints.
- Log Data Analysis: Collects, aggregates, and analyzes logs from operating systems and applications. This is where it starts to feel a bit like a mini-SIEM for your endpoints.
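To make the FIM capability concrete: on each agent, file integrity monitoring is driven by the syscheck section of /var/ossec/etc/ossec.conf. Here's a minimal sketch; the directory list and the 12-hour scan frequency are illustrative defaults, not prescriptive values:

```xml
<!-- File integrity monitoring: part of /var/ossec/etc/ossec.conf on the agent -->
<syscheck>
  <!-- Full scan every 12 hours (value is in seconds) -->
  <frequency>43200</frequency>
  <!-- Watch system binaries and config; report_changes captures diffs for text files -->
  <directories check_all="yes" report_changes="yes">/etc,/usr/bin,/usr/sbin</directories>
</syscheck>
```

Any change to a watched file generates an alert with the file's old and new checksums, which is exactly the backdoor-detection scenario described above.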
So, why Wazuh over, say, just shipping everything to a central SIEM? Because Wazuh processes and correlates much of this data at the endpoint and manager level before it even hits your SIEM (if you choose to integrate it later). This reduces noise, enriches alerts, and provides context that raw logs often lack. For organizations that are cost-conscious, or small to medium-sized businesses (SMBs) that need robust security without a seven-figure budget, Wazuh is, honestly, my go-to recommendation for deep endpoint visibility. It's a fantastic open-source alternative that punches way above its weight class.
Now, about the All-in-One (AIO) deployment. The official Wazuh documentation offers distributed deployments, which are essential for scaling to hundreds or thousands of agents. But for learning, testing, or even protecting a handful of critical servers, AIO is perfect. It bundles the Wazuh Manager, the Elastic Stack (Elasticsearch, Kibana), and Filebeat onto a single server. This means less infrastructure to manage, fewer network ports to open, and a much faster path to seeing data. You get the full Wazuh experience without the complexity of setting up a multi-node Elasticsearch cluster right out of the gate. Plus, once you're comfortable, migrating to a distributed setup isn't nearly as daunting as starting there.
The Battle Plan: Preparing Your Server
Before we even think about running an installer, we need a solid foundation. Don't skip this part; proper preparation saves hours of troubleshooting later. I've wasted too many nights debugging issues that boiled down to insufficient resources or a firewall blocking a critical port.
For this AIO deployment, you'll need a dedicated server. I'm going to assume you're using a fresh installation of Ubuntu Server 20.04/22.04 LTS or CentOS 7/8 Stream. My examples will lean towards Ubuntu, but the concepts apply universally.
Server Specifications:
- CPU: At least 4 cores. 8 is better if you plan on more than 10-20 agents.
- RAM: Minimum 8GB. 16GB is highly recommended, especially since Elasticsearch loves RAM.
- Storage: At least 50GB, preferably 100GB+ SSD. Log data can grow quickly.
Network Considerations:
Make sure your server can access the internet to download packages. Crucially, you'll need to allow inbound connections to the following ports:
- TCP 1514: For Wazuh agents to send events to the manager.
- TCP 1515: For agent enrollment (registration) with the manager.
- TCP 55000: For the Wazuh server API (used by the web UI and scripts).
- TCP 443: For accessing the Wazuh web interface (Kibana).
- TCP 514/UDP 514: If you plan on forwarding syslog from other devices to Wazuh. (Not strictly needed for AIO core functionality, but good to keep in mind.)
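Once your firewall rules are in place (next section), you can sanity-check that these ports actually accept connections without installing anything extra; bash's built-in /dev/tcp pseudo-device is enough. A quick sketch — the host and the port list (the standard Wazuh agent, UI, and API ports) are assumptions based on the defaults above:

```shell
#!/usr/bin/env bash
# check_port HOST PORT: report whether a TCP port accepts connections,
# using bash's /dev/tcp pseudo-device (no netcat required).
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

# Example: check the Wazuh-related ports on the local manager
for p in 1514 1515 443 55000; do
  check_port 127.0.0.1 "$p"
done
```

Run it from an agent machine against the manager's IP to rule out network problems before you blame Wazuh itself.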
Essential Pre-installation Steps:
First things first, update your system. This ensures you have the latest security patches and package versions, preventing potential conflicts.
# For Ubuntu/Debian-based systems
sudo apt update && sudo apt upgrade -y
# For CentOS/RHEL-based systems
sudo yum update -y
Why: Always start with a clean, updated slate. It's like checking your gear before a mission: you don't want surprises.
Next, we need some common utilities that the installer (or you) might use. wget and curl are for downloading, vim (my personal preference, though nano is fine too) is for editing config files if needed.
# For Ubuntu/Debian-based systems
sudo apt install -y curl wget vim
# For CentOS/RHEL-based systems
sudo yum install -y curl wget vim
Why: These are your basic toolkit. You'd be surprised how often a barebones server lacks them.
Now, this next part is critical for a smooth installation, but comes with a huge caveat. For a proof-of-concept or a test environment, temporarily disabling the firewall and SELinux (on CentOS/RHEL) simplifies things immensely. HOWEVER, for any production environment, you must properly configure your firewall rules and SELinux policies instead of disabling them.
# --- Firewall Configuration (Ubuntu) ---
# Check firewall status
sudo ufw status
# If active, allow necessary ports (1514/1515 for agents, 443 for web UI, 55000 for the API)
sudo ufw allow 1514/tcp
sudo ufw allow 1515/tcp
sudo ufw allow 55000/tcp
sudo ufw allow 443/tcp
# If you need to disable for testing (AGAIN, NOT FOR PROD!)
# sudo ufw disable
# --- Firewall Configuration (CentOS/RHEL) ---
# Check firewall status
sudo systemctl status firewalld
# If active, allow necessary ports
sudo firewall-cmd --add-port=1514/tcp --permanent
sudo firewall-cmd --add-port=1515/tcp --permanent
sudo firewall-cmd --add-port=55000/tcp --permanent
sudo firewall-cmd --add-port=443/tcp --permanent
sudo firewall-cmd --reload
# If you need to disable for testing (AGAIN, NOT FOR PROD!)
# sudo systemctl stop firewalld
# sudo systemctl disable firewalld
# --- SELinux Configuration (CentOS/RHEL) ---
# Check SELinux status
sestatus
# If enforcing, set to permissive for testing (NOT FOR PROD!)
sudo setenforce 0
sudo sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config
# Reboot might be required for /etc/selinux/config changes to take full effect,
# but 'setenforce 0' applies immediately.
Why: Firewalls and SELinux are security features that can prevent Wazuh components from communicating. For a quick AIO deployment, temporarily relaxing them helps confirm the Wazuh components themselves are working. Once confirmed, re-enable them and configure specific rules. Failing to do so is a common mistake that leaves systems vulnerable. I've seen teams spend days chasing a "bug" that was just a forgotten firewall rule.
Deploying Wazuh All-in-One: Step-by-Step
Now that our server is prepped, the actual deployment is surprisingly simple thanks to Wazuh's official installation script.
1. Download the Wazuh Installation Script:
We'll download the AIO installer script directly from the Wazuh GitHub repository. Always check the official documentation for the absolute latest version, but this pattern is generally stable.
# Download the Wazuh installation script
curl -sO https://raw.githubusercontent.com/wazuh/wazuh-documentation/master/resources/create_wazuh_cluster.sh
Why: This script automates the installation and configuration of all necessary components: Elasticsearch, Filebeat, Kibana, and the Wazuh Manager. It saves you from manually installing and configuring each one, which is a significant time-saver and reduces human error.
2. Make the Script Executable:
Downloaded scripts aren't executable by default for security reasons. We need to grant it execution permissions.
chmod +x create_wazuh_cluster.sh
Why: Without execute permissions, your system won't allow you to run the script.
3. Run the Installer Script:
Now, execute the script. We'll use the -a flag to specify an All-in-One deployment.
sudo ./create_wazuh_cluster.sh -a
Why: The -a flag tells the script to perform an all-in-one installation. The sudo is necessary because the script will be installing packages, creating users, and modifying system configurations. This process will take some time, typically 15-30 minutes, depending on your internet speed and server resources. It will download a lot of packages, install Java (for Elasticsearch), set up repositories, and configure services. Let it run.
A quick note on a common mistake: during this process, or immediately after, many teams forget to check the logs. If something goes wrong, the output on your terminal might scroll past too fast. The create_wazuh_cluster.sh script is usually pretty good about logging its steps, but always, always know where to look if something breaks. For system services, journalctl -xe is your best friend. For Wazuh manager specific issues, check /var/ossec/logs/ossec.log. For Elasticsearch and Kibana, their logs are usually in /var/log/elasticsearch and /var/log/kibana respectively. Don't just stare blankly at a failed installation; dive into the logs!
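When you do dive into those logs, a tiny filter that pulls only ERROR/CRITICAL lines out of ossec.log-style output saves a lot of scrolling. A sketch; the sample log lines below are made up for illustration:

```shell
#!/usr/bin/env bash
# filter_errors: keep only ERROR/CRITICAL lines from log text on stdin.
filter_errors() {
  grep -E 'ERROR|CRITICAL'
}

# Illustrative input shaped like /var/ossec/logs/ossec.log entries;
# in real use: sudo cat /var/ossec/logs/ossec.log | filter_errors
printf '%s\n' \
  '2024/01/01 10:00:00 wazuh-modulesd: INFO: Module started' \
  '2024/01/01 10:00:05 wazuh-db: ERROR: Cannot open database' \
  | filter_errors
```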
4. Verify Services Status:
Once the script completes, it's crucial to verify that all components are running correctly.
# Check Wazuh Manager status
sudo systemctl status wazuh-manager
# Check Elasticsearch status
sudo systemctl status elasticsearch
# Check Kibana status
sudo systemctl status kibana
# Check Filebeat status
sudo systemctl status filebeat
Why: This confirms that the installation was successful and all the critical services for Wazuh, its data store (Elasticsearch), its visualization layer (Kibana), and its log shipper (Filebeat, which sends Wazuh alerts to Elasticsearch) are operational. You should see "active (running)" for all of them.
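The four status checks can also be rolled into one loop for a compact pass/fail view. A sketch, assuming the service names the AIO installer creates; it degrades gracefully on systems without systemd:

```shell
#!/usr/bin/env bash
# Compact health check for the Wazuh AIO stack: one status line per service.
services=(wazuh-manager elasticsearch kibana filebeat)

for svc in "${services[@]}"; do
  if command -v systemctl >/dev/null 2>&1; then
    # systemctl is-active prints "active", "inactive", or "failed"
    state=$(systemctl is-active "$svc" 2>/dev/null)
    state=${state:-unknown}
  else
    state="unknown (no systemd)"
  fi
  printf '%-15s %s\n' "$svc" "$state"
done
```

Anything other than "active" across the board means it's time to go back to the logs before enrolling agents.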
What's Next? Your First Agent and Basic Checks
With Wazuh Manager and its components humming along, the next step is to get some data in. This means deploying an agent to a target machine. For simplicity, let's assume you're deploying to a Linux machine (Ubuntu, CentOS, etc.).
1. Access the Wazuh UI:
Open your web browser and navigate to https://YOUR_WAZUH_SERVER_IP. You'll likely encounter a certificate warning (since it's a self-signed cert). Accept it.
The default credentials are:
- Username: admin
- Password: admin (You should absolutely change this immediately in a production environment!)
Once logged in, you'll see the Wazuh dashboard. It might look a bit empty, which is expected; we haven't added any agents yet!
2. Enroll Your First Agent:
From the Wazuh UI, navigate to Wazuh > Agents > Deploy new agent.
- Select your operating system (e.g., "Linux").
- Choose your architecture.
- Select "Wazuh Manager"