There's a particular kind of dread that comes with finding out about a server problem from a customer support ticket. "Your site is slow." "I can't upload files." "The page won't load." By the time a user reports an issue, they've already had a bad experience. If they're reporting it, others have had the same experience and simply left.
Proactive monitoring changes the equation. Instead of reacting to user complaints, you catch issues before they reach users. A CPU spike triggers an alert while the server is still functional. A disk filling up sends a notification days before it's full. Memory pressure gets flagged before the OOM killer starts terminating processes.
Deploynix builds monitoring directly into the server management platform. No separate monitoring service, no additional configuration, no extra cost. Every managed server is monitored in real time with configurable alerts.
What Gets Monitored
Deploynix monitors the four metrics behind the most common server problems: CPU utilization, memory usage, disk consumption, and load average. These aren't glamorous metrics, but they're the ones that matter most for server reliability.
CPU Utilization
CPU usage tells you how hard your server's processors are working. For a Laravel application, CPU spikes typically correlate with:
Traffic surges (more requests to handle)
Expensive database queries (MySQL or PostgreSQL consuming CPU)
Queue workers processing heavy jobs
Asset compilation or build processes
Runaway processes or infinite loops
Deploynix tracks CPU utilization over time, giving you both real-time values and historical trends. A brief spike during a deployment is normal. A sustained spike at 95% for thirty minutes is a problem that needs attention.
What to watch for:
Sustained high CPU (above 80%) during normal traffic — your server may be undersized
Periodic spikes that correlate with cron jobs — your scheduled tasks may need optimization
Gradual upward trend over weeks — your application is outgrowing its server
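As a minimal sketch of how CPU utilization can be derived, consider two samples of the "cpu" line from /proc/stat taken a few seconds apart. The Deploynix agent itself is a bash script; this Python illustration and its sample values are assumptions, not the agent's actual implementation.

```python
# Illustrative: the "cpu" line in /proc/stat lists cumulative time counters
# (user nice system idle iowait irq softirq ...). Utilization is the
# non-idle share of the interval between two samples.
def cpu_percent(sample1, sample2):
    t1 = [int(x) for x in sample1.split()[1:]]
    t2 = [int(x) for x in sample2.split()[1:]]
    idle1, idle2 = t1[3] + t1[4], t2[3] + t2[4]  # idle + iowait
    busy = (sum(t2) - sum(t1)) - (idle2 - idle1)
    return 100.0 * busy / (sum(t2) - sum(t1))

# Two samples a few seconds apart (values are made up for illustration):
before = "cpu 1000 0 500 8000 100 0 50 0 0 0"
after  = "cpu 1400 0 700 8400 120 0 80 0 0 0"
print(cpu_percent(before, after))  # 60.0 — the interval was 60% busy
```

Sampling deltas like this is why a single instantaneous reading can mislead: utilization is always relative to an interval, which is also why sustained trends matter more than one spike.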
Memory Usage
Memory is the silent killer. When a server runs out of memory, the Linux OOM (Out Of Memory) killer starts terminating processes — and it might kill your PHP-FPM workers, your MySQL process, or your queue workers. The result is immediate, visible downtime.
Laravel applications consume memory through:
PHP-FPM worker processes (each worker holds a copy of your application in memory)
Database server (MySQL/PostgreSQL buffers and caches)
Valkey/Redis (in-memory cache and session storage)
Queue workers (especially if processing large payloads)
Octane workers (persistent workers hold more state in memory)
Deploynix tracks both used memory and available memory (which accounts for Linux's buffer/cache behavior). This distinction matters — Linux aggressively uses free memory for disk caches, which inflates the "used" number but doesn't represent actual memory pressure.
What to watch for:
Available memory consistently below 10% — you're at risk of OOM kills
Memory usage climbing over time without returning to baseline — possible memory leak
Sudden memory drops followed by spikes — OOM killer terminating and restarting processes
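The used-versus-available distinction can be sketched from /proc/meminfo, whose MemAvailable field estimates how much memory is usable without swapping. The sample values below are made up for illustration.

```python
# Illustrative /proc/meminfo excerpt (values in kB, invented for this sketch):
SAMPLE = """\
MemTotal:        8000000 kB
MemFree:          400000 kB
MemAvailable:    3200000 kB
Buffers:          200000 kB
Cached:          2800000 kB"""

def meminfo(text):
    fields = {}
    for line in text.splitlines():
        key, value = line.split(":")
        fields[key] = int(value.split()[0])  # take the kB number
    return fields

m = meminfo(SAMPLE)
naive_used_pct = 100 * (m["MemTotal"] - m["MemFree"]) / m["MemTotal"]
available_pct = 100 * m["MemAvailable"] / m["MemTotal"]
print(f"naively 'used': {naive_used_pct:.0f}%")     # 95% — looks alarming
print(f"actually available: {available_pct:.0f}%")  # 40% — no real pressure
```

The server in this sketch looks 95% "used" but still has 40% of its memory genuinely available, because most of the "used" number is reclaimable page cache.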
Disk Usage
Running out of disk space is one of the most preventable server failures, yet it still catches teams off guard. Disks fill up gradually and then suddenly — you have 30% free space for months, then log files grow, deployment artifacts accumulate, and one day the database can't write because the disk is full.
Common disk consumers on Laravel servers:
Application releases (each zero-downtime deployment creates a new release directory)
Log files (Laravel logs, Nginx access/error logs, system logs)
Database files (tables grow as your application accumulates data)
User uploads (if stored on the server filesystem)
Temporary files (compilation caches, session files, queue payloads)
Deploynix tracks disk usage as a percentage and as absolute values. The release cleanup process (part of zero-downtime deployments) automatically removes old releases, but other disk consumers need monitoring.
What to watch for:
Disk usage above 75% — time to clean up or resize (default warning threshold)
Disk usage above 85% — urgent attention needed (default critical threshold)
Rapid growth rate — investigate what's consuming space
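The default disk thresholds reduce to a small check. A sketch using Python's standard library to measure the root filesystem is shown below for illustration; the actual agent is a bash script.

```python
import shutil

WARNING, CRITICAL = 75, 85  # Deploynix's default disk thresholds (%)

def disk_severity(used_pct):
    """Map a disk-usage percentage to an alert severity."""
    if used_pct >= CRITICAL:
        return "critical"
    if used_pct >= WARNING:
        return "warning"
    return "ok"

# On a real server you would measure the root filesystem:
usage = shutil.disk_usage("/")
used_pct = 100 * usage.used / usage.total
print(f"/ is {used_pct:.1f}% full -> {disk_severity(used_pct)}")
```

The 10-point gap between warning and critical is the response window: crossing 75% is your cue to clean up before 85% forces your hand.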
Load Average
Load average is the average number of processes that are running or waiting to run. Unlike CPU percentage, it has to be interpreted relative to the number of CPU cores on your server. A load average of 4.0 on a 4-core server means all cores are fully utilized. A load average of 8.0 on a 4-core server means processes are queuing for CPU time.
Deploynix tracks the 1-minute load average, giving you the most responsive indicator of current server load. The default thresholds scale with your server's CPU core count — a warning at the core count and critical at double the core count.
What to watch for:
Sustained load above your core count — the server is overloaded
Sudden spikes — a process or batch of requests is consuming all available CPU
Load average climbing while CPU percentage is moderate — I/O wait may be the bottleneck (database or disk-bound operations)
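The core-scaled defaults can be sketched as follows; this is an illustrative Python snippet, not the agent's bash implementation.

```python
import os

def load_severity(load_1m, cores):
    """Deploynix's default load thresholds scale with core count."""
    if load_1m >= cores * 2:
        return "critical"  # double the core count
    if load_1m >= cores:
        return "warning"   # load equals the core count
    return "ok"

cores = os.cpu_count() or 1
load_1m = os.getloadavg()[0]  # 1-minute load average, as the agent reports
print(f"load {load_1m:.2f} on {cores} cores -> {load_severity(load_1m, cores)}")
```

The same load value means very different things on different machines: 6.0 is critical on a 2-core server and comfortably normal on a 16-core one, which is why fixed load thresholds don't work.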
Real-Time Monitoring Architecture
Deploynix's monitoring is built for real-time visibility, not periodic snapshots.
The Monitoring Agent
When Deploynix provisions or connects to a server, a lightweight monitoring agent is installed. This agent collects system metrics every 5 minutes and reports them back to the Deploynix platform via a secure webhook endpoint.
The agent is designed to be unobtrusive:
Minimal resource usage. The agent is a lightweight bash script that consumes negligible CPU and memory. It's not running complex analysis on your server — it's collecting basic system metrics and transmitting them.
Secure communication. Metrics are transmitted over encrypted connections using a server-specific authentication token. The agent doesn't open inbound ports or expose any services.
Automatic updates. The agent is updated through the Deploynix platform, ensuring you always have the latest version without manual intervention.
Staleness detection. If the agent stops reporting for 15 minutes, Deploynix marks the server as stale and sends a notification to the server owner. This catches situations where the agent has been accidentally removed, the server is unreachable, or the cron job has stopped running.
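The agent's cycle (collect, then report) can be sketched as follows. The real agent is a bash script running from cron; this Python sketch is illustrative only, and the payload field names, webhook variable, and token variable are assumptions, not the agent's actual wire format.

```python
import json, os, shutil, time

def collect_metrics():
    """Gather a metrics snapshot. Field names here are illustrative."""
    disk = shutil.disk_usage("/")
    return {
        "timestamp": int(time.time()),
        "load_1m": os.getloadavg()[0],
        "disk_used_pct": round(100 * disk.used / disk.total, 1),
        # CPU and memory would be derived from /proc/stat and /proc/meminfo;
        # omitted here to keep the sketch short.
    }

print(json.dumps(collect_metrics()))

# The real agent transmits over HTTPS with a server-specific token,
# roughly along these lines (variable names are hypothetical):
#   curl -fsS -X POST "$WEBHOOK_URL" \
#        -H "Authorization: Bearer $SERVER_TOKEN" \
#        -H "Content-Type: application/json" \
#        -d "$PAYLOAD"
```

Because the agent only ever makes outbound requests, there is nothing to firewall and no listening port to secure.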
WebSocket-Powered Dashboard
Deploynix uses Laravel Reverb for WebSocket communication. When you view a server's monitoring dashboard, metrics stream in real time. You see CPU, memory, disk, and load average values update live without page refreshes. Alert status changes — new alerts, resolutions, dismissals — are also broadcast in real time.
This real-time display is more than a cosmetic feature. When you're investigating a performance issue, watching metrics change in real time as you make changes (restarting a service, killing a process, scaling workers) gives you immediate feedback. You don't need to wait for the next polling interval to see if your action had an effect.
Historical Data
Real-time values are useful for investigation. Historical trends are useful for capacity planning. Deploynix stores metric snapshots and retains them for 30 days by default, and the monitoring API lets you query up to 720 hours of that history.
Historical data lets you:
View CPU, memory, disk, and load average trends over time
Identify patterns (daily traffic peaks, weekly batch job impacts)
Spot gradual trends (memory creep, disk growth)
Correlate metrics with deployments and other events
Old snapshots are automatically pruned daily to keep storage manageable.
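The retention behavior amounts to a daily prune that drops snapshots older than the 30-day window. The snapshot shape below is an assumption made for illustration.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)  # Deploynix's default retention window

def prune(snapshots, now):
    """Keep only snapshots newer than the retention cutoff."""
    cutoff = now - RETENTION
    return [s for s in snapshots if s["taken_at"] >= cutoff]

now = datetime(2025, 6, 30)
snapshots = [
    {"taken_at": datetime(2025, 6, 29), "cpu": 42.0},  # within 30 days: kept
    {"taken_at": datetime(2025, 5, 1),  "cpu": 38.0},  # older: pruned
]
print(len(prune(snapshots, now)))  # 1
```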
Configuring Alerts
Monitoring data without alerting is just data. The value comes from being notified when metrics exceed acceptable thresholds.
Alert Severity Levels
Deploynix supports two severity levels for alerts:
Warning. Something needs attention soon but isn't immediately critical. A warning gives you time to investigate and plan a response.
Default warning thresholds:
CPU above 80%
Memory usage above 80%
Disk usage above 75%
Load average above your CPU core count
Critical. Something needs immediate attention. A critical alert means the server is at risk of degraded performance or failure. Critical alerts trigger an email notification to the server owner.
Default critical thresholds:
CPU above 90%
Memory usage above 90%
Disk usage above 85%
Load average above double your CPU core count
Alerts have three statuses: Active (currently triggered), Resolved (auto-resolved when the metric returns to normal or the agent stops reporting), and Dismissed (manually dismissed by an admin). Active alerts that haven't been reported in 15 minutes are automatically resolved, preventing stale alerts from cluttering your dashboard.
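The three statuses and the 15-minute auto-resolve rule can be sketched as a small transition function. The field names and function shape are assumptions for illustration, not Deploynix's internal model.

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(minutes=15)  # active alerts auto-resolve after this

def next_status(alert, metric_ok, last_report_at, now):
    """Compute an alert's next status (illustrative field names)."""
    if alert["status"] == "dismissed":
        return "dismissed"  # manual dismissal by an admin is sticky
    if metric_ok:
        return "resolved"   # metric returned below its threshold
    if now - last_report_at > STALE_AFTER:
        return "resolved"   # agent stopped reporting: auto-resolve
    return "active"

now = datetime(2025, 6, 30, 12, 0)
print(next_status({"status": "active"}, metric_ok=False,
                  last_report_at=now - timedelta(minutes=20), now=now))
# prints "resolved": the agent hasn't reported in over 15 minutes
```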
Setting Thresholds
Alert thresholds are configured globally with sensible defaults and can be overridden per server through the monitoring agent's environment configuration. Different servers have different normal operating ranges. A worker server that processes heavy jobs might normally run at 70% CPU — setting a warning at 60% would generate constant noise. An application server that normally runs at 20% CPU should alert when it suddenly hits 60%.
When configuring thresholds, consider:
Baseline your server first. Run your application under normal load for a few days and observe the typical metric ranges. Set warning thresholds above normal but below dangerous.
Leave room for response. A critical alert at 95% disk gives you very little time to act. The defaults of 75% warning and 85% critical give you days or weeks to address the issue.
Account for variability. Traffic isn't constant. If your server regularly hits 60% CPU during peak hours and 20% during off-peak, a warning at 50% will trigger every day. Set thresholds for sustained anomalies, not normal peaks.
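As a hypothetical example of a per-server override, a hot-running worker server might relax its CPU thresholds while keeping the disk defaults. The variable names below are illustrative, not the agent's actual configuration keys.

```shell
# Hypothetical agent environment overrides for a busy worker server.
# Variable names are illustrative only.
CPU_WARNING_THRESHOLD=85   # this worker normally runs near 70%; default is 80
CPU_CRITICAL_THRESHOLD=95  # default is 90
DISK_WARNING_THRESHOLD=75  # keep the platform defaults for disk
DISK_CRITICAL_THRESHOLD=85
```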
Alert Notifications
When a threshold is crossed, Deploynix sends notifications through your configured channels. The notification includes:
Which server triggered the alert
Which metric exceeded the threshold
The current value
The severity level
A link to the server's monitoring dashboard
This context lets you assess the situation immediately. Is this the production database server at 95% CPU, or the staging server doing a large import? The difference determines your response urgency.
Proactive vs. Reactive Monitoring
The fundamental value proposition of monitoring is shifting from reactive to proactive.
Reactive (without monitoring): A user reports the site is slow. You SSH into the server. You run top and see CPU at 100%. You investigate, find a runaway process, kill it. The issue has been affecting users for twenty minutes.
Proactive (with monitoring): Deploynix alerts you that CPU exceeded 80% five minutes ago. You open the dashboard, see the trend, identify the cause. You resolve it before most users notice. Total user impact: minimal.
The difference between these scenarios is the monitoring infrastructure and the alerting configuration. Deploynix provides both, integrated into the platform you already use to manage the server.
Monitoring Across Multiple Servers
When you manage multiple servers — especially across multiple cloud providers — unified monitoring becomes critical. Deploynix's dashboard shows all your servers with their current health status, color-coded for quick visual assessment.
Green: All metrics within normal ranges. No alerts.
Yellow: Warning threshold exceeded on one or more metrics. Attention needed.
Red: Critical threshold exceeded. Immediate attention required.
This overview lets you assess your entire infrastructure's health at a glance. When managing ten servers across three providers, you don't want to check each server individually. You want a single view that immediately highlights which servers need attention.
Monitoring Server Types
Different server types have different monitoring priorities.
App and Web Servers
CPU and memory are the primary concerns. High CPU indicates either high traffic (scale horizontally) or inefficient code (optimize). High memory usually means too many PHP-FPM workers for the available RAM, or a memory leak in a long-running Octane process.
Database Servers
CPU, memory, and disk all matter here. CPU indicates query load. Memory determines buffer pool sizing: for MySQL, a larger innodb_buffer_pool_size caches more data in RAM and means fewer disk reads. Disk indicates data growth and the need for cleanup or volume expansion.
Cache Servers (Valkey)
Memory is the critical metric. A cache server that runs out of memory will evict keys according to its eviction policy, which degrades application performance. If your cache server consistently runs above 80% memory, consider increasing memory or reviewing what you're caching.
Worker Servers
CPU is the primary concern. Queue workers are CPU-intensive by nature. If CPU is consistently maxed, you either need a larger server or more worker servers to distribute the load. Memory matters if your jobs process large payloads.
Load Balancer Servers
Network metrics are most relevant, but CPU and memory usage should remain low for a properly configured load balancer. If a load balancer is showing high CPU, the configuration may need tuning.
Integrating Monitoring with Your Workflow
Monitoring data is most valuable when it's integrated with your existing workflow.
Deployment Correlation
When you see a metric spike, one of the first questions is: "Did we just deploy?" Deploynix tracks both deployments and metrics on the same platform, making it straightforward to correlate a performance change with a code change. If CPU usage doubled immediately after a deployment, the new code is likely the cause. Rollback and investigate.
Team Visibility
Monitoring data is accessible to team members based on their role permissions. Developers can see the metrics for servers they have access to. Managers can view overall infrastructure health. This shared visibility means the person who deployed the code and the person who manages the infrastructure are looking at the same data.
Alert Escalation
Warning alerts are informational — investigate when convenient. Critical alerts demand immediate response. Structure your notification channels accordingly. Warnings might go to a team Slack channel. Critical alerts might go to a PagerDuty integration or direct phone notification.
What Monitoring Won't Tell You
Server monitoring is essential but not sufficient. It tells you about infrastructure health, not application health. A server can show perfect metrics while your application throws 500 errors, returns wrong data, or has a broken user flow.
Complete observability requires:
Server monitoring (Deploynix) — Is the infrastructure healthy?
Error tracking (Sentry, Flare, Bugsnag) — Is the application throwing errors?
Uptime monitoring (Oh Dear, Pingdom) — Is the application reachable?
Application performance monitoring (New Relic, Datadog) — Is the application performing well?
Deploynix covers the first layer comprehensively. It also exposes a Prometheus-compatible /metrics endpoint, so if you're running a Grafana monitoring stack, you can scrape Deploynix metrics alongside your other observability data. The other layers require dedicated tools, and Deploynix is designed to complement them, not replace them.
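If you scrape that endpoint with Prometheus, a minimal job might look like the sketch below. The /metrics path comes from the platform; the job name and target host are placeholders, and any authentication the endpoint requires is not shown.

```yaml
# Illustrative Prometheus scrape job; the target host is a placeholder.
scrape_configs:
  - job_name: deploynix
    metrics_path: /metrics
    static_configs:
      - targets: ["deploynix.example.com"]
```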
Conclusion
Server monitoring isn't optional infrastructure — it's the difference between finding out about problems from an alert at 2 AM and finding out from a customer email at 9 AM. The alert is better. It's earlier, more specific, and gives you time to respond before users are impacted.
Deploynix embeds monitoring directly into the server management platform. No separate service to configure. No additional agent to install manually. No extra subscription to manage. Every server you manage through Deploynix is monitored in real time with configurable alerts, automatic stale detection, and 30 days of historical data.
Set your thresholds. Trust the alerts. Fix issues before your users notice them.
Get started at https://deploynix.io.