DEV Community

Hawkinsdev

I Exposed My Server to the Internet for 24 Hours: What Happened Next?

Have you ever wondered what lurks in the digital shadows, just waiting for an open door? For 24 relentless hours, I decided to find out firsthand by deliberately exposing a personal server to the vast, untamed expanse of the internet. It was an experiment born out of curiosity, a desire to understand the real-world implications of network security, and perhaps, a touch of digital bravado. What unfolded over those two dozen hours was a stark and often unsettling reminder of the constant, unseen activity happening on our networks.

My goal wasn't to create a digital honeypot in the traditional sense, designed to trap malicious actors. Instead, I wanted to observe the normal flow of internet traffic, the automated scans, the probing attempts, and perhaps, if I was unlucky, something more targeted. I’ve always been interested in cybersecurity, but theoretical knowledge only goes so far. I wanted to feel the pulse of the internet’s less savory side, to witness the digital equivalent of someone rattling your doorknobs in the middle of the night.

The server itself was a modest setup. It wasn't a production system holding sensitive data, nor was it running critical services. It was a small, dedicated machine in my home lab, running a standard Linux distribution. I configured it with a few common services: a basic web server (Apache), an SSH server (for remote access), and a simple file-sharing service (Samba). The crucial part of the experiment was that I opened up its firewall to the entire internet. No restrictions, no IP whitelisting, just a direct invitation to the digital world.

Before I flipped the switch, I took every precaution I could think of for the server itself. I ensured it was running the latest updates, that all services were configured with strong, unique passwords (or preferably, key-based authentication for SSH), and that I had robust logging enabled. I didn't want to accidentally compromise my own network or inadvertently become a launchpad for attacks. This was about observing, not about becoming a victim or a perpetrator.
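On the SSH side, that hardening boils down to a handful of sshd_config directives. This is a minimal sketch rather than my exact configuration; the option names are standard OpenSSH, but always validate with `sshd -t` before restarting the daemon:

```shell
# /etc/ssh/sshd_config (excerpt) -- illustrative hardening
PubkeyAuthentication yes     # allow key-based logins
PasswordAuthentication no    # disable password guessing entirely
PermitRootLogin no           # never let "root" log in directly
MaxAuthTries 3               # drop the connection after 3 bad attempts
LoginGraceTime 30            # give up on idle handshakes after 30 seconds
```

After editing, restart sshd (e.g. `sudo systemctl restart sshd`) and keep your current session open until a fresh key-based login succeeds, so a typo can't lock you out.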

The clock started at precisely 9:00 AM on a Tuesday. With a deep breath and a few clicks, I removed the firewall rules that had protected my server. Suddenly, it was naked, exposed to every bot, every script, and every curious individual with a network scanner. The initial moments were quiet, almost anticlimactic. I half-expected an immediate barrage of attacks. But the internet, it turns out, doesn't always attack with the subtlety of a ninja. Often, it’s more like a stampede.

The First Few Hours: The Bots Arrive

Within minutes, the logs began to fill. It wasn't a single, concerted assault, but a wave of automated probes. The most immediate and persistent traffic came from scanners looking for open ports. These are the digital equivalent of someone walking down your street, trying every front door and window to see if anything is unlocked.

The most common scanner was searching for an open SSH port (port 22). This is hardly surprising. SSH is a powerful tool for remote administration, but it’s also a prime target for brute-force attacks. Bots attempt to guess usernames and passwords, often starting with common combinations like "root" and "admin" paired with equally common passwords. My SSH server was configured with key-based authentication and had a non-standard username, which significantly reduced the immediate success rate for these automated attacks. However, the sheer volume of connection attempts was staggering. My logs showed thousands of failed login attempts within the first hour alone.
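Putting numbers on this kind of traffic takes nothing more than standard text tools. The snippet below runs against a fabricated auth.log excerpt (the sample lines and the `lab` hostname are stand-ins, not real data from the experiment); on a Debian/Ubuntu host you would point LOG at /var/log/auth.log instead:

```shell
# Fabricated OpenSSH log entries; substitute your real auth.log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Jan 14 09:02:11 lab sshd[3121]: Failed password for root from 203.0.113.7 port 51234 ssh2
Jan 14 09:02:14 lab sshd[3121]: Failed password for root from 203.0.113.7 port 51236 ssh2
Jan 14 09:02:19 lab sshd[3125]: Failed password for invalid user admin from 198.51.100.9 port 40112 ssh2
Jan 14 09:03:02 lab sshd[3130]: Accepted publickey for labuser from 192.0.2.10 port 53322 ssh2
EOF

# Total failed password attempts
total=$(grep -c 'Failed password' "$LOG")
echo "failed attempts: $total"

# Usernames the bots tried, most frequent first
tried=$(grep 'Failed password' "$LOG" \
  | sed -E 's/.*for (invalid user )?([^ ]+) from.*/\2/' \
  | sort | uniq -c | sort -rn)
echo "$tried"
```

Running the same two pipelines hourly is a crude but effective way to watch a brute-force wave build in real time.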

Next up was the web server (ports 80 and 443). Scanners were looking for vulnerabilities in web applications, outdated server software, or misconfigurations. They were sending requests for common files and directories that are often left exposed, such as .env files, robots.txt (though this is public by design), and administrative login pages. The Apache server was running the latest version and was configured to serve only static content, so there were no exploitable web applications to find. Still, the constant probing was a clear indication of how actively the internet searches for weaknesses.
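The same log-mining approach works for the web side. Here is a sketch that pulls the probed-for paths out of a combined-format Apache access log; the sample entries are fabricated stand-ins for what the real logs contained:

```shell
# Fabricated combined-format access log entries; point LOG at your real
# Apache log (commonly /var/log/apache2/access.log) instead.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
203.0.113.7 - - [14/Jan/2025:10:01:02 +0000] "GET /.env HTTP/1.1" 404 196
198.51.100.9 - - [14/Jan/2025:10:01:05 +0000] "GET /wp-login.php HTTP/1.1" 404 196
203.0.113.7 - - [14/Jan/2025:10:01:08 +0000] "GET /admin/ HTTP/1.1" 404 196
192.0.2.10 - - [14/Jan/2025:10:02:00 +0000] "GET /index.html HTTP/1.1" 200 1043
EOF

# Requested paths that drew a 404 -- on an exposed box these are almost
# always probes for files that shouldn't exist
probes=$(awk '$9 == 404 {print $7}' "$LOG" | sort | uniq -c | sort -rn)
echo "$probes"
```

On a static-only server like mine, nearly every 404 in this report is a vulnerability probe rather than a broken link.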

I also observed scans targeting other common ports like FTP (port 21), Telnet (port 23), and RDP (port 3389). While I didn't have these services running, the fact that they were being scanned highlighted the broad nature of automated reconnaissance. Attackers aren't always looking for a specific vulnerability; they’re often casting a wide net, hoping to catch anything that’s misconfigured or vulnerable.

The sheer volume of this automated traffic was eye-opening. It wasn't a few curious individuals; it was a relentless, automated swarm. It felt like being in the middle of a busy highway, with countless vehicles zipping past, some of them dangerously close.

The Midday Surge: More Sophisticated Probes

As the day progressed, the nature of the traffic began to shift slightly. While the automated scanners continued their relentless work, I started to see slightly more targeted attempts. These weren't necessarily sophisticated, state-sponsored attacks, but rather scripts or tools used by individuals or small groups looking for easier targets.

One interesting observation was the use of Nmap scans. Nmap is a powerful network scanning tool that can be used for much more than just finding open ports. It can fingerprint operating systems, identify running services and their versions, and even detect specific vulnerabilities. I saw various Nmap scripts being run against my server, attempting to identify its operating system and the versions of the services I was running. This information is crucial for attackers because it allows them to tailor their attacks to known vulnerabilities in specific software versions.
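For anyone curious what those probes look like from the other side, these are the kinds of standard Nmap invocations consistent with the patterns in my logs. The target IP is a placeholder, and exactly which NSE scripts were run against me is an inference from log signatures; only ever scan hosts you own or have permission to test:

```shell
nmap -sS -p- 192.0.2.50           # TCP SYN scan of all 65535 ports
nmap -O 192.0.2.50                # OS fingerprinting
nmap -sV -p 22,80,443 192.0.2.50  # service and version detection
nmap --script vuln 192.0.2.50     # NSE vulnerability-detection scripts
```

Running these against your own server before an attacker does is a worthwhile exercise: whatever version strings Nmap can see, so can everyone else.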

I also noticed attempts to exploit known vulnerabilities in older versions of web server software. While my Apache was up-to-date, the scanners were still sending payloads designed to exploit vulnerabilities that were patched years ago. This is a common tactic; attackers often use automated tools that are configured with a vast database of known exploits, and they’ll try them all, regardless of whether the target is likely to be vulnerable.

The Samba service also received its share of attention. Samba is used for file sharing between Linux/Unix systems and Windows. Misconfigured Samba shares can be a significant security risk, allowing unauthorized access to sensitive files. The scanners were attempting to connect to the default Samba ports and probe for any accessible shares. Fortunately, my Samba configuration was locked down, requiring authentication and only exposing specific, non-sensitive directories.
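Locking Samba down mostly means refusing guests and limiting who can see what. A minimal smb.conf sketch along the lines of my setup (the share name, path, and user are illustrative; run `testparm` to validate the file and check parameter names against your Samba version):

```shell
# /etc/samba/smb.conf (excerpt) -- illustrative locked-down configuration
[global]
   map to guest = Never          # reject unknown users instead of mapping to guest
   server min protocol = SMB3    # refuse legacy SMB1/SMB2 clients
   restrict anonymous = 2        # no anonymous enumeration of shares or users

[labshare]
   path = /srv/labshare
   browseable = no               # hidden from share listings
   guest ok = no                 # authentication required
   valid users = labuser         # only this account may connect
   read only = yes
```

Combined with a firewall rule that keeps ports 139/445 off the public internet entirely, this reduces Samba probing to noise in the logs.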

A notable statistic from this period was the number of IP addresses that attempted to connect. In just the first 12 hours, my server logged connection attempts from over 10,000 unique IP addresses. This highlights the global nature of internet scanning. These weren't just from one country or region; they were from all over the world, a testament to the interconnectedness and the reach of automated scanning tools.

The Evening and Night: The Realization Sets In

As evening approached and then turned into night, the traffic didn't subside. If anything, the automated scans seemed to intensify. The constant stream of log entries was a persistent reminder of the server's exposure. It’s easy to forget about these threats when your server is protected behind a firewall, but this experiment made them tangible.

I started to reflect on what would happen if this were a real server with valuable data. The vulnerability to brute-force attacks on SSH alone is a significant risk. If I hadn’t used key-based authentication, a determined attacker could potentially gain access. The web server, even with static content, could be a stepping stone to other parts of a network if not properly isolated.

One particular type of attack I observed was the "Mirai botnet" style scanning. Mirai was a notorious botnet that primarily targeted Internet of Things (IoT) devices. It scanned for devices with default or weak credentials, particularly for services like Telnet and SSH. While my server wasn't an IoT device, the scanning patterns were similar – rapid, broad scans for easily exploitable services. This reminded me that even seemingly innocuous devices connected to the internet can become targets.

The logs also showed attempts to exploit known vulnerabilities in routers and modems. Many home routers, if not properly secured and updated, can be compromised. These compromised devices can then be used to scan and attack other devices on the network, or even become part of larger botnets. The fact that my server was being scanned for these types of vulnerabilities underscored the interconnected risks in the digital ecosystem.

I also noticed some unusual traffic patterns that were harder to categorize. These might have been more sophisticated, customized scripts, or even human actors probing for specific weaknesses. However, without deep packet inspection and more advanced intrusion detection systems, it was difficult to definitively identify their intent. The anonymity of the internet makes it a perfect playground for those who wish to operate in the shadows.

Statistics and Observations: What the Logs Told Me

After 24 hours, I decided to pull the plug on the experiment and analyze the data. The sheer volume of information was overwhelming, but some key statistics stood out:

  • Total Connection Attempts: Over 250,000 connection attempts were logged across all ports. This averages to more than 10,000 attempts per hour.
  • Unique IP Addresses: Connections originated from over 15,000 unique IP addresses. This demonstrates the truly global reach of automated scanning.
  • Most Scanned Ports:
    • Port 22 (SSH): Received the largest share of attempts, with over 100,000 failed login attempts.
    • Port 80 (HTTP) and 443 (HTTPS): Saw a significant volume of requests, indicating active web vulnerability scanning.
    • Port 21 (FTP), 23 (Telnet), 135 (MSRPC), 137–139 (NetBIOS), and 3389 (RDP): Were also frequently scanned, even though these services were not running.
  • Types of Scans: The dominant traffic consisted of port scans and brute-force login attempts against SSH. There were also indications of vulnerability scanning targeting known exploits.

These numbers are alarming, but they also provide valuable context. They aren't just abstract figures; they represent the constant, low-level hum of digital activity that exists all the time. Without protection, a server is essentially broadcasting its presence to the entire world, inviting anyone to try and find a weakness.
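For anyone wanting to reproduce this kind of summary, the statistics above reduce to a few sort/uniq pipelines over a (source IP, destination port) extract of the connection log. The five sample rows below are fabricated; substitute your own firewall or tcpdump output:

```shell
# Fabricated two-column extract: source IP, destination port
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
203.0.113.7 22
203.0.113.7 22
198.51.100.9 80
192.0.2.55 3389
198.51.100.9 23
EOF

# How many distinct sources hit the server?
ips=$(awk '{print $1}' "$LOG" | sort -u | wc -l)
echo "unique source IPs: $ips"

# Which ports drew the most attention?
ports=$(awk '{print $2}' "$LOG" | sort | uniq -c | sort -rn)
echo "$ports"
```

The same two pipelines, run against 24 hours of real log data, produced the unique-IP and per-port figures quoted above.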

The Human Element: Beyond the Bots

While the majority of the traffic was automated, it’s important to remember that humans are behind these tools. Some of the more sophisticated probes might have been initiated by human actors with specific targets in mind. The internet’s anonymity can empower individuals to engage in malicious activities they might not consider in the physical world.

Lessons Learned and Best Practices

My 24-hour experiment, while limited in scope, provided invaluable insights into the constant threats lurking on the internet. Here are the key takeaways and best practices I would recommend:

  1. Never Expose Services Unnecessarily: The most fundamental lesson is to only expose services to the internet that absolutely need to be accessible. If a service is only for internal use, keep it behind your firewall.
  2. Implement Strong Authentication: For any service accessible remotely, especially SSH, use strong, unique passwords. Better yet, implement key-based authentication for SSH and disable password logins entirely.
  3. Keep Everything Updated: Regularly update your operating system, applications, and firmware. Vulnerabilities are constantly being discovered, and patches are released to fix them. Automation tools often target known exploits in older software versions.
  4. Use a Firewall Effectively: Configure your firewall to only allow traffic on the ports and to the IP addresses that are absolutely necessary. Block all other incoming traffic by default.
  5. Enable Logging and Monitoring: Keep detailed logs of all network activity. Regularly review these logs for suspicious patterns, such as a high volume of failed login attempts or scans from unusual sources. Consider using intrusion detection systems (IDS) or intrusion prevention systems (IPS).
  6. Harden Your Services: Beyond just updating, configure your services securely. For example, disable unnecessary modules in your web server, restrict file permissions, and change default configurations.
  7. Consider a VPN: For remote access to internal resources, a Virtual Private Network (VPN) is a much more secure option than exposing individual services directly.
  8. Segment Your Network: If possible, segment your network so that if one part is compromised, the entire network is not at risk. This is especially important for businesses but can also be applied in advanced home lab environments.
  9. Be Aware of IoT Vulnerabilities: If you have smart devices (IoT), ensure they are secured with strong passwords and updated firmware. Many IoT devices are prime targets for botnets.

My Server's Fate

After the 24-hour mark, I immediately re-enabled my firewall, effectively cutting off the server from the open internet. The silence that followed was almost deafening after the constant barrage of activity. It was a profound relief, but also a sobering experience.

The experiment reinforced my belief in the importance of a proactive security posture. It’s not about being paranoid, but about being prepared and informed. The internet is an incredible resource, but it’s also a wild frontier. Understanding the threats and taking the necessary steps to protect yourself is paramount.

Frequently Asked Questions (FAQs)

Q1: Was my server actually compromised during the experiment?

A: No, my server was not compromised. While it was subjected to a high volume of scanning and attempted intrusions, all services were configured with strong security measures (like key-based SSH authentication, up-to-date software, and a hardened configuration). The experiment was designed to observe the traffic, not to provide an easy entry point.

Q2: How can I protect my own server or home network from similar attacks?

A: The best defense is a layered approach: keep all software updated, use strong and unique passwords (or SSH keys), configure your firewall to block unnecessary ports, and only expose services to the internet if absolutely essential. For remote access, consider using a VPN. Regularly review your logs for suspicious activity.

Q3: What is the difference between a port scan and a brute-force attack?

A: A port scan is like walking around a house and checking which doors and windows are unlocked: it identifies open ports where services are listening. A brute-force attack, on the other hand, is like trying every possible key in a specific lock. It involves repeatedly attempting to log in to a service (like SSH or a web application) using a large number of username and password combinations.

Q4: Is it safe to run any services on a server connected to the internet?

A: It is generally not safe to run services on a server directly connected to the internet without robust security measures in place. The experiment demonstrated the constant, automated scanning that occurs. Even seemingly harmless services can be exploited if misconfigured or running outdated software. The principle of "least privilege" and minimizing exposure is key.

Conclusion

Exposing my server to the internet for 24 hours was an eye-opening, and at times, unnerving experience. The sheer volume of automated scans and probing attempts was a stark reminder of the constant digital "noise" that exists online. It’s a landscape where vulnerabilities are actively sought, and security is not a given, but a constant battle.

This experiment reinforced the critical importance of cybersecurity best practices. From keeping software updated and using strong authentication to meticulously configuring firewalls and monitoring network activity, every step taken to secure a system is vital. The internet is a powerful tool, but like any powerful tool, it requires respect, caution, and a deep understanding of its inherent risks. I wouldn't recommend this experiment for everyone, but the lessons learned are invaluable for anyone who connects to the digital world. Stay vigilant, stay informed, and stay secure.
