Hey Dev.to!
I just launched my hosting platform with something unique: optional network isolation. Let me share the technical details.
The Architecture
Community Shared Network (vmbr0):
- Multiple customers on same network bridge
- VM-level isolation with firewall rules between VMs
- 1 Gbps uplink
- Perfect for 90% of websites
Isolated Network (vmbr[X]):
- Your own private internal bridge per customer
- 10 Gbps internal bandwidth between your VMs
- Complete network separation from other customers
- PCI/HIPAA compliance-ready
- Add $10/mo to any plan
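On the shared bridge, per-VM firewall rules might look like the following Proxmox per-guest firewall file. This is an illustrative sketch only: the VMID, open ports, and admin CIDR are placeholders, not the platform's actual ruleset.

```
# /etc/pve/firewall/100.fw -- hypothetical example (VMID, ports, CIDR are placeholders)
[OPTIONS]
enable: 1
policy_in: DROP          # default-deny inbound

[RULES]
IN ACCEPT -p tcp -dport 80                          # public HTTP
IN ACCEPT -p tcp -dport 443                         # public HTTPS
IN ACCEPT -p tcp -dport 22 -source 203.0.113.0/24   # SSH from an admin range only
```

With a default-deny inbound policy, VMs on vmbr0 can only be reached on explicitly opened ports.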
Hardware Specs
- Dell PowerEdge R730
- 2x Xeon E5-2698 v3 (16 cores each: 32 cores / 64 threads total)
- 480GB ECC DDR4 RAM
- ZFS RAID-Z2 storage
- Proxmox VE hypervisor
Why This Matters
For a personal blog? Community Shared at $29/mo is perfect.
For e-commerce with PCI requirements? Business + Isolated at $99/mo gives you:
- Dedicated network bridge
- 10 Gbps between your web server, database, and cache
- Network-level isolation from all other customers
For agencies hosting multiple clients? Isolated Network means complete separation between client sites.
Complete Transparency
Live server stats: lightspeedup.com/health.php
You can literally watch CPU, RAM, and storage in real-time. No secrets.
Pricing
Community Shared Network:
- Starter: $29/mo (Beta: $14.50/mo)
- Business: $89/mo (Beta: $44.50/mo)
- Enterprise: $199/mo (Beta: $99.50/mo)
Isolated Network (+$10/mo):
- Starter + Isolated: $39/mo (Beta: $19.50/mo)
- Business + Isolated: $99/mo (Beta: $49.50/mo)
- Enterprise + Isolated: $209/mo (Beta: $104.50/mo)
Compare to:
- WP Engine Business: $25-290/mo
- Kinsta Starter: $35-260/mo
Depending on the tier, that's 30-70% cheaper, with stronger hardware specs.
Technical Deep Dive
Why Proxmox instead of Docker/K8s?
Full VM isolation with KVM. Every customer gets their own kernel. Network isolation at the bridge level (vmbr0 vs vmbr[X]) instead of just iptables rules.
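As a sketch of what bridge-level separation can look like on a Proxmox host (bridge numbers, NIC name, and addresses here are placeholders, not the real config):

```
# /etc/network/interfaces -- illustrative sketch
auto vmbr0
iface vmbr0 inet static
    address 203.0.113.10/24
    gateway 203.0.113.1
    bridge-ports eno1        # shared bridge: attached to the physical uplink
    bridge-stp off
    bridge-fd 0

auto vmbr42
iface vmbr42 inet manual
    bridge-ports none        # isolated bridge: no physical NIC, traffic stays on-host
    bridge-stp off
    bridge-fd 0
```

VMs attached to vmbr42 share a private segment that no other customer's VM has a layer-2 path to; the separation holds even if a firewall rule is misconfigured. (Public traffic for those VMs would go out via a separate interface or NAT, an assumption in this sketch.)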
Why ZFS RAID-Z2?
Enterprise-grade data protection. Can lose 2 drives without data loss. Checksumming catches bit rot.
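A RAID-Z2 pool and its scrub routine could be set up like this; the pool name and device paths are placeholders:

```shell
# Six-disk RAID-Z2: any two drives can fail without data loss
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# A scrub re-reads every block, verifies its checksum, and repairs
# anything that fails from parity -- this is what catches bit rot
zpool scrub tank
zpool status tank    # shows scrub progress and repaired/corrupted counts
```

Scheduling a regular scrub (Debian-based systems ship a monthly cron job for this) keeps silent corruption from accumulating unnoticed.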
Why Dell R730 instead of cloud VPS?
Direct hardware control. No noisy neighbors. Fixed costs = predictable pricing.
Single point of failure?
Yes, and I'm honest about it. Daily backups to offsite storage. Disaster recovery plan in place. Working toward multi-server setup, but being transparent about current state.
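A minimal version of that daily offsite backup could be sketched with vzdump plus an rsync push; the storage name and remote host are assumptions, not the actual setup:

```shell
# Snapshot-mode backup of every guest, zstd-compressed
vzdump --all --mode snapshot --compress zstd --storage local

# Push the dump directory to offsite storage
rsync -a /var/lib/vz/dump/ backup@offsite.example.com:/backups/pve/
```

Proxmox can also schedule backups natively (Datacenter → Backup), which is the more idiomatic route than a hand-rolled cron entry.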
My Background
15 years enterprise IT. Designed data centers for First Data/Fiserv across 3 continents. Veteran-owned.
Got tired of seeing small businesses overpay for hosting with hidden fees and offshore support.
Wanted to prove enterprise infrastructure can run profitably at small scale with complete transparency.
Beta Program
- 50% off for 75 days
- First 20 customers get lifetime 15% discount
- Free migration from current host
Questions I'll Answer
- Network isolation architecture (vmbr0 vs vmbr[X])
- Proxmox vs alternatives
- Disaster recovery approach
- Compliance considerations (PCI, HIPAA)
- Why single server vs distributed
- Backup strategy
Apply for Beta: lightspeedup.com/beta.php
Building in public because accountability matters. AMA!
Transparency update: Just wrapped up ZFS storage maintenance that took ~2 hours instead of the planned 30-45 min. Everything is back online now.
This is exactly the kind of honesty I'm committed to - when things take longer or break, I'll tell you. No hiding behind vague maintenance notices.
For Beta applicants: this was hardware-level maintenance (ZFS pool optimization), and I've learned to budget more time for these operations going forward.
Building in public means showing the bumps along with the wins.