Building CloudSentinel: A Multi-Cloud Security Scanner
As a cybersecurity enthusiast and developer, I wanted to build a tool that could deploy vulnerable cloud infrastructure, scan for misconfigurations, and report findings across AWS, Azure, and GCP. The goal was to simulate real-world security risks and demonstrate automation, planning, and multi-cloud proficiency.
This blog post walks through the planning, development, and implementation of CloudSentinel — highlighting how I structured the project, solved problems, and implemented cloud-agnostic scanning.
Project Motivation
Many cloud security tutorials focus on a single provider, and most security tools are either too high-level or not hands-on. I wanted a system that could:
- Deploy test infrastructure with intentional vulnerabilities
- Scan for misconfigurations automatically
- Aggregate findings and calculate risk scores
- Work consistently across AWS, Azure, and GCP
The resulting tool, CloudSentinel, achieves all of this with a modular Python CLI and Terraform-backed infrastructure.
Planning & Project Structure
I started with a clear step-by-step plan:
mkdir cloudsentinel
cd cloudsentinel
git init
mkdir terraform scanner cli reports docs
touch README.md .gitignore
Key design decisions:
- CLI layer (cli/): handles user commands (deploy, scan, report, destroy, status)
- Scanner engine (scanner/): modular checks per cloud provider
- Terraform directories (terraform/aws, terraform/azure, terraform/gcp): separate deployments for each cloud
- Reports (reports/): structured JSON and CSV outputs
This separation of concerns ensures scalability and maintainability.
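To make the CLI layer concrete, here is a minimal sketch of how the command dispatch could look with `argparse`. The `build_parser` name and the `--provider` flag are illustrative assumptions, not necessarily the exact interface CloudSentinel exposes:

```python
import argparse

def build_parser():
    # One subcommand per CloudSentinel action (hypothetical interface sketch)
    parser = argparse.ArgumentParser(prog="cloudsentinel")
    sub = parser.add_subparsers(dest="command", required=True)
    for cmd in ("deploy", "scan", "report", "destroy", "status"):
        p = sub.add_parser(cmd)
        # Each command optionally targets a single provider, defaulting to all
        p.add_argument("--provider", choices=["aws", "azure", "gcp", "all"],
                       default="all")
    return parser
```

A command like `cloudsentinel scan --provider aws` then parses into a `command`/`provider` pair that the engine can route to the right scanner module.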
AWS Implementation Highlights
I began with AWS to prove the concept:
- Terraform deployment:
- EC2 instance with SSH open to the world
- Intentionally vulnerable security group rules
- Scanner checks:
- SSH open to 0.0.0.0/0
- Open ports
- Public S3 buckets
- Over-permissive IAM roles
- CloudTrail misconfigurations
- Unencrypted EBS volumes
Example Python scanner module:
import json
import subprocess

def scan_open_ssh():
    # Query every security group's ingress rules via the AWS CLI
    result = subprocess.run(
        ["aws", "ec2", "describe-security-groups",
         "--query", "SecurityGroups[*].IpPermissions", "--output", "json"],
        capture_output=True, text=True,
    )
    permissions = json.loads(result.stdout)
    findings = []
    for group in permissions:  # one list of ingress rules per security group
        for rule in group:
            if rule.get("FromPort") == 22:
                for ip_range in rule.get("IpRanges", []):
                    if ip_range.get("CidrIp") == "0.0.0.0/0":
                        findings.append({"issue": "SSH open to internet",
                                         "severity": "HIGH"})
    return findings
Scanner Engine & Modular Architecture
To handle multiple checks without clutter, I refactored the scanner into a modular engine:
scanner/
├── engine.py # orchestrator
├── aws_scanner.py
├── azure_scanner.py
├── gcp_scanner.py
└── checks/
├── open_ssh.py
├── open_ports.py
└── public_storage.py
The engine runs all checks and aggregates findings:
def run_all_checks():
    findings = []
    findings.extend(run_aws_checks())
    findings.extend(run_azure_checks())
    findings.extend(run_gcp_checks())
    return findings
This design allows scalable expansion to new checks or cloud providers.
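One way to keep that expansion cheap is a small check registry, so adding a module under checks/ never requires editing the engine. This is an illustrative pattern, not necessarily how CloudSentinel wires its checks today; `demo_check` is a hypothetical stand-in:

```python
# Hypothetical registry: checks self-register instead of being
# hard-wired into the engine one call at a time.
CHECKS = []

def register_check(fn):
    CHECKS.append(fn)
    return fn

@register_check
def demo_check():
    # Stand-in for a real module like checks/open_ssh.py
    return [{"issue": "demo finding", "severity": "LOW"}]

def run_all_checks():
    # Aggregate findings from every registered check
    findings = []
    for check in CHECKS:
        findings.extend(check())
    return findings
```

With this pattern, a new provider or check only needs the `@register_check` decorator to participate in `run_all_checks()`.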
Azure Integration
Azure required service principal authentication:
az login --service-principal -u <appId> -p <password> --tenant <tenantId>
Terraform resources included:
- Resource Group
- Virtual Network and Subnet
- Vulnerable Network Security Group (open to all inbound traffic)
- Storage Account with public blob access
Scanner modules for Azure:
- nsg_open_ssh.py → detects insecure NSG rules
- public_storage.py → detects public blobs
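As a sketch of what nsg_open_ssh.py might do, the rule inspection can be split from the `az` CLI call so the logic is testable on its own. The function names and the exact rule fields checked here are assumptions based on the JSON that `az network nsg rule list` emits:

```python
import json
import subprocess

def parse_nsg_rules(rules):
    # Flag inbound Allow rules that expose SSH (port 22) to any source
    findings = []
    for rule in rules:
        if (rule.get("direction") == "Inbound"
                and rule.get("access") == "Allow"
                and rule.get("destinationPortRange") in ("22", "*")
                and rule.get("sourceAddressPrefix") in ("*", "0.0.0.0/0", "Internet")):
            findings.append({"issue": "NSG allows SSH from the internet",
                             "severity": "HIGH"})
    return findings

def scan_nsg_open_ssh(resource_group, nsg_name):
    # Fetch the NSG's rules as JSON and hand them to the parser
    result = subprocess.run(
        ["az", "network", "nsg", "rule", "list",
         "--resource-group", resource_group,
         "--nsg-name", nsg_name, "--output", "json"],
        capture_output=True, text=True, check=True,
    )
    return parse_nsg_rules(json.loads(result.stdout))
```

Keeping `parse_nsg_rules` pure means the detection logic can be unit-tested with canned JSON, without an Azure subscription.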
Status and report functions were updated to include Azure:
[CloudSentinel] Checking Azure resources...
Resource Groups: 1
Virtual Machines: 0
Network Security Groups: 1
Storage Accounts: 1
Cloud Environment Status: RESOURCES REMAIN ⚠️
Scan results:
[AWS] SSH open to internet (HIGH)
[AZURE] Public blob access enabled (HIGH)
Risk Score (0-10): 6.0
Overall Risk Level: MEDIUM
GCP Integration
For GCP, I used a service account with Terraform:
gcloud iam service-accounts create cloudsentinel-sa
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
    --member="serviceAccount:cloudsentinel-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/editor"
gcloud iam service-accounts keys create ~/cloudsentinel-sa.json \
    --iam-account cloudsentinel-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com
Terraform resources included:
- Compute Engine VM with public SSH
- GCS bucket with public access
- Firewall rules for open ports
Scanner checks mirrored AWS/Azure:
def run_gcp_checks():
    # Check VMs, Storage, and Networks
    ...
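One of those checks, the public-bucket detection, could look like the sketch below. It assumes the JSON shape returned by `gcloud storage buckets get-iam-policy`; the function names are hypothetical:

```python
import json
import subprocess

# IAM members that make a bucket effectively public
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def parse_bucket_iam(policy):
    # A bucket is public if any IAM binding grants a role to
    # allUsers or allAuthenticatedUsers
    findings = []
    for binding in policy.get("bindings", []):
        if PUBLIC_MEMBERS & set(binding.get("members", [])):
            findings.append({"issue": "GCS bucket grants public access",
                             "severity": "HIGH"})
            break
    return findings

def scan_public_bucket(bucket):
    # Pull the bucket's IAM policy as JSON and inspect its bindings
    result = subprocess.run(
        ["gcloud", "storage", "buckets", "get-iam-policy",
         f"gs://{bucket}", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return parse_bucket_iam(json.loads(result.stdout))
```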
The CLI fully supports deploy, scan, report, destroy, and status for GCP, just as it does for AWS and Azure.
Reporting & Risk Scoring
CloudSentinel generates structured JSON and CSV reports, including:
- Vulnerability list per cloud
- Severity counts (CRITICAL, HIGH, MEDIUM, LOW)
- Normalized risk score (0–10)
- Summary for SOC analysts or auditors
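The aggregation behind those reports can be sketched as a small function that rolls findings up into severity counts; the `build_report` name and output shape are illustrative, not CloudSentinel's exact schema:

```python
from collections import Counter

def build_report(findings):
    # Roll the flat findings list up into the structure the
    # JSON/CSV reports are serialized from
    counts = Counter(f["severity"] for f in findings)
    return {
        "findings": findings,
        "summary": {sev: counts.get(sev, 0)
                    for sev in ("CRITICAL", "HIGH", "MEDIUM", "LOW")},
    }
```

From here, `json.dump` and `csv.DictWriter` produce the two output formats from the same dict.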
Example summary output:
--- Security Summary ---
CRITICAL: 1
HIGH: 3
MEDIUM: 2
LOW: 0
Risk Score (0-10): 7.33
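One way such a normalized 0–10 score could be computed is a severity-weighted average; the weights below are my own illustrative choices, not necessarily the formula CloudSentinel uses:

```python
# Hypothetical severity weights on a 0-10 scale
SEVERITY_WEIGHTS = {"CRITICAL": 10, "HIGH": 7, "MEDIUM": 4, "LOW": 1}

def risk_score(findings):
    # Average the per-finding weights; an empty scan scores 0
    if not findings:
        return 0.0
    total = sum(SEVERITY_WEIGHTS.get(f["severity"], 0) for f in findings)
    return round(total / len(findings), 2)
```

Averaging (rather than summing) keeps the score bounded at 10 no matter how many findings a scan produces.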
Lessons Learned
- Terraform differs slightly across cloud providers; planning is critical
- Modular scanner architecture enables scalable vulnerability detection
- Automation with Python CLI unifies multi-cloud operations
- Validating destroy operations prevents unintended cloud costs
- Structured reporting mirrors real-world SOC workflows
Next Steps
- Expand scanner checks for Azure and GCP
- Integrate with SIEM tools like Splunk
- Build a web dashboard for vulnerability visualization
- CI/CD pipeline to automate scans and reports
Conclusion
CloudSentinel demonstrates the full lifecycle of cloud security testing:
- Multi-cloud deployment with Terraform
- Automated scanning with modular Python
- Risk scoring and structured reporting
- Cleanup and status verification
This project demonstrates problem-solving, planning, and hands-on cloud security skills across three providers, and makes for a strong portfolio piece for security teams and hiring managers alike.
Check out the project on GitHub: CloudSentinel Git Repo