Jonathan Rau

ElectricEye 2.75: Expanding Open Source Multi-Cloud Security

Setting the Stage on ElectricEye

What better topic for my first dev.to post than my first large-scale (and longest-maintained) open-source project, ElectricEye? This post is dedicated to the release of version "2.75" (as in 2-3/4, not a real semantic version).

For the uninitiated, ElectricEye is a Python CLI tool dedicated to assessing cloud environments against hundreds of checks for security, resilience, performance, and general best practices. What started as "the open source AWS Cloud Security Posture Management (CSPM) tool with the most coverage" has (finally) grown into something even better.

ElectricEye has only recently gone "multi-cloud", or "omni-cloud", as there is not a good term for Public Cloud (e.g., AWS, Azure, GCP) and Software-as-a-Service (SaaS) combined. I worked through a half-decade of technical debt and bad Python to make ElectricEye more performant, broader, and more usable and approachable than ever before.

So let's get to it.

Two Point Seven Five

If you check out the Issues, you may notice an ElectricEye 3.0 issue. Like almost every software project in the world, after scoping a high-level Definition of Done like the world's strongest Scrum Master, I quickly realized I was in deep sh*t. There was so much to do.

Between family duties, Merger & Acquisition duties, Board Advisor duties, and regular CISO life (though as of the publishing of this piece I will not be CISO @ Lightspin any longer) -- I've managed to carve out time to take a huge chip out of the "3.0 prerequisites". This "2.75" version of ElectricEye introduces the following:

  • Coverage for Oracle Cloud Infrastructure (200+ checks, 20+ services) and Microsoft's M365 (3 "modules" and 30+ checks). Additional AWS checks have been added for EMR & Redshift Serverless, CodeDeploy, and others.
  • Loads of optimizations that allow ElectricEye to run 40% faster on pre-3.11 Pythons - I've only tested down to 3.8; 3.11 is already pretty damn quick.
  • Expansion of the Attack Surface Monitoring (ASM) modules with an overhauled Shodan evaluation, new reverse-DNS function (no more socket resolving to your private IPs), support for VirusTotal and CISA KEV for malware & vulnerability exploitability enrichment.
  • Complete documentation overhaul for how to use ElectricEye and its Outputs, how to contribute and write your own Auditors, complete with a new Docker-specific section, Output examples, and...
  • New and revised Outputs. Added responsive HTML reports for audit readiness and CSPM-focused reporting, along with Amazon SQS and two types of native Slack integrations for per-finding or summary-report alerting. Multiple Outputs are now supported.
  • Public Docker images on Docker Hub, Amazon ECR Public, and Oracle Cloud Infrastructure Artifact Registry, along with a properly multi-stage Alpine 3.18.0 base image that won't break.
  • Improved security-minded CI to build & scan SBOMs, better-written CodeQL & Dependabot GH Actions, and I actually version pin now...
  • Improved controls mapping across NIST CSF, NIST 800-53 (Rev. 4), AICPA's Trust Services Criteria, and ISO 27001:2013/2017, plus richer information within finding descriptions & remediation. This is in preparation for future controls-mapping work.
  • Re-imagined "global service" and service eligibility awareness for ALL AWS Partitions, even the SC2S/C2S zones. No more running into superfluous errors on endpoint availability or running S3 and CloudFront checks once per Region.
  • Improved error handling, better-written Python, and revised check logic for 100s of checks across ElectricEye - this improved speed and UX/DX and fixed several evaluations that did not work properly.

There are probably some things I am forgetting in there, but overall, what started as a sh*tpost PR to add a few Oracle Cloud checks ended up being one of the most comprehensive PRs I have ever done.

And now for some highlights.

Omni-cloud!

I added GCP checks about two months before writing this; the implementation relied on an insufferable amount of spaghetti code via click command-line arguments and made a general mess of things.

To improve this experience, I wrote a new class (CloudConfig) within the codebase to handle cross-cloud/cross-SaaS integration with various API keys, OAuth tokens, and X.509 certificates, alongside shimming AWS Secrets Manager and AWS Systems Manager Parameter Store for retrieving sensitive values. Expanding the supported credential stores will be an area of focus in the future, but not for a while.
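
To give a flavor of the pattern (a simplified sketch, not the actual CloudConfig code - the function and argument names are mine), pulling a sensitive value from either store boils down to a couple of Boto3 calls:

```python
import boto3

def get_credential(source, name):
    """Fetch a sensitive value from AWS Secrets Manager or SSM Parameter Store."""
    if source == "secretsmanager":
        secret = boto3.client("secretsmanager").get_secret_value(SecretId=name)
        return secret["SecretString"]
    if source == "ssm":
        # WithDecryption handles SecureString parameters transparently
        param = boto3.client("ssm").get_parameter(Name=name, WithDecryption=True)
        return param["Parameter"]["Value"]
    raise ValueError(f"unknown credential source: {source}")
```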

Additionally, I added much better AWS multi-Account/multi-Region support with a way to build partition-aware Boto3 Sessions. This also keeps the read-only evaluation permissions away from your local credentials, which will have read and write access to secrets, buckets, queues, and the AWS Organizations APIs. You'll need to be a Delegated Administrator (or be in the AWS Management Account) to make use of the OU- and Organizations-wide evaluations, however.
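
The underlying mechanic looks roughly like this (a minimal sketch; the role and session names are illustrative, not ElectricEye's actual defaults): assume a read-only role in each target Account and hand back a Session pinned to the right Partition and Region.

```python
import boto3

def partition_aware_session(account_id, region, partition="aws", role_name="ElectricEyeReadOnly"):
    """Assume a read-only role in a member Account and return a Boto3 Session for it."""
    # The ARN must carry the Partition (aws, aws-us-gov, aws-cn, etc.) to work everywhere
    role_arn = f"arn:{partition}:iam::{account_id}:role/{role_name}"
    creds = boto3.client("sts", region_name=region).assume_role(
        RoleArn=role_arn, RoleSessionName="ElectricEye"
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
        region_name=region,
    )
```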

This also laid the path for reducing the number of arguments needed to run evaluations and made ElectricEye more portable. You'll need to put in a tiny bit of groundwork to fill out a TOML, but you can then parallelize with per-Auditor, per-Assessment Target, or per-Check CLI filters across multiple Regions and Accounts (or their equivalents in other environments).
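
If you're curious, consuming that TOML from Python is only a few lines (a sketch; the filename and keys here are illustrative, not the exact schema):

```python
import tomllib  # Python 3.11+; earlier interpreters can use the "tomli" backport

with open("external_providers.toml", "rb") as f:
    config = tomllib.load(f)

# e.g., fan out evaluations per Account and Region pulled from the TOML
for account in config["aws"]["accounts"]:
    for region in config["aws"]["regions"]:
        print(f"would evaluate {account} in {region}")
```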

While the integration for different providers is not turnkey, it helps me write them that much faster and keeps the discrete logic differences apart from each other. In fact, outside of pulling credentials from AWS services, you do not need to run ElectricEye from AWS or with AWS creds at all.

That said, since every check is written against the AWS Security Finding Format (ASFF), your current Session's AWS creds will be used to fill in the parts of the ASFF that require Partition, Region, and Account information. This is only required for Security Hub - so it's jarring - but necessary.

Lastly, I've just about squashed all environment variables. I will still set some, but you no longer need to preset a dozen or more of them in your Dockerfiles or container orchestration tools.

Outputting findings

While some of the changes happened in my last "2.5" release, I've been revamping all Outputs to make them more performant and responsible. What I mean by that is "upserting" findings into PostgreSQL and MongoDB, adding better support for TLS or cloud-native CAs for certain services (like MongoDB vs AWS DocumentDB), and adding more Outputs dedicated to the asset management side of the house.
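
For the PostgreSQL side, "upserting" is essentially an INSERT ... ON CONFLICT keyed on the finding Id - a rough sketch (the table and column names are hypothetical; the ASFF fields referenced are real):

```python
import psycopg2

def upsert_finding(conn, finding):
    """Insert a finding, or update it in place if its Id already exists."""
    sql = """
        INSERT INTO findings (id, title, severity, updated_at)
        VALUES (%s, %s, %s, %s)
        ON CONFLICT (id) DO UPDATE SET
            title = EXCLUDED.title,
            severity = EXCLUDED.severity,
            updated_at = EXCLUDED.updated_at;
    """
    with conn.cursor() as cur:
        cur.execute(sql, (
            finding["Id"],
            finding["Title"],
            finding["Severity"]["Label"],
            finding["UpdatedAt"],
        ))
    conn.commit()
```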

I fixed the multi-output issue - the one where more than one Output was not supported - so now you can totally generate CSVs, JSON, and HTML, and send findings to Security Hub, Slack, PostgreSQL, and more.

I added back a native Slack integration that I removed when I shuttered the "extras" part of ElectricEye, which totally relied on AWS Security Hub plus Lambda and EventBridge. Using the new Slack Apps, the integration can send filtered findings (by state and/or severity) or a summary report after each run.
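
Under the hood, a per-finding Slack App message is just a chat.postMessage call with a bot token - a hedged sketch (the filtering thresholds are illustrative):

```python
import requests

def alert_slack(bot_token, channel, finding):
    """Post a single finding to Slack if it is active and severe enough."""
    if finding["RecordState"] != "ACTIVE":
        return
    if finding["Severity"]["Label"] not in ("HIGH", "CRITICAL"):
        return
    resp = requests.post(
        "https://slack.com/api/chat.postMessage",
        headers={"Authorization": f"Bearer {bot_token}"},
        json={
            "channel": channel,
            "text": f"*{finding['Title']}*\nSeverity: {finding['Severity']['Label']}",
        },
        timeout=10,
    )
    resp.raise_for_status()
```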

I also added support for Amazon SQS with batched writes; this will provide the greatest flexibility for portaling findings around AWS, especially if you're well-versed in parsing the ASFF, which all ElectricEye findings conform to.
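
SQS caps SendMessageBatch at 10 entries per call, so batching looks something like this (a sketch, not the exact Output code):

```python
import json
import boto3

def send_to_sqs(queue_url, findings):
    """Write ASFF findings to an SQS queue in batches of 10."""
    sqs = boto3.client("sqs")
    for i in range(0, len(findings), 10):
        entries = [
            {"Id": str(index), "MessageBody": json.dumps(finding, default=str)}
            for index, finding in enumerate(findings[i:i + 10])
        ]
        sqs.send_message_batch(QueueUrl=queue_url, Entries=entries)
```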

Finally, I added a second canned HTML report which uses some not-so-clever HTML tables, responsive CSS, and matplotlib to generate framework-level and control-level reports and impact analysis for your ElectricEye evaluations. It is not an audit report, but can help you get ready for one, or maybe decide to jettison controls you suck at operating.

Don't do that last part. Or I'll beat up your CISO. (In Minecraft).

I plan on revising the DynamoDB and CSV Outputs, adding Microsoft Teams, and selectively adding some non-AWS Outputs for other clouds. Maybe.

Expanding Attack Surface Monitoring

In the past, I added the Shodan integration as a callback to some old code I wanted to use for an AWS blog post once upon a time. Shodan and cloud assets are not all that exciting, until they become exciting, as is the case with most things in cloud security.

Often you'll find that the giant public DB or ELK stack attributed to your IP actually belongs to the previous owner of the IP within whatever regional public EC2 CIDR block it came from.

Not that it deters adversaries from swallowing the data up in their automation and smashing your edge.

I added NMAP support to do a custom "Top 20" port scan, with future plans to add more functionality one day. While it is early days, I want to revisit how I can commoditize low-hanging-fruit ASM, so I've started to cherry-pick other services to use.
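
If you want to see the shape of it, a top-20 scan via the python-nmap wrapper is only a few lines (a sketch assuming an unauthenticated TCP scan; ElectricEye's actual implementation may differ):

```python
import nmap  # the python-nmap wrapper around the nmap binary

def scan_top_ports(host):
    """Scan the 20 most common ports on a host and return their states."""
    scanner = nmap.PortScanner()
    # -Pn skips host discovery; --top-ports 20 uses nmap's own popularity ranking
    scanner.scan(hosts=host, arguments="-Pn --top-ports 20")
    if host not in scanner.all_hosts():
        return {}
    # per-port state: open, closed, or filtered
    return scanner[host].get("tcp", {})
```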

I've added Checks that compare CVEs discovered by services such as Amazon Inspector, Oracle Cloud VSS, and Microsoft 365 Defender against the CISA Known Exploited Vulnerabilities (KEV) catalog. I've also added a way to compare SHA-256 hashes from Oracle Artifact Registry to hashes in VirusTotal. I will be working on this as a point of focus in the second half of the year, making certain parts more pluggable and performant, including ZAP and security-header checks.
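
The KEV comparison itself is pleasantly simple, since CISA publishes the catalog as a public JSON feed - roughly:

```python
import requests

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def load_kev_cves():
    """Download the CISA KEV catalog and return the set of known-exploited CVE IDs."""
    catalog = requests.get(KEV_URL, timeout=30).json()
    return {vuln["cveID"] for vuln in catalog["vulnerabilities"]}

def known_exploited(finding_cves, kev_cves):
    """Return only the CVEs from a scanner finding that appear in the KEV catalog."""
    return [cve for cve in finding_cves if cve in kev_cves]
```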

I completely rewrote the reverse-DNS lookups in ElectricEye to use Google DNS' public API and break away from using socket. While it was not always an issue, using cloud-specific DNS (like AWS VPC resolvers) could often give you back private IPs or completely wrong or empty responses. It's not perfect, but you'll miss fewer checks, especially when it comes to DNS-only endpoints like Amazon MQ or AWS ALBs.
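
The mechanics are straightforward: reverse the octets, append in-addr.arpa, and ask dns.google for the PTR record (a minimal sketch, IPv4 only):

```python
import requests

def reverse_dns(ip):
    """Resolve an IPv4 address to a hostname via Google's DNS-over-HTTPS API."""
    # PTR records live under the reversed octets in the in-addr.arpa zone
    ptr_name = ".".join(reversed(ip.split("."))) + ".in-addr.arpa"
    answers = requests.get(
        "https://dns.google/resolve",
        params={"name": ptr_name, "type": "PTR"},
        timeout=10,
    ).json().get("Answer", [])
    return answers[0]["data"].rstrip(".") if answers else None

print(reverse_dns("8.8.8.8"))  # -> dns.google
```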

I have a working prototype for Nmap Scripting Engine (NSE) scripts as well as basic ZAP scans when connecting to one or more remote ZAP hosts. More to come on that; I am also thinking about a better vulnerability prioritization engine that will bring in more exploit sources, so-called "SOCMINT", and deception technology.

Performance, performance, performance

Python is a fun language. It is also the only language I know. I taught myself by writing a SOAR engine on AWS Security Hub as my first-ever project. Not the best idea, in retrospect.

I picked up a lot of bad habits: endlessly nesting for loops, handling errors with triple-nested try/except-continue logic, and changing native types when I did not need to. Not using f-strings, not using list comprehensions, importing whole libraries instead of specific modules - pretty much every optimization anti-pattern.
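
For a contrived flavor of the kind of cleanup involved (not actual ElectricEye code):

```python
# Before: nested loops, needless type churn, and string concatenation
names = []
for bucket in buckets:
    for tag in bucket["Tags"]:
        if tag["Key"] == "Name":
            names.append(str(tag["Value"]))
print("Found " + str(len(names)) + " named buckets")

# After: one list comprehension and an f-string
names = [tag["Value"] for bucket in buckets for tag in bucket["Tags"] if tag["Key"] == "Name"]
print(f"Found {len(names)} named buckets")
```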

With every PR I ended up finding more things to fix, be it evaluation logic, f-strings, or improving consistency and readability. I made a concerted effort to do better with all new Auditors that I write for new clouds, but I have also gone back to fix and (in some cases) totally revise the AWS Auditors - the "gen 1" evaluation logic in ElectricEye.

Benchmarking on Python 3.8, 3.9, and 3.10, I was getting on average 40% quicker evaluations - even with the more I/O-intensive tasks like running detect-secrets against service environment variables written to file or running NMAP. I am trying to eke out every bit of performance, as not everyone will be running ElectricEye on an m5.4xlarge instance or an over-provisioned Fargate deployment. There is still more work to do, but it vastly improves the end-user experience when your tool runs quickly, without errors, and with lower overhead.

To add to this, I also revamped how AWS-specific checks handle service eligibility. It is no secret that AWS has multiple Partitions (though only three are commonly known: Commercial, US GovCloud, and the China Regions), each with varying Regional support for different services. Old services like CloudSearch or odd services like Amazon Managed Blockchain obviously do not have the Regional support (nor customer usage) that something like Amazon S3 or Amazon EC2 will.

The new check uses the endpoints.json file directly from botocore, plus some extra parsing, as endpoints are not 1:1 with service names. In the past, ElectricEye (and others) used the "AWS Global Infrastructure" SSM Parameters, which were not always accurate nor available in other Partitions. Now, whether you're working on TS/SCI targeting systems for the IC in SC2S, multimedia projects in AWS China, or ATO-on-AWS in US GovCloud: your ElectricEye evaluations for AWS run only the Auditors that can be run.
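
Reading Region availability straight out of botocore looks roughly like this (a sketch; note the caveat above that endpoint prefixes are not always 1:1 with service names):

```python
from botocore.loaders import create_loader

def regions_for_service(service, partition="aws"):
    """Return the Regions where a service has an endpoint, per botocore's endpoints.json."""
    endpoint_data = create_loader().load_data("endpoints")
    for part in endpoint_data["partitions"]:
        if part["partition"] == partition:
            return sorted(part["services"].get(service, {}).get("endpoints", {}))
    return []

print(regions_for_service("cloudsearch"))  # a short list versus, say, "ec2"
```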

Speaking of S3 and other "global" services such as AWS IAM and CloudFront: these Auditors will now run at most once, with their Regions hardcoded to aws-global (or the Partition-specific "global endpoint" such as aws-gov-global or aws-iso-global). Setting them all to us-east-1 was jarring (especially when you're an "EU only" type of shop), doesn't work for non-Commercial Partitions (whose "global regions" are poorly documented), and poorly reflected those services.

With just S3, IAM, and CloudFront, that is more than 25 Checks per Region that you only need to run once - hence why this is in the performance section. Other global services include "Global"-scoped WAFv2, certain Shield Advanced APIs, Trusted Advisor (well, Support), and AWS Global Accelerator.
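
Conceptually, the run-once logic boils down to a lookup table - a hypothetical sketch using the "global" pseudo-Regions mentioned above (the service and Partition names are illustrative):

```python
GLOBAL_SERVICES = {"s3", "iam", "cloudfront", "globalaccelerator", "support"}
GLOBAL_REGION = {"aws": "aws-global", "aws-us-gov": "aws-gov-global", "aws-iso": "aws-iso-global"}

def regions_to_evaluate(service, partition, regions):
    """Collapse 'global' services to a single pseudo-Region; run everything else per Region."""
    if service in GLOBAL_SERVICES:
        return [GLOBAL_REGION.get(partition, "aws-global")]
    return regions
```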

ElectricEye on Docker

So, ElectricEye could always run on Docker; the Dockerfile has been in there in various flavors since day one. ElectricEye at first was totally centered around running on AWS Fargate (well, ECS with a Fargate launch type). The latest incarnation is Alpine 3.18.0 with a properly implemented multi-stage image that includes as few libraries as possible.

As ElectricEye expands to "omni-cloud" coverage, the dependencies start to bloat, and with the recent additions of rich HTML reporting (using matplotlib and pandas in the back) I was loath to include them, as using them with Alpine has been hellacious. Pro tip: use the apk packages and not the PyPI packages if you can help it.

So the better-implemented multi-stage build helps speed up builds, manages dependencies a bit better, and allows you to bring ElectricEye anywhere. I've added documentation on using Docker where you can safely supply your own AWS credentials as well as overwrite the TOML configuration file that is pre-built into ElectricEye.

Speaking of pre-built, ElectricEye is now on Docker Hub, Amazon ECR Public, and Oracle Cloud Infrastructure Artifact Registry (for Containers) - that last one is a meme and a mouthful all at once. Each of these is built and tagged with commit hashes and latest via GitHub Actions, and I have also added Grype and Syft support to build and scan the SBOM produced from the built image, for transparency and usage.

I will add additional registries as they make sense (cost is the main driver) and to give everyone a "compliant" offering, inasmuch as using a specific registry may be dictated by internal policy or otherwise.

Do note that for the file-based reporting such as JSON, CSV, and HTML, you should supply s3:PutObject permissions to your ElectricEye profile so you can easily offload the files. Or, you know, run it locally instead of in a container. Or use a PV. Or whatever.

Eventually, Kubernetes is on the roadmap as I build out other parts of ElectricEye for "3.0".

Future Plans

Well, ideally, if someone wants to buy the IP and procure my services alongside it, I would not be opposed. A few have kicked the tires on the idea but nothing has made it across that finish line. Tell your CorpDev folks to come hit me up?

Acquisition sh*tposting aside, that is not why I wrote ElectricEye and it is not why I continue to maintain it.

The security industry has commoditized CSPM drastically, but I still find it leaves much to be desired. Sure, you get CSPM "for free" in a lot of places, but at the cost of low service coverage, poorly documented APIs, and missing support for some of your clouds - let alone getting SSPM, ASM, and asset management for free.

I give just about all of it away. Sure, there are "secret" features on local branches on personal devices that I hold back, but the idea was always to make the broadest and deepest CSPM tool - and now, with SSPM and other capabilities, I hope it'll be the choice for security programs of all sizes and maturities. I cannot say by whom, but it is being used in every single AWS Partition and by companies with footprints as large as 3K AWS Accounts.

Pontification aside, I am still building towards "3.0", which means a bare-bones web application that can be easily spun up by any Cloud or Security Engineer/Analyst, together with a performant and secure API and as many other "enterprise-y" features as I can get in, such as MFA and SAML/OIDC. I have NO IDEA how to do most of that, but I will figure it out.

In a loose order of importance, here are some things I am working on, as I have time, for ElectricEye on the "road to three-dot-oh".

Expanded support for GCP

I plan to flesh out GCP to 20-30 services before I am happy with it. This will include the typical ASM-specific Auditors as well as the Firewall Rule evaluation.

Unlike AWS, GCP does not have a ton of superfluous or overlapping service offerings. I will also add a new SSPM Assessment Target in the form of Google Workspace, along with trying to extend support to Folder- and Organization-level evaluation.

Expanded support for AWS

I want to "finish" AWS, inasmuch as covering all of the likely-to-be-used services and adding support for new(ish) services.

I also want to move Checks closer to their service-oriented Auditor: for example, moving the ELBv2-specific checks within Shodan and Shield Advanced into the ELBv2 Auditor, and moving more EC2-adjacent checks, such as AMIs and EBS volumes, into the EC2 family. A few Auditors need a complete rewrite, such as IAM, Trusted Advisor, Global Accelerator, CodeBuild, and RDS.

Improved Controls Mapping & expansion

This is a two-fold change I plan on making. Firstly, to increase transparency, I will be "reconfirming" the mapping of NIST CSF controls (and maybe doing CSF 2.0 if it's ready by then). CSF controls are incredibly high-level versus specific benchmarks such as the CIS Benchmarks or even the CIS Controls; hell, they're even more high-level than some controls or objectives within the HIPAA "Security Rule" or GDPR Articles. CSF is great in that NIST supports mapping to a lot of popular frameworks and standards, and CIS maps in reverse from it (as does AICPA, the "SOC 2 people").

From there, I will "right-size" the mapping into subsequent frameworks, starting with the CIS Critical Security Controls v8, and then jump across other frameworks from there.

I feel CIS has a great mapping methodology: they even qualify their mappings as a "subset" or "superset" and err on the side of "under-mapping". This will allow me to venture into frameworks I do not have much exposure to, such as financial and banking-specific frameworks and OCONUS standards like the UK NCSC Cyber Essentials, Australian Government standards, and others.

Secondly, I need to create an engine to support this mapping "just-in-time". The AWS Security Finding Format (ASFF) only supports up to 32 values within the Python list (well, JSON array) in Compliance.RelatedRequirements, which is why I stopped at the ElectricEye "core four" of NIST CSF (v1.1), NIST SP 800-53 (Rev. 4), ISO 27001:2013/2017 Annex A, and AICPA's 2017/2020 Trust Services Criteria.
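
That engine will need to respect the 32-item cap for the Security Hub Output specifically - conceptually something like this sketch (the framework prefixes are illustrative, not ElectricEye's exact strings):

```python
CORE_FOUR_PREFIXES = (
    "NIST CSF V1.1",
    "NIST SP 800-53 Rev. 4",
    "AICPA TSC",
    "ISO 27001:2013",
)

def related_requirements_for_security_hub(mapped_controls):
    """Keep the 'core four' mappings first, then truncate to ASFF's 32-item cap."""
    core = [c for c in mapped_controls if c.startswith(CORE_FOUR_PREFIXES)]
    extra = [c for c in mapped_controls if not c.startswith(CORE_FOUR_PREFIXES)]
    return (core + extra)[:32]
```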

In practice, Security Hub outputs will only ever have the "core four", but all other ElectricEye outputs will have the newer frameworks.

I would like to get to a dozen additional frameworks, not counting bumping 800-53 to Rev. 5 and CSF to v2.0 whenever that is ready. I will add some "generics" such as CSA's CCM, other NIST SPs, some US Federal and other countries' standards, the cloud-specific CIS Benchmarks, and whatever else I can get working. I end up parsing a lot of these frameworks & mappings by hand; not even GPT-4 is reliable for this.

CIS has both their explorer and formal whitepapers & Excel spreadsheets, NIST offers a few native mappings, and other control framework authors provide forward-mappings between versions, such as ISO 27001:2013/2017 to :2022 and PCI-DSS v3.x to v4.0. So I should at least be able to rely on their judgments instead of my own "just trust me, bro!" mapping to NIST CSF, which kinda sorta shouldn't be mapped against anyway.

Add Support for Azure & other SaaS

Azure is a major gap; I did Oracle ahead of it as a joke and ended up enjoying it a bit too much. Other enterprise SaaS such as Workday ERP, Salesforce, HubSpot CRM, and then a proper GitHub App are on the roadmap.

For Azure, I intend to use Enterprise Applications and to hit the same 20-30 service benchmark for a "v1". I personally despise Azure and its bad documentation, but I will make do. It's popular for a reason.

SSPM checks are usually not broadly encompassing, as there is all sorts of subscription and SKU compatibility to account for, but I will use M365 as my model going forward, hopefully delineating exact permissions and license/SKU/subscription tiers.


There is more to do, but for now, I don't want to over-promise as I have in the past.

All for now, I'll see you in the community.

Stay Dangerous.
