
If you spend enough time in AWS, your workflow usually ends up split across too many surfaces.
You review infrastructure in the AWS Console, switch accounts with local profiles or assume-role, manage Terraform from a terminal, inspect drift with a mix of plan, state, and manual checks, and keep a growing pile of scripts around for the gaps. None of those tools are bad on their own, but the overall operator experience is fragmented.
That is the problem I built AWS Lens to address.
Repo: https://github.com/BoraKostem/AWS-Lens
AWS Lens is a local-first desktop app for AWS operators, with Terraform treated as a first-class workflow, not an afterthought. The goal is not to replace the AWS Console or the Terraform CLI. The goal is to create an operator workspace that reduces context switching and makes day-to-day cloud work more coherent.
Why I built it
Most AWS tooling falls into one of these buckets:
- inventory viewers
- cloud dashboards
- Terraform runners
- security/compliance wrappers
- terminal-heavy internal tools
What I wanted was something more operational: a single place where I could move between AWS account context, Terraform plans, drift, governance checks, service inspection, and terminal follow-up without losing state.
I also wanted it to stay local-first.
AWS Lens reads local AWS profiles, works with your local Terraform projects, and stores app state locally. It does not require a hosted control plane. Temporary assumed-role credentials stay in memory instead of being written back into AWS config files.
That local-first model matters because the target workflow is not “cloud management as a service.” It is “serious operators working from their own machine with the tools and credentials they already use.”
Terraform as a first-class service

The core idea behind AWS Lens is that Terraform should be treated like a primary operator workflow.
In many tools, Terraform support means a button that runs plan and dumps output into a panel. That was not enough for what I wanted. I wanted a real workspace around the Terraform lifecycle: project selection, variable management, plan analysis, drift review, governance checks, state inspection, and run history.
Project discovery and command execution
AWS Lens can discover and manage local Terraform project folders from a visual project browser. From there, it supports common Terraform operations including:
- init
- plan
- apply
- destroy
- import
- state
- force-unlock
- version

The important part is not just that these commands can run, but that they run inside a more structured workflow.
Instead of bouncing between directories and shell history, you can work from a project-centric UI that keeps track of:
- the selected project
- workspace selection
- command output
- Git metadata like branch, commit, dirty state, and changed files
- execution history and outcomes
Long-running apply and destroy operations are also tracked explicitly, including handling around app shutdown, so they are not treated like throwaway terminal commands.
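As a rough sketch of what "command execution with structured history" means here (this is an illustration, not the app's actual executor; `ProjectRunner` and `RunRecord` are hypothetical names, and the example runs the Python interpreter instead of terraform so it stays self-contained):

```python
import subprocess
import sys
import time
from dataclasses import dataclass, field

@dataclass
class RunRecord:
    """One entry in the execution history."""
    argv: list
    exit_code: int
    duration_s: float
    output: str

@dataclass
class ProjectRunner:
    """Runs commands in a project directory and records their history."""
    cwd: str
    history: list = field(default_factory=list)

    def run(self, argv):
        start = time.monotonic()
        proc = subprocess.run(argv, cwd=self.cwd, capture_output=True, text=True)
        record = RunRecord(argv, proc.returncode, time.monotonic() - start, proc.stdout)
        self.history.append(record)
        return record

# In AWS Lens the argv would be something like ["terraform", "plan"];
# here we invoke Python itself so the sketch runs anywhere.
runner = ProjectRunner(cwd=".")
rec = runner.run([sys.executable, "-c", "print('hello')"])
```

The point of the structure is that every run leaves behind a record you can query later, instead of scrollback you lose when the terminal closes.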
Variable sets and secret-backed inputs
Terraform variables are often where real-world workflows become messy. Teams end up with scattered .tfvars files, environment-specific overlays, local conventions, and a mix of plaintext values and secret references.
AWS Lens adds structure here through named variable sets with a base layer and environment-specific overlays. Variables can be edited inline, with sensitive values masked and inputs handled in a more type-aware way than raw text blobs.
It also supports pulling runtime secret values from:
- AWS Secrets Manager
- AWS Systems Manager Parameter Store

That makes it possible to build variable configurations that stay close to your AWS operational context instead of maintaining entirely separate secret-loading habits.
The workspace also detects missing required variables before you get deep into a failed plan or apply cycle.
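The base-plus-overlay model is easy to picture as a merge where the overlay wins, followed by a check for required variables that neither layer supplies. A minimal sketch (the helper name and shapes are illustrative, not the app's internals):

```python
def resolve_variables(base, overlay, required):
    """Merge a base variable set with an environment overlay and
    report any required variables still missing after the merge."""
    merged = {**base, **overlay}  # overlay values win on conflicts
    missing = sorted(v for v in required if v not in merged)
    return merged, missing

base = {"region": "eu-west-1", "instance_type": "t3.micro"}
prod_overlay = {"instance_type": "m5.large"}
merged, missing = resolve_variables(
    base, prod_overlay, required={"region", "instance_type", "db_password"}
)
# instance_type comes from the overlay; db_password is flagged as missing
```

Surfacing `missing` before running Terraform is what saves you from discovering the gap halfway through a failed plan or apply.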
Plan visualization and destructive-change analysis
One of the biggest day-to-day pain points with Terraform is reading plans quickly and confidently.
Raw plan output is useful, but it is still easy to miss important changes, especially in larger workspaces. AWS Lens adds a plan visualization layer that helps surface the shape of a plan more clearly.
It supports multiple execution modes, including:
- standard plan
- refresh-only
- targeted plan
- replace-oriented flows
Plans can be saved and compared, and the app groups change summaries by module, action type, and resource. It also applies heuristics to call attention to risky patterns such as:
- destructive changes
- replacements
- delete-heavy operations
This does not replace the Terraform plan itself. It helps operators review it with better visibility.
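Terraform's machine-readable plan (`terraform show -json`) exposes each resource change's `actions` list, which is enough raw material for this kind of grouping. A simplified sketch of the classification idea (the heuristics and function names are illustrative, not AWS Lens's actual logic):

```python
def classify_change(actions):
    """Map Terraform's change.actions list to a risk-oriented label."""
    acts = set(actions)
    if acts == {"delete", "create"}:
        return "replace"  # delete+create in either order is a replacement
    if "delete" in acts:
        return "destroy"
    if "create" in acts:
        return "create"
    if "update" in acts:
        return "update"
    return "no-op"

def summarize(resource_changes):
    """Count changes per label and flag plans with destructive actions."""
    summary = {}
    for rc in resource_changes:
        label = classify_change(rc["change"]["actions"])
        summary[label] = summary.get(label, 0) + 1
    risky = summary.get("destroy", 0) + summary.get("replace", 0)
    return summary, risky > 0

# Shape mirrors the resource_changes entries in Terraform's JSON plan output.
plan = [
    {"address": "aws_instance.web", "change": {"actions": ["delete", "create"]}},
    {"address": "aws_s3_bucket.logs", "change": {"actions": ["update"]}},
]
summary, has_risk = summarize(plan)
```

Even this toy version makes the replacement jump out in a way a wall of plan text does not.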
Drift reconciliation against live AWS
Drift is one of the most important parts of the app.
Terraform users generally know drift matters, but in practice drift workflows are awkward. You notice it after something breaks, infer it from unexpected plan output, or discover it while comparing state to live infrastructure manually.
AWS Lens introduces a dedicated drift reconciliation workspace that compares Terraform state against live AWS resources across a broad set of supported resource types.
Current direct support includes resource categories like:
- EC2 instances
- Lambda functions
- EKS clusters
- ECS references through Terraform state
- VPCs, subnets, route tables, security groups, internet gateways, NAT gateways, transit gateways, and network interfaces
- S3 buckets
- ECR repositories
- RDS instances and clusters
Each resource is classified into statuses such as:
- in_sync
- drifted
- missing_in_aws
- unmanaged_in_aws
- unsupported
It also distinguishes between levels of confidence, including whether a finding is verified, inferred, or currently unsupported.
That distinction matters. I did not want the app to pretend it had perfect provider-level parity where it does not. Verified findings come from direct comparisons between Terraform state and live AWS inventory. Inferred findings are clearly labeled as heuristics so operators understand what is strong evidence and what is a likely signal.
Drift results can include:
- attribute-level differences
- tag drift
- unmanaged resources discovered in AWS
- missing resources referenced in Terraform state
- trend tracking across snapshots over time

From a practical operations perspective, this turns drift into something you can inspect as a workflow, not just something you infer after the fact from a surprising plan.
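The status classification itself reduces to comparing what Terraform state claims against what the live AWS inventory returns. A minimal sketch under that assumption (the function and attribute shapes are hypothetical, not the app's comparison engine):

```python
def drift_status(state_attrs, live_attrs):
    """Classify one resource by comparing Terraform state attributes
    against the attributes observed in live AWS."""
    if state_attrs is None and live_attrs is not None:
        return "unmanaged_in_aws", {}   # exists in AWS, absent from state
    if state_attrs is not None and live_attrs is None:
        return "missing_in_aws", {}     # in state, gone from AWS
    diffs = {k: (state_attrs[k], live_attrs.get(k))
             for k in state_attrs if state_attrs[k] != live_attrs.get(k)}
    return ("drifted", diffs) if diffs else ("in_sync", {})

status, diffs = drift_status(
    {"instance_type": "t3.micro", "tags": {"env": "prod"}},
    {"instance_type": "t3.small", "tags": {"env": "prod"}},
)
# someone resized the instance outside Terraform: status is "drifted"
```

Returning the attribute-level `diffs` alongside the status is what makes the finding reviewable rather than just an alert.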
Governance and safety checks before apply
Terraform execution in production usually needs more than syntax validity. Teams want policy and governance signals before they approve changes.
AWS Lens detects and integrates with common tools such as:
- terraform validate
- tflint
- tfsec
- checkov

These checks can be configured with blocking versus optional behavior, and findings are categorized by severity. The app can prevent terraform apply when critical checks fail.
This is another case where the goal is not to replace the underlying tools. The goal is to make them part of a single pre-apply workflow instead of a scattered set of separate commands and CI assumptions.
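The blocking-versus-optional gate can be pictured as a simple severity threshold applied only to tools you have marked as blocking. A sketch of that decision (hypothetical helper; the rule IDs are real tfsec/tflint rule names used as sample data):

```python
def gate_apply(findings, blocking_tools, threshold="critical"):
    """Decide whether apply may proceed, given check findings and
    the set of tools whose failures should block."""
    order = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    blockers = [f for f in findings
                if f["tool"] in blocking_tools
                and order[f["severity"]] >= order[threshold]]
    return (len(blockers) == 0), blockers

findings = [
    {"tool": "tfsec", "severity": "critical",
     "rule": "aws-s3-enable-bucket-encryption"},
    {"tool": "tflint", "severity": "medium",
     "rule": "terraform_unused_declarations"},
]
# tfsec is blocking, tflint is advisory in this configuration
ok, blockers = gate_apply(findings, blocking_tools={"tfsec"})
```

The advisory tflint finding still surfaces in the results; it just does not veto the apply.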
State inspection, operations, and backups
State is one of those areas everyone touches carefully, but often without much ergonomic support.
AWS Lens provides visibility into raw Terraform state JSON and parsed resource inventory. You can browse managed and data resources, inspect addresses and attributes, and review tags and structure in a more navigable format.
It also supports:
- automated state backups
- state lock visibility
- state operations history
- audit-oriented tracking of what happened

This is useful both for troubleshooting and for building confidence in the overall workflow. State should not feel like an opaque file you only crack open in emergencies.
Run history and auditability
Every Terraform command executed through the workspace is recorded with metadata such as:
- timestamp
- arguments, with sensitive values redacted
- exit code
- duration

That turns command execution into a trackable history instead of disposable terminal scrollback. For teams or solo operators revisiting changes later, that context becomes much easier to follow.
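Redacting sensitive arguments before they hit the history is the detail worth getting right. One way to sketch it, assuming a name-based heuristic over `-var=` flags (this pattern is my illustration, not the app's actual redaction rules):

```python
import re

# Flag -var arguments whose variable NAME suggests a secret.
SENSITIVE = re.compile(
    r"^-var=(?P<name>[^=]*(password|secret|token|key)[^=]*)=.*$",
    re.IGNORECASE,
)

def redact_args(argv):
    """Return a copy of argv with sensitive -var values masked."""
    out = []
    for arg in argv:
        m = SENSITIVE.match(arg)
        out.append(f"-var={m.group('name')}=***" if m else arg)
    return out

redacted = redact_args(
    ["terraform", "apply", "-var=db_password=hunter2", "-var=region=eu-west-1"]
)
```

Matching on the variable name rather than the value keeps ordinary inputs like `region` readable in the history.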
AWS operator workspace beyond Terraform
While Terraform is the core differentiator, AWS Lens is not only a Terraform app.
It also provides a broader AWS operator workspace for navigating service inventories, switching identity context, comparing accounts or regions, and inspecting infrastructure directly.
Profile and region context
The app reads local AWS profiles from standard shared config and credentials files. From there, you can select a profile, choose a region, and carry that context through the rest of the UI.
That shared context matters because it means the service consoles, comparison tools, and terminal are all operating from the same selected identity and region instead of behaving like isolated tools.
The app also supports app-managed credential profiles directly from the desktop interface, along with pinning frequently used services in the sidebar for faster navigation.
Session Hub for assume-role workflows

Cross-account access is a normal part of AWS operations, but it is often clumsy in practice. Session Hub was built to make that smoother.
It lets you:
- save assume-role targets locally
- assume roles through STS on demand
- track active and expired sessions
- activate assumed sessions as the current app context
- open the embedded terminal with the assumed-role context applied
A key design choice here is security and locality:
- saved target metadata is stored locally
- temporary STS credentials are held in memory only
- temporary credentials are not written back to AWS shared config files
That makes Session Hub useful for real cross-account operational work without turning it into a credential persistence tool.
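The in-memory model boils down to holding the temporary credentials in a plain object with an expiry check, never serializing them. A sketch of that shape (the dataclass is hypothetical; in a real flow the fields would be filled from an STS AssumeRole response, e.g. boto3's `sts.assume_role(...)["Credentials"]`):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AssumedSession:
    """Temporary credentials held in memory only; never written to disk."""
    role_arn: str
    access_key_id: str
    secret_access_key: str
    session_token: str
    expiration: datetime

    def is_expired(self, now=None):
        now = now or datetime.now(timezone.utc)
        return now >= self.expiration

# Dummy placeholder values; real ones come from the STS response.
session = AssumedSession(
    role_arn="arn:aws:iam::123456789012:role/ops",
    access_key_id="ASIA-EXAMPLE",
    secret_access_key="example-secret",
    session_token="example-token",
    expiration=datetime.now(timezone.utc) + timedelta(hours=1),
)
```

Tracking `expiration` explicitly is what lets the UI distinguish active from expired sessions instead of failing mid-command.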
Compare Workspace
Another operational feature is the Compare Workspace, which allows side-by-side inspection of two AWS contexts.
Each side can point to a base profile or an active assumed session, and each side can target a different region. The comparison workflow is meant to help with questions like:
- what differs between two accounts?
- what changed between regions?
- where are posture or ownership differences visible?
- are there obvious cost, inventory, or configuration mismatches?
The compare mode can focus on areas like:
- security
- compute
- networking
- storage
- drift/compliance
- cost
That makes it useful for migration work, environment comparison, and general operational inspection.
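At its core, a side-by-side comparison is a diff over two resource inventories keyed by identifier. A minimal sketch of that idea (hypothetical helper and data shapes, not the app's comparison engine):

```python
def compare_inventories(left, right):
    """Diff two {resource_id: attributes} inventories from two contexts
    (e.g. two accounts, or the same account in two regions)."""
    only_left = sorted(set(left) - set(right))
    only_right = sorted(set(right) - set(left))
    changed = sorted(k for k in set(left) & set(right) if left[k] != right[k])
    return {"only_left": only_left, "only_right": only_right, "changed": changed}

prod = {"sg-1": {"open_ports": [443]}, "sg-2": {"open_ports": [22]}}
staging = {"sg-1": {"open_ports": [443, 80]}, "sg-3": {"open_ports": [443]}}
diff = compare_inventories(prod, staging)
# sg-2 exists only in prod, sg-3 only in staging, and sg-1 differs
```

The useful part is not the set arithmetic; it is that both sides were fetched under their own profile, session, and region context, so the diff is actually comparing like with like.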
Overview dashboard
The Overview screen acts as a landing page for AWS navigation. It gives a region-aware summary across many AWS services and helps operators move quickly into deeper service workspaces.
It includes regional summary tiles, multi-region views, relationship mapping, tag-oriented search, and insight panels grouped by broad infrastructure domains such as compute, storage, networking, and security.
Rather than being just a static dashboard, it acts as a routing surface into the rest of the product.
Compliance Center
AWS Lens also includes a Compliance Center that aggregates findings for the active profile and region and groups them by severity and category.
In practice, I think of this less as a reporting screen and more as an operator queue. The goal is to surface current issues, organize them into useful buckets, and point toward remediation paths.
Direct resource access
One feature I particularly wanted was direct resource access for constrained IAM environments.
Sometimes you cannot list an entire service, but you can inspect a known resource by identifier. AWS Lens includes a direct-access workflow for opening resources like S3 buckets, Lambda functions, RDS instances, ECR repositories, ECS services, EKS clusters, CloudFormation stacks, Route 53 zones, Secrets Manager secrets, KMS keys, WAF web ACLs, ACM certificates, and others directly by known ID.
That makes the tool more useful in break-glass support scenarios and tightly scoped access models.
Service-specific AWS workspaces
AWS Lens currently includes dedicated service workspaces across a broad set of AWS services rather than trying to expose everything as a single generic data browser.
Service coverage includes categories such as:
- Compute: EC2, Lambda, ECS, EKS, Auto Scaling
- Storage: S3, ECR
- Database: RDS
- Networking: VPC, Load Balancers, Route 53, Security Groups
- Management: CloudFormation, CloudTrail, CloudWatch
- Security: IAM, Identity Center, KMS, WAF, ACM
- Messaging: SNS, SQS
- Other: Secrets Manager, Key Pairs, STS
The point of these workspaces is not to recreate the entire AWS Console one-to-one. It is to provide focused operator surfaces for common inspection and workflow needs.
A few examples:
- EC2 includes instance inventory, snapshots, IAM instance profiles, bastion-oriented actions, and CloudWatch-linked follow-up.
- VPC and Security Groups focus on topology, gateways, interfaces, and rule management.
- EKS and ECS include cluster and service inspection plus helper actions that can flow into the terminal.
- S3, ECR, Lambda, and RDS expose service-native detail views for common operational use.
- IAM, Identity Center, KMS, WAF, ACM, and Secrets Manager bring identity and security workflows into the same shell.
This is important because operators often do not need a full browser-based console experience for every task. They need the right subset of visibility and actions, plus a quick path to the terminal when needed.
Embedded terminal with shared AWS context
The terminal is a core part of the app, not an afterthought.
AWS Lens includes an embedded terminal that stays synchronized with the currently selected AWS context. When you switch profiles, regions, or assumed-role sessions, the terminal environment updates accordingly.
That means the GUI and the CLI stop feeling like separate worlds. A service console can surface a follow-up action, and the operator can continue directly in a terminal that already has the correct profile, region, or temporary session wired in.
I think this matters because a lot of real infrastructure work is hybrid. You inspect visually, then verify or act in the shell. The transition should be seamless.
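Keeping the terminal in sync comes down to building its environment from the currently selected context using the standard AWS SDK/CLI environment variables. A sketch of that assembly (the helper is hypothetical; the `AWS_*` variable names are the real ones the AWS CLI and SDKs read):

```python
import os

def terminal_env(profile, region, session=None):
    """Build the environment for an embedded terminal shell
    from the app's currently selected AWS context."""
    env = dict(os.environ)
    env["AWS_PROFILE"] = profile
    env["AWS_DEFAULT_REGION"] = region
    if session:
        # An assumed-role session overrides the profile: export the
        # temporary credentials directly instead of a profile name.
        env.pop("AWS_PROFILE", None)
        env["AWS_ACCESS_KEY_ID"] = session["AccessKeyId"]
        env["AWS_SECRET_ACCESS_KEY"] = session["SecretAccessKey"]
        env["AWS_SESSION_TOKEN"] = session["SessionToken"]
    return env

env = terminal_env("prod-admin", "eu-central-1")
assumed = {"AccessKeyId": "A", "SecretAccessKey": "S", "SessionToken": "T"}  # dummy
env2 = terminal_env("prod-admin", "eu-central-1", session=assumed)
```

Because the environment is rebuilt on every context switch, an `aws` or `terraform` command typed in the embedded terminal always targets whatever the GUI currently shows.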
Observability and Resilience Lab
There is also an early beta feature called the Observability and Resilience Lab.
This feature is designed as an operator-assistant surface for:
- EKS clusters
- ECS services
- Terraform workspaces

Its purpose is not automatic remediation. Instead, it analyzes posture and generates structured, reviewable output such as:
- findings and recommendations
- resilience guidance
- copyable shell commands
- YAML and Terraform snippets
- JSON templates
- starter FIS experiment definitions
The model here is intentionally conservative. It reads state, infers likely issues where appropriate, and produces artifacts for operator review. It does not silently mutate infrastructure.
Local-first by design
One of the most important architectural choices in AWS Lens is that it relies on local workstation state rather than a hosted backend.
It reads from your local AWS configuration and stores app data under the local Electron user data directory. Terraform-related artifacts also stay tied to project directories and local workspace state.
That gives the app a very different character from a cloud management platform. It behaves more like an operator cockpit than a centralized service.
For me, that was a deliberate choice because the target user is not looking for another control plane. They are trying to work faster and more safely from the machine they already use for infrastructure operations.
What I’m trying to achieve
The app is not meant to be “the AWS Console but prettier.”
It is meant to be a practical desktop workspace for people who operate AWS and Terraform regularly and want:
- better Terraform lifecycle visibility
- less context switching
- smoother cross-account workflows
- more explicit drift and governance handling
- faster movement between GUI inspection and terminal execution

The biggest differentiator is still the Terraform experience. That is where I think AWS Lens has the clearest identity. The AWS service workspaces, session handling, comparison tooling, and embedded terminal all reinforce that core operator workflow.
Closing
AWS Lens is still evolving, but the direction is clear: treat infrastructure work as an integrated operator experience instead of a pile of disconnected tools.
If you work in AWS heavily, especially with Terraform, that is the problem space I am trying to improve.