<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Pawan Sawalani</title>
    <description>The latest articles on DEV Community by Pawan Sawalani (@pawansawalani).</description>
    <link>https://dev.to/pawansawalani</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2996914%2F76bef87f-30dc-4f69-8743-92961439a4c5.jpeg</url>
      <title>DEV Community: Pawan Sawalani</title>
      <link>https://dev.to/pawansawalani</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pawansawalani"/>
    <language>en</language>
    <item>
      <title>Goodbye Bastion, Hello Zero-Trust: Our Journey to Simplified RDS Access</title>
      <dc:creator>Pawan Sawalani</dc:creator>
      <pubDate>Wed, 09 Apr 2025 11:04:26 +0000</pubDate>
      <link>https://dev.to/pawansawalani/goodbye-bastion-hello-zero-trust-our-journey-to-simplified-rds-access-221j</link>
      <guid>https://dev.to/pawansawalani/goodbye-bastion-hello-zero-trust-our-journey-to-simplified-rds-access-221j</guid>
      <description>&lt;p&gt;Connecting to a private AWS database shouldnt feel like hacking through a jungle of jump boxes and VPNs. In our teams early days, though, that was our reality. This post is a candid look at how we improved the developer experience and security of accessing Amazon RDS databases moving from an old-school Windows bastion (jump box) to AWSs shiny new Verified Access, and finally landing on a surprisingly simple solution with AWS Systems Manager Session Manager. Well cover what worked, what didnt, and how you can set up a smooth, secure database access workflow no matter your experience level.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Background: The Old Bastion Setup (RDP into RDS)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Not long ago, our developers accessed private RDS databases by &lt;strong&gt;RDP-ing into a Windows bastion host&lt;/strong&gt; in AWS. This bastion was an EC2 instance in a public subnet acting as a jump box. Team members would Remote Desktop into it, then use database GUI tools (like SQL Server Management Studio or pgAdmin) &lt;strong&gt;installed on that bastion&lt;/strong&gt; to connect to the actual RDS instances in private subnets. It was the traditional solution to avoid exposing databases directly, but it came with plenty of headaches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Clunky User Experience:&lt;/strong&gt; Engineers couldn't use their own machines or preferred tools directly. They had to operate via a remote Windows desktop, often suffering lag and limited clipboard sharing. It felt like working through a periscope rather than directly on your workstation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security Risks:&lt;/strong&gt; The bastion needed an open RDP port (3389), albeit restricted by IP. This inherently increased risk: if the security group was misconfigured or an exploit was found in RDP, our private DB network could be exposed. With more remote work, the chances of someone poking a hole in the firewall for convenience grew.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Maintenance Burden:&lt;/strong&gt; A Windows server requires constant care: OS patching, user account management, and even handling RDP license limits if multiple people use it. We had to keep the DB client software up-to-date on the bastion too. All this ops overhead for a box that didn't do any real work, except letting us in.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwpsa39wq9rkbncvrffe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwpsa39wq9rkbncvrffe.png" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure: Traditional approach using a bastion host (an EC2 jump box in a public subnet) to reach a private Amazon RDS database. Developers' traffic goes from the corporate network (or internet) to the bastion, then onward to the database. This requires opening RDP/SSH access to the bastion, which introduces management overhead and potential security exposure.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It was clear this setup didn't scale well for our growing team. We wanted a way to &lt;strong&gt;connect to RDS directly from our laptops&lt;/strong&gt;, without that clumsy remote hop, but still keeping the databases locked down from the internet. VPN was one option, but managing a full-blown VPN client and infrastructure felt heavy. In late 2024, AWS announced something that caught our attention as a possible answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Trying AWS Verified Access for Direct Database Connectivity&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When AWS released &lt;strong&gt;Verified Access (AVA)&lt;/strong&gt;, it sounded like a game-changer. AWS Verified Access is a service built on zero-trust principles that lets users connect securely to internal applications without a VPN. Initially it was only for web (HTTP) apps, but as of &lt;a href="https://aws.amazon.com/blogs/networking-and-content-delivery/aws-verified-access-support-for-non-http-resources-is-now-generally-available/" rel="noopener noreferrer"&gt;re:Invent 2024&lt;/a&gt;, it expanded to support non-HTTP endpoints including RDS databases. The promise was &lt;strong&gt;VPN-less, policy-controlled access&lt;/strong&gt; to private resources, with fine-grained checks on each connection (user identity, device security posture, etc.). For our use case, the appeal was huge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Engineers could run their favorite database GUI &lt;em&gt;directly on their laptop&lt;/em&gt; and connect to the RDS endpoint as if they were in the office network. No more RDP hop meant a &lt;strong&gt;better user experience&lt;/strong&gt; and productivity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security would actually &lt;em&gt;improve&lt;/em&gt;: Verified Access would evaluate every login attempt against security policies (who you are, whether your device is trusted, etc.), and only then broker a connection. It's based on "never trust, always verify" principles, meaning even if someone somehow got credentials, if they weren't on an approved device or didn't meet policy, access would be denied.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We could eliminate the exposed bastion entirely. Verified Access acts as a managed gatekeeper in AWS's cloud, so &lt;strong&gt;no need for an open port&lt;/strong&gt; in our VPC for RDP or SSH.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Setting up AWS Verified Access for our databases involved a few pieces. First, we needed to integrate it with our SSO identity provider (AWS IAM Identity Center in our case) as a trust provider. This let Verified Access confirm our engineers' identities via SSO login. Next, we created a Verified Access instance and defined an &lt;strong&gt;endpoint for our RDS&lt;/strong&gt;. AWS now allows an RDS instance (or cluster or proxy) to be a target for Verified Access. We then set up an access policy; in our test, we kept it simple: allow members of our engineering SSO group who passed MFA. Verified Access can get very granular (checking device OS, patch level, etc.), but we started basic just to get it working.&lt;/p&gt;

&lt;p&gt;One critical component was deploying the &lt;strong&gt;AWS Verified Access client&lt;/strong&gt; (also called the &lt;em&gt;Connectivity Client&lt;/em&gt;) on our laptops. This is a small app that runs on the user's machine to facilitate the connection. It &lt;strong&gt;encrypts and funnels traffic from the laptop to AWS Verified Access&lt;/strong&gt;, including attaching the user's identity and device info, so that AWS can decide if the traffic is allowed. In essence, it's like a smart VPN client, but application-specific and ephemeral. We installed the client, and it prompted us to log in via our SSO in a browser. Once authenticated, the client established a secure tunnel to AWS.&lt;/p&gt;

&lt;p&gt;From a user standpoint, after launching the Verified Access client and logging in, they could open their database tool (say, &lt;strong&gt;DBeaver or DataGrip&lt;/strong&gt;), and connect to the database's endpoint (we used the regular RDS hostname) on the default port. The Verified Access client transparently routed that connection through AWS to our VPC. It really felt like magic the first time my pgAdmin on my MacBook connected to a Postgres in a private subnet without any SSH tunnels or VPN, and with AWS handling the security behind the scenes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffehjb03andai00cf6fl6.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffehjb03andai00cf6fl6.jpeg" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure: AWS Verified Access.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Initial benefits we observed:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Night-and-day UX improvement:&lt;/strong&gt; Everyone could use their own IDE/GUI, with native performance. Running queries or browsing tables was as snappy as if on a local network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No more shared jump box:&lt;/strong&gt; Each engineer authenticated individually via SSO. There was no single chokepoint server to maintain or that could be compromised to gain broader access; Verified Access only let &lt;strong&gt;that one user's session&lt;/strong&gt; through, and only to the specific database endpoint we configured.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Auditing and control:&lt;/strong&gt; Verified Access logs every access request. We could enforce multi-factor auth and even device compliance (e.g., only allow up-to-date company laptops). It's true zero-trust: every new connection is verified against policies rather than implicitly trusted once on a VPN.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Downsides of Verified Access in Practice&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This pilot with AWS Verified Access was promising, but as we dug deeper and scaled it out, we hit some challenges that made us reconsider relying on it long-term:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Client Software Limitations:&lt;/strong&gt; Since it was a new service, the &lt;strong&gt;Verified Access connectivity client&lt;/strong&gt; had a few rough edges. It was only available for Windows and Mac at first; our one engineer on Linux was out of luck. (AWS hinted Linux support was coming, but it wasn't there yet.) Additionally, the client lacked a friendly GUI; we had to configure it by dropping a JSON config file onto the machine (no simple one-click setup). This was manageable for our tech-savvy team, but not exactly polished.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Complexity of Policies:&lt;/strong&gt; Access policies in AWS Verified Access are written in Cedar (AWS's policy language). It's powerful but introduced a learning curve. Simple policies were fine, but anything custom required understanding a new syntax and debugging in a new console. For a small team, this felt like overkill just to allow database access for devs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Concerns:&lt;/strong&gt; Perhaps the biggest factor was &lt;strong&gt;cost&lt;/strong&gt;. AWS Verified Access is a managed service you pay for per application endpoint and per hour. In our case, each private RDS we wanted to enable access to counted as an application endpoint. The pricing in our region came out to about &lt;strong&gt;$0.27 per hour per app&lt;/strong&gt; plus a small per-GB data charge. That means roughly $200 per month &lt;em&gt;for each database&lt;/em&gt;. In a dev/test/prod scenario with multiple databases, we were looking at several hundred dollars monthly just for this convenience. Compared to a simple EC2 bastion (which might be ~$50 or less per month), it was an order of magnitude more expensive. As a startup, that was hard to justify beyond initial testing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Operational Maturity:&lt;/strong&gt; Being a very new service, we encountered a few hiccups: occasional client disconnects, and once an identity sync issue that blocked a login until we reset the client. AWS support was helpful, but it reminded us that we were early adopters on the bleeding edge. We had to ask: did we want to be pioneers here, or use something more battle-tested?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
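&lt;p&gt;A quick back-of-envelope check of that estimate (using the hourly rate we saw; pricing varies by region and changes over time):&lt;/p&gt;

```shell
# Back-of-envelope: Verified Access per-endpoint cost at the $0.27/hr rate
# we observed, ignoring the per-GB data charge. 730 is roughly the number
# of hours in a month.
awk 'BEGIN { printf "per endpoint: $%.0f/month\n", 0.27 * 730 }'
```

&lt;p&gt;That lands right around the ~$200/month-per-database figure above, before any data transfer charges.&lt;/p&gt;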

&lt;p&gt;Weighing these downsides, we decided to explore alternatives. We loved the idea of ditching the bastion and having direct access, but maybe there was a simpler way to get there without the cost and complexity of Verified Access. It turned out the solution was something we already had at our fingertips in AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Switching Gears to AWS SSM Session Manager&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After our trial with Verified Access, we took a step back and reexamined the problem. We wanted &lt;strong&gt;secure, easy access to private RDS from our laptops&lt;/strong&gt;, and we wanted to minimize infrastructure and maintenance. AWS actually provides a feature for secure remote access that we had used before for shell access: &lt;strong&gt;AWS Systems Manager Session Manager&lt;/strong&gt; (SSM Session Manager). Could we use it for database access? The answer was yes, and it was surprisingly straightforward.&lt;/p&gt;

&lt;p&gt;AWS Session Manager lets you open a shell or tunnel to an EC2 instance &lt;strong&gt;without any SSH keys or open ports&lt;/strong&gt;, by using an SSM Agent installed on the instance. What many don't realize is that Session Manager can also handle &lt;strong&gt;port forwarding&lt;/strong&gt;. &lt;a href="https://aws.amazon.com/blogs/database/securely-connect-to-an-amazon-rds-or-amazon-ec2-database-instance-remotely-with-your-preferred-gui/" rel="noopener noreferrer"&gt;In late 2022&lt;/a&gt;, AWS added the ability to forward traffic not just to the instance itself, but &lt;em&gt;through&lt;/em&gt; the instance to another host: essentially an SSH tunnel-like capability, but over the SSM channel. This is perfect for our use case: we can use a lightweight EC2 instance as a private relay to the database, and Session Manager will securely connect our laptop to that instance and pipe the traffic to the RDS.&lt;/p&gt;

&lt;p&gt;Here's how we built our Session Manager solution, step by step, and how it addressed our needs:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Setting Up a Small EC2 Tunnel Instance&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;First, we launched a tiny EC2 instance in the same VPC and private subnet as our RDS. (We jokingly call this our "bastion," but it's not accessible like a traditional one: no inbound access at all.) Important details for this instance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Instance Type &amp;amp; OS:&lt;/strong&gt; We chose an Amazon Linux 2 t4g.nano (very cheap, ~$4/month). Amazon Linux comes with the SSM Agent pre-installed, which saved setup time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SSM IAM Role:&lt;/strong&gt; We attached the &lt;strong&gt;AmazonSSMManagedInstanceCore&lt;/strong&gt; IAM policy via an instance role. This grants the instance permission to communicate with the SSM service. With this, the SSM Agent on the instance can register itself and receive Session Manager connection requests. (No SSH keys needed at all; authentication will be handled by IAM and SSM.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security Groups:&lt;/strong&gt; The instance's security group was locked down. We did not allow any inbound ports from anywhere (not even SSH from our IP). We only allowed outbound traffic. Specifically, outbound rules allowed HTTPS (port 443) so the agent could reach SSM's endpoints, and allowed outbound to the RDS's port. The RDS's security group in turn allowed inbound from this instance's security group on the database port. This way, the EC2 can talk to the database internally, but nothing external can talk to the EC2.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Networking:&lt;/strong&gt; We gave the instance a path to the Systems Manager APIs &lt;strong&gt;only via SSM VPC endpoints&lt;/strong&gt; (plus a VPC endpoint for EC2 messages), instead of a NAT gateway. This is an optional step, but it means the SSM Agent traffic goes through private VPC endpoints to AWS, which is more secure and avoids NAT data charges. (If you skip VPC endpoints, the agent will use the NAT to reach the Systems Manager API, which is fine but costs a bit more and traverses the internet.)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
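&lt;p&gt;For reference, the VPC-endpoint route can be sketched like this. All the IDs are placeholders, and the script only &lt;em&gt;prints&lt;/em&gt; the aws CLI commands so you can review them before running anything:&lt;/p&gt;

```shell
# Sketch: the three interface VPC endpoints the SSM Agent needs so the
# tunnel instance can reach Systems Manager without a NAT gateway.
# All IDs below are placeholders; the commands are echoed, not executed.
print_ssm_endpoint_cmds() {
  local region="us-east-1"
  local vpc_id="vpc-0123456789abcdef0"
  local subnet_id="subnet-0123456789abcdef0"
  local endpoint_sg="sg-0123456789abcdef0"  # must allow inbound 443 from the instance

  local svc
  for svc in ssm ssmmessages ec2messages; do
    echo aws ec2 create-vpc-endpoint \
      --region "$region" \
      --vpc-id "$vpc_id" \
      --vpc-endpoint-type Interface \
      --service-name "com.amazonaws.${region}.${svc}" \
      --subnet-ids "$subnet_id" \
      --security-group-ids "$endpoint_sg" \
      --private-dns-enabled
  done
}

print_ssm_endpoint_cmds
```

&lt;p&gt;Drop the echo (or pipe the output to a shell) once the IDs match your VPC.&lt;/p&gt;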

&lt;p&gt;At this point, we had an &lt;strong&gt;SSM-managed instance&lt;/strong&gt; in the private subnet. Think of it as a one-to-one replacement of the old bastion, except it's not exposed to the world at all. Now we needed to actually &lt;em&gt;use&lt;/em&gt; it to reach the database from our laptops.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Starting a Session Manager Port Forward&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;AWS provides a CLI command to open a Session Manager session. Instead of a normal shell session, we will start a &lt;strong&gt;port forwarding session&lt;/strong&gt;. Here's an example command we use (in a Bash script on our laptops) to connect to one of our PostgreSQL databases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Variables for clarity
INSTANCE_ID="i-0123456789abcdef0"   # The EC2 instance acting as our SSM tunnel
RDS_ENDPOINT="mydatabase.cluster-abcdefghijkl.us-east-1.rds.amazonaws.com"
DB_PORT=5432

aws ssm start-session \
  --target "$INSTANCE_ID" \
  --document-name "AWS-StartPortForwardingSessionToRemoteHost" \
  --parameters "host=$RDS_ENDPOINT,portNumber=$DB_PORT,localPortNumber=$DB_PORT"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down what this does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;aws ssm start-session: This initiates an SSM Session Manager session from our machine. (Make sure you've configured your AWS CLI with credentials/SSO that have permission to use Session Manager on that instance.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;--target: The ID of the EC2 instance we launched. This tells AWS which instance's SSM Agent should handle the session.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;--document-name "AWS-StartPortForwardingSessionToRemoteHost": This is an AWS-provided session document that knows how to set up port forwarding to a specified remote host. It's essentially a pre-built SSM action for tunneling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;--parameters "host=...,portNumber=...,localPortNumber=...": Here we provide the RDS host and port we want to reach, and which local port to use on our laptop. In our example, we set host to the RDS endpoint DNS name, portNumber to 5432 (the DB's port), and localPortNumber also to 5432. This means &lt;strong&gt;the SSM Agent on the EC2 will open a connection to mydatabase...:5432 (our RDS)&lt;/strong&gt;, and forward that back through the session to &lt;strong&gt;localhost:5432 on our laptop&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When we run this command, a few things happen behind the scenes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The AWS CLI calls the SSM service, which in turn signals the SSM Agent on our instance to start a port forwarding session. Because our instance can reach the RDS internally, it successfully connects to the database's host and port.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The CLI also starts a local proxy listening on the specified localPortNumber (5432). You'll see output like "Starting session with SessionId: ..." followed by "Port 5432 opened for session ... Waiting for connections." This means everything is set: the tunnel is up and idle, waiting for you to connect.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We keep that terminal running (the session stays active). Now on our &lt;strong&gt;local machine&lt;/strong&gt;, we can connect to localhost:5432 and it will actually reach the RDS through the tunnel.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this point, the experience is exactly like using Verified Access (or a VPN). I can fire up my database client on my laptop, but now I point it to 127.0.0.1:5432 (or a localhost alias), with the usual database credentials. &lt;strong&gt;Boom, I'm connected to the private RDS&lt;/strong&gt;. The Session Manager tunnel carries all the traffic. From the database's perspective, it sees a connection coming from the EC2 instance's IP (since that instance is acting as the client on its behalf). From my perspective, it feels local.&lt;/p&gt;
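&lt;p&gt;For scripting, a tiny helper that builds the localhost connection URI keeps client invocations consistent. The user and database names below are illustrative; credentials come from the usual places (e.g., ~/.pgpass), not from the helper:&lt;/p&gt;

```shell
# Build the localhost URI a client should use once the SSM tunnel is up.
# Traffic to 127.0.0.1:<port> is forwarded through the tunnel to the RDS.
tunnel_uri() {
  local user="$1" db="$2" port="${3:-5432}"
  printf 'postgresql://%s@127.0.0.1:%s/%s\n' "$user" "$port" "$db"
}

tunnel_uri app_user reporting
```

&lt;p&gt;With that, psql "$(tunnel_uri app_user reporting)" connects through the tunnel exactly as the GUI clients do.&lt;/p&gt;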

&lt;p&gt;One great aspect of Session Manager is that all of this is done using my AWS IAM credentials. If I'm authenticated with AWS (for example via AWS SSO login or access keys), I don't need to juggle any SSH keys or bastion passwords. Permissions to use Session Manager can be controlled via IAM policies (for instance, only allow certain IAM roles to start sessions to that instance). And every session is logged in AWS CloudTrail (and Session Manager can even be set to log full console output to S3/CloudWatch if needed). So we &lt;strong&gt;gained auditability&lt;/strong&gt; without much effort, an improvement over the old bastion where RDP logins were somewhat opaque.&lt;/p&gt;
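&lt;p&gt;As an illustration (not our exact policy), an IAM policy that lets an engineer start port-forwarding sessions only to the tunnel instance might look like the following; the account and instance IDs are placeholders:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "StartTunnelSessionsOnly",
      "Effect": "Allow",
      "Action": "ssm:StartSession",
      "Resource": [
        "arn:aws:ec2:us-east-1:111122223333:instance/i-0123456789abcdef0",
        "arn:aws:ssm:us-east-1::document/AWS-StartPortForwardingSessionToRemoteHost"
      ]
    },
    {
      "Sid": "ManageOwnSessions",
      "Effect": "Allow",
      "Action": ["ssm:TerminateSession", "ssm:ResumeSession"],
      "Resource": "arn:aws:ssm:*:*:session/${aws:username}-*"
    }
  ]
}
```

&lt;p&gt;Scoping the Resource list to both the instance and the session document means users can open tunnels but not arbitrary sessions elsewhere.&lt;/p&gt;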

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fur9e04tqd6c8j3qz4h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fur9e04tqd6c8j3qz4h.png" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure: Using AWS Systems Manager Session Manager to create a secure tunnel from a client to an RDS database via a private EC2 instance. The EC2 bastion lives in a private subnet with&lt;/em&gt; &lt;strong&gt;&lt;em&gt;no inbound ports&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;open. The Session Manager agent on it connects out to AWS, allowing authorized users to start an encrypted session. This lets us forward a local port on our laptop to the remote database securely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost impact:&lt;/strong&gt; Remember the cost comparison that motivated us? Here's how it played out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Session Manager approach requires a small EC2 instance running 24/7. Our t4g.nano plus storage costs about &lt;strong&gt;$5 per month&lt;/strong&gt;. We could even stop it out of hours, but at that price it's not worth the hassle.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Session Manager itself doesn't cost extra; it's a feature of AWS Systems Manager. There is no hourly charge for sessions, and data transfer is minimal (just the database traffic which we'd have anyway; it might incur tiny charges if it goes through a NAT or VPC endpoint, but those are pennies).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Versus Verified Access, which would have been around &lt;strong&gt;$0.27/hour&lt;/strong&gt; each for our databases ($200/month per DB), the savings are enormous. Even factoring in the old Windows bastion cost (say ~$50/month), Session Manager is an order of magnitude cheaper. Essentially, we got nearly the same functionality for &lt;strong&gt;almost no cost&lt;/strong&gt; in our AWS bill.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Smoothing the Workflow (Making it Easy for Engineers)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Running a long CLI command to start the tunnel was fine for us, but we wanted to make this as seamless as possible, especially for new engineers who might not be AWS CLI wizards. We took a couple of steps to streamline usage on our laptops:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bash Script &amp;amp; Alias:&lt;/strong&gt; We wrapped the aws ssm start-session command in a simple shell script (connect-db.sh) and put it in our team's internal toolkit repository. It accepts the environment or database name as an argument, so it knows which instance and host to target. For example: connect-db.sh prod reporting-db would fetch the appropriate instance ID and DB host from a config and run the above command. Developers can alias this in their shell, so bringing up the tunnel is one short command away. Each script execution opens a new terminal window with the session (so we remember to close it when done).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Auto-Connect on macOS (Launch Agent):&lt;/strong&gt; For those frequently connecting to a dev database, we created a Launch Agent on macOS to &lt;strong&gt;automatically start the tunnel at login&lt;/strong&gt;. This uses a .plist file in ~/Library/LaunchAgents. Here's a snippet of what that looks like:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- ~/Library/LaunchAgents/com.mycompany.ssm-tunnel.plist --&amp;gt;
&amp;lt;plist version="1.0"&amp;gt;
  &amp;lt;dict&amp;gt;
    &amp;lt;key&amp;gt;Label&amp;lt;/key&amp;gt;
    &amp;lt;string&amp;gt;com.mycompany.ssm-tunnel&amp;lt;/string&amp;gt;
    &amp;lt;key&amp;gt;ProgramArguments&amp;lt;/key&amp;gt;
    &amp;lt;array&amp;gt;
      &amp;lt;string&amp;gt;/usr/local/bin/aws&amp;lt;/string&amp;gt;
      &amp;lt;string&amp;gt;ssm&amp;lt;/string&amp;gt;
      &amp;lt;string&amp;gt;start-session&amp;lt;/string&amp;gt;
      &amp;lt;string&amp;gt;--target&amp;lt;/string&amp;gt;
      &amp;lt;string&amp;gt;i-0123456789abcdef0&amp;lt;/string&amp;gt;
      &amp;lt;string&amp;gt;--document-name&amp;lt;/string&amp;gt;
      &amp;lt;string&amp;gt;AWS-StartPortForwardingSessionToRemoteHost&amp;lt;/string&amp;gt;
      &amp;lt;string&amp;gt;--parameters&amp;lt;/string&amp;gt;
      &amp;lt;string&amp;gt;host=mydatabase.cluster-abcdefghijkl.us-east-1.rds.amazonaws.com,portNumber=5432,localPortNumber=5432&amp;lt;/string&amp;gt;
    &amp;lt;/array&amp;gt;
    &amp;lt;key&amp;gt;RunAtLoad&amp;lt;/key&amp;gt;&amp;lt;true/&amp;gt;
    &amp;lt;key&amp;gt;KeepAlive&amp;lt;/key&amp;gt;&amp;lt;true/&amp;gt;
    &amp;lt;key&amp;gt;StandardOutPath&amp;lt;/key&amp;gt;&amp;lt;string&amp;gt;/tmp/ssm-tunnel.log&amp;lt;/string&amp;gt;
    &amp;lt;key&amp;gt;StandardErrorPath&amp;lt;/key&amp;gt;&amp;lt;string&amp;gt;/tmp/ssm-tunnel.err&amp;lt;/string&amp;gt;
  &amp;lt;/dict&amp;gt;
&amp;lt;/plist&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In plain English, this Launch Agent definition says: when I log in, run the AWS CLI Session Manager command with the given parameters. RunAtLoad means start it automatically, and KeepAlive means if it crashes or the session drops, launchd will restart it. We log output to /tmp for debugging. After loading this (launchctl load -w ~/Library/LaunchAgents/com.mycompany.ssm-tunnel.plist), the developer gets a persistent tunnel in the background. They can now connect to the DB anytime without even thinking about the tunnel; it's just there. (We set KeepAlive so that if the session times out after inactivity, it will try to reconnect. One caveat: Session Manager sessions do have a maximum duration of a few hours, so the agent will reconnect a few times a day in the background.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Using SSH Config (alternate method, which we used eventually):&lt;/strong&gt; Another neat trick is to use the SSH client as a wrapper for Session Manager. This might sound odd since we said "no SSH," but it's just leveraging the SSH command as a convenient way to manage tunnels. By adding an entry in ~/.ssh/config that calls the Session Manager proxy, one can bring up a tunnel with a simple ssh invocation. For example:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host rds-tunnel
  HostName i-0123456789abcdef0
  User ec2-user
  ProxyCommand aws ssm start-session --target %h --document-name AWS-StartPortForwardingSessionToRemoteHost --parameters "host=mydatabase.cluster-abcdefghijkl.us-east-1.rds.amazonaws.com,portNumber=5432,localPortNumber=5432"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;With such an entry, running ssh -N rds-tunnel triggers the AWS CLI to start an SSH session over the SSM channel (the %h is replaced with the instance ID as HostName), and the LocalForward line opens localhost:5432 to the database through it. The -N flag tells SSH not to execute a remote command (we just want the tunnel). Note that, unlike the pure Session Manager approach, this variant does need an SSH key pair authorized on the instance. It is a bit of a hack and still requires the AWS CLI, but some GUI tools can invoke SSH tunnels this way as well.&lt;/li&gt;
&lt;/ul&gt;
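&lt;p&gt;Tying it together, the connect-db.sh wrapper mentioned above can be little more than a lookup plus the start-session call. This sketch uses a hard-coded table and placeholder IDs where ours reads a config file, and it prints the command so you can review it before running:&lt;/p&gt;

```shell
# Sketch of a connect-db.sh-style wrapper: map an env/db pair to a tunnel
# instance and RDS endpoint, then emit the session command. IDs and
# hostnames are placeholders; ours come from a checked-in config file.
connect_db_cmd() {
  local env="$1" db="$2" instance_id host port
  case "$env/$db" in
    prod/reporting-db)
      instance_id="i-0123456789abcdef0"
      host="mydatabase.cluster-abcdefghijkl.us-east-1.rds.amazonaws.com"
      port=5432
      ;;
    *)
      echo "unknown env/db: $env/$db" >&2
      return 1
      ;;
  esac
  printf 'aws ssm start-session --target %s --document-name AWS-StartPortForwardingSessionToRemoteHost --parameters "host=%s,portNumber=%s,localPortNumber=%s"\n' \
    "$instance_id" "$host" "$port" "$port"
}

connect_db_cmd prod reporting-db
```

&lt;p&gt;Running eval "$(connect_db_cmd prod reporting-db)" opens the tunnel; aliasing that keeps it one short command, as described above.&lt;/p&gt;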

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Results: A Happy Team with Secure Access&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once we rolled out the Session Manager solution, feedback from the team was very positive. It achieved what we wanted:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Greatly improved UX:&lt;/strong&gt; Just like with Verified Access, engineers use their local tools and don't have to maintain a remote VM workspace. Whether it's a newbie using a point-and-click SQL client or a veteran automating a psql script, they run it from their machine as if the database were local. Onboarding a new engineer to access the DB is as simple as: install the AWS CLI (or our helper script), run this command, and you're in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tight Security (no more open holes):&lt;/strong&gt; We completely shut down the old bastion. No RDP, no SSH; nothing is exposed. The EC2 instance is invisible from the internet. Session Manager uses an encrypted TLS connection initiated from the inside, and requires the user to auth with AWS. This &lt;strong&gt;removed a major attack surface&lt;/strong&gt;. As AWS's own best practices note, Session Manager eliminates the need for bastion hosts or open inbound ports. We also benefit from audit logs; we can see which user opened a session at what time in AWS CloudTrail, and even log the I/O if we wanted to inspect what commands are run (for shell sessions).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Low Maintenance:&lt;/strong&gt; The EC2 tunnel instance is about as low-touch as it gets. Amazon Linux 2 applies security patches easily on reboot; we can also bake an AMI periodically with updates if we ever need to replace it. The SSM Agent updates itself automatically via AWS Systems Manager. There are no user accounts or keys on this instance to manage. In fact, the instance runs with no human login at all. If we want to administer it, we'd use Session Manager to get a shell. This dramatically reduces the admin overhead compared to the old Windows bastion that needed active user management and patching. And unlike Verified Access, there's no separate client software for us to deploy to everyone, just the ubiquitous AWS CLI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Savings:&lt;/strong&gt; We already calculated the stark difference: on the order of $10/month vs. $200-$600/month for our scenario. Over a year, that's thousands saved, which matters for our budget. We're effectively paying only for a tiny instance, since Session Manager itself comes at no extra charge as part of AWS Systems Manager. For larger orgs the cost argument might be different, but for us this was a huge win.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Room for Expansion:&lt;/strong&gt; With this setup, if we add more databases or even other internal services (e.g., an ElastiCache Redis cluster or an internal HTTP service), we have options. We can either use the same EC2 instance as a multi-purpose tunnel (starting separate sessions for different targets as needed) or create more instances if we want isolation per environment. Since it's so cheap, spinning up one per environment or per service is not an issue. Session Manager even allows tunneling RDP or SSH if we ever needed GUI or console access to an instance; it's versatile.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
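&lt;p&gt;To make the workflow concrete, here is a minimal sketch of how an engineer might open such a tunnel. The instance ID, endpoint, and ports below are hypothetical placeholders; the helper only assembles the &lt;code&gt;aws ssm start-session&lt;/code&gt; command so you can inspect it before running it.&lt;/p&gt;

```shell
# Assemble (but don't execute) the Session Manager port-forwarding command.
# All three arguments are placeholders; adapt them to your own environment.
build_tunnel_cmd() {
  instance_id="$1"   # the SSM-managed EC2 tunnel instance
  rds_host="$2"      # private RDS endpoint, reachable from that instance
  local_port="$3"    # port your SQL client will connect to on localhost
  # 5432 is the remote Postgres port; change it for other engines.
  printf '%s' "aws ssm start-session --target $instance_id --document-name AWS-StartPortForwardingSessionToRemoteHost --parameters host=$rds_host,portNumber=5432,localPortNumber=$local_port"
}

# Example: print the command for a hypothetical staging database.
build_tunnel_cmd "i-0123456789abcdef0" "staging-db.internal.example" "15432"
```

&lt;p&gt;Running the printed command opens a listener on &lt;code&gt;localhost:15432&lt;/code&gt; that forwards through the instance to the database, so &lt;code&gt;psql -h localhost -p 15432&lt;/code&gt; reaches the private RDS endpoint.&lt;/p&gt;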

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion: Lessons Learned and Tips&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Our journey from a clunky bastion to a modern access solution taught us a few valuable lessons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;New and shiny isn't always better for us.&lt;/strong&gt; AWS Verified Access is a powerful service and no doubt the future for many zero-trust network scenarios. If we had strict device compliance requirements or a larger enterprise setup, its policy-based access and deep integration with corporate identity might be worth the cost. But in our case, the simpler Session Manager approach covered 90% of our needs at a fraction of the complexity and cost. It was a reminder that tried-and-true tools can sometimes beat bleeding-edge solutions, depending on the context.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;User experience matters, but balance it with security and cost.&lt;/strong&gt; We were determined to improve UX for our engineers, and we did: moving away from the old jump box improved quality of life significantly. However, we also had to consider security (ensuring the new solution wasn't trading one risk for another) and cost. We found a sweet spot where UX, security, and cost were all satisfied. Whenever you introduce a new access method, evaluate it holistically: how will users feel about it, is it actually secure, and does it justify the expense?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Session Manager is underrated.&lt;/strong&gt; Many engineers know Session Manager as that thing you can use instead of SSH to get a console. But its port forwarding capability is a game-changer for scenarios like database access. It enabled us to implement a lightweight &lt;strong&gt;bastion-as-a-service&lt;/strong&gt; without maintaining complex infrastructure. If you're still using old bastion hosts or SSH tunnels, give Session Manager a serious look; it can simplify your life. As AWS's security blog notes, Session Manager can eliminate the need for bastions and open ports while still giving you the access you need.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automation makes perfect.&lt;/strong&gt; Once you set up a solution like this, invest a bit of time to script it and integrate it into your team's workflows. Our use of Launch Agents and simple CLI wrappers means nobody is fumbling with long commands or forgetting to start their tunnel. New hires get a smooth experience from day one ("Just run this script and you're connected"). Little quality-of-life improvements go a long way in the adoption of a new tool.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
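&lt;p&gt;A wrapper of the kind described above can be as small as a lookup table. The environment names, instance IDs, and ports below are hypothetical; the point is simply that engineers type an environment name instead of a long command.&lt;/p&gt;

```shell
# Hypothetical helper: map an environment name to the tunnel instance ID and
# the local port for that environment. A real wrapper would pass these values
# to `aws ssm start-session` to open the port-forwarding tunnel.
tunnel_params() {
  case "$1" in
    staging)    echo "i-0aaaa1111bbbb2222 15432" ;;  # placeholder instance IDs
    production) echo "i-0cccc3333dddd4444 25432" ;;
    *)          echo "usage: db-tunnel staging|production"; return 1 ;;
  esac
}

# Example: look up the parameters for staging.
tunnel_params staging
```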

&lt;p&gt;In the end, our team now connects to our databases securely, quickly, and with minimal fuss. We retired the fragile old Windows jump box and significantly cut down our attack surface. We also saved money, which is always a nice bonus. And when AWS improves Verified Access (maybe a fully managed client, Linux support, lower costs?), we'll be ready to re-evaluate it. But for now, &lt;strong&gt;Session Manager has become our go-to solution&lt;/strong&gt; for remote access to cloud resources.&lt;/p&gt;

&lt;p&gt;If you're in a similar boat, juggling bastions, VPNs, or pondering AWS Verified Access, I hope our story helps you find the approach that works best for you. Sometimes the solution is hiding in plain sight (in our case, in the AWS CLI we were already using). Happy connecting, and may all your database queries be speedy and secure!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awsrds</category>
      <category>awsssm</category>
      <category>database</category>
    </item>
    <item>
      <title>Why Security-Conscious Startups Need DevSecOps from Day One</title>
      <dc:creator>Pawan Sawalani</dc:creator>
      <pubDate>Wed, 26 Mar 2025 22:41:51 +0000</pubDate>
      <link>https://dev.to/pawansawalani/why-security-conscious-startups-need-devsecops-from-day-one-31b0</link>
      <guid>https://dev.to/pawansawalani/why-security-conscious-startups-need-devsecops-from-day-one-31b0</guid>
      <description>&lt;p&gt;I vividly remember a night early in my career when I jolted awake at 3 AM to a barrage of Slack notifications. Our companys app was under attack a database had been left exposed, and an attacker was starting to poke around our customer data. We were lucky to catch it in time, but that sleepless night was a wake-up call for me. It drove home a lesson I carry to this day: if youre a security-conscious startup (and you should be), you need to embed security practices from day one. In other words, you need DevSecOps at the core of your startups DNA from the very beginning.&lt;/p&gt;

&lt;p&gt;My name is Pawan. As a Lead DevSecOps Engineer at Prommt with 8+ years in AWS, platform engineering, and cloud security, I've seen firsthand how proactive security can make or break a young company. In this post, I want to share why integrating DevSecOps early isn't a nice-to-have but a must-have for startups, and how it's helped me build safer, faster-moving teams. I'll also sprinkle in some personal war stories (including how we're shaping the DevSecOps workflow at Prommt) to illustrate the impact. So grab a coffee, and let's dive in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Startup Security Blind Spot&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Startups thrive on moving fast and innovating. "Move fast and break things," right? The problem is, &lt;em&gt;what&lt;/em&gt; breaks isn't always just your code; it could be your security. In the rush to ship features and impress investors, it's easy for a small team to push off security tasks. I've heard founders say, "We'll worry about security after we get our first 100k users," only to realize (sometimes painfully) that a single security mishap can &lt;strong&gt;stop&lt;/strong&gt; them from ever getting to that milestone.&lt;/p&gt;

&lt;p&gt;The truth is, even early-stage startups are juicy targets for cyber attackers. You might think, "We're too small, who would target us?" But attackers often see startups as low-hanging fruit: less likely to have strong defenses, but still holding valuable data. A single breach can wreck your reputation and destroy user trust overnight. It might even scare off potential investors or partners. (As someone who's had to field panicked investor calls after a security incident, I can tell you that nothing derails a promising pitch faster than news of a breach.)&lt;/p&gt;

&lt;p&gt;So what's the solution? It's not to slow down or become risk-averse; it's to build security into your fast-paced workflow. That's exactly what DevSecOps is about. By weaving security into development and operations from day one, you mitigate that blind spot. Instead of a mad scramble to patch holes after an incident, you're preventing those holes from appearing in the first place. Think of it as vaccinating your product against common exploits and misconfigurations. You still move fast, but you're not flying blind.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevSecOps on a Startup Budget: Working Smarter with Automation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One pushback I often hear is, "We're just a tiny startup; we can't afford a full security team or expensive tools yet." The good news is, you don't need a dedicated army of security engineers to start practicing DevSecOps. In fact, adopting DevSecOps early can save your startup time and money in the long run by catching issues early (when they're cheapest to fix) and by reducing the likelihood of costly breaches.&lt;/p&gt;

&lt;p&gt;DevSecOps isn't about buying fancy appliances; it's a mindset and a set of practices. For a startup, that means &lt;strong&gt;working smarter with the resources you have&lt;/strong&gt;. Here are a few practical ways we've implemented DevSecOps on a lean budget:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automate Repetitive Checks:&lt;/strong&gt; Use open-source or built-in tools to automate security tests in your CI/CD (Continuous Integration/Continuous Deployment) pipeline. For example, you can run static code analysis to catch vulnerable code, dependency scans to find risky libraries, and secret scanners to ensure no API keys slip into your repos. These can run on every code commit or pull request. In one of my previous roles, after we set up automated dependency scanning, we caught and fixed dozens of vulnerabilities &lt;em&gt;before&lt;/em&gt; they ever made it to production, saving us from potential exploits down the road.&lt;/p&gt;
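&lt;p&gt;Even without a dedicated tool, a first pass at secret scanning can be a few lines of shell in a CI step. This is a minimal sketch, not a substitute for a real scanner: it only looks for the well-known AWS access key ID pattern in whatever directory you point it at.&lt;/p&gt;

```shell
# Fail (return 1) if anything matching an AWS access key ID pattern appears
# under the given directory; grep prints the offending lines for the CI log.
scan_for_aws_keys() {
  if grep -rEn 'AKIA[0-9A-Z]{16}' "$1"; then
    echo "possible AWS access key found: failing the build"
    return 1
  fi
  return 0
}
```

&lt;p&gt;Hooked into a pipeline step (e.g. &lt;code&gt;scan_for_aws_keys . || exit 1&lt;/code&gt;), this fails the build before a leaked credential ever reaches the main branch.&lt;/p&gt;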

&lt;p&gt;&lt;strong&gt;Leverage Cloud Security Features:&lt;/strong&gt; If you're on AWS (as many startups are), take advantage of the free or low-cost security features from day one. AWS has services like GuardDuty (for threat detection), Config (for policy compliance checks), and CloudTrail (for auditing), to name a few. Turning these on early creates a safety net. I like to set up simple alerts for things like an unexpectedly public S3 bucket or unusual login locations. It's amazing how a few well-placed guardrails can prevent the classic rookie mistakes (like accidentally leaving storage buckets open to the world).&lt;/p&gt;
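&lt;p&gt;To show how small such a guardrail can be, here is a hedged sketch of a public-bucket check. It assumes a configured AWS CLI; the parsing helper just looks for the AllUsers grantee URI in the ACL JSON that &lt;code&gt;aws s3api get-bucket-acl&lt;/code&gt; returns, and the bucket name in the usage comment is a placeholder.&lt;/p&gt;

```shell
# Read a bucket ACL JSON document on stdin and report whether it grants
# access to the AllUsers group (i.e. the whole internet).
acl_visibility() {
  if grep -q 'http://acs.amazonaws.com/groups/global/AllUsers'; then
    echo "public"
  else
    echo "private"
  fi
}

# Real usage (bucket name is a placeholder):
#   aws s3api get-bucket-acl --bucket my-startup-data | acl_visibility
```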

&lt;p&gt;&lt;strong&gt;Infrastructure as Code &amp;amp; Safe Defaults:&lt;/strong&gt; Define your infrastructure in code and put it in version control. By doing this, you can enforce best practices (like secure configurations) by default. For instance, our team uses Terraform to spin up cloud resources with security built in: only necessary ports open, proper encryption enabled, least-privilege access roles, etc. Developers don't have to think about those details because the infrastructure code and templates handle it. This way, security isn't a one-off effort; it's consistently applied every time we deploy.&lt;/p&gt;

&lt;p&gt;The key is &lt;strong&gt;automation and consistency&lt;/strong&gt;. As a startup, you want your small team focused on building the product, not doing tedious manual security reviews for each release. DevSecOps lets you automate those reviews. It's like having a tireless security assistant who checks everything in the background while your team concentrates on core work.&lt;/p&gt;

&lt;p&gt;By investing a bit of time to set up these automated checks and processes early, you avoid much bigger headaches (and costs) later. Trust me, spending an afternoon to script a security test is way better than spending a week doing damage control after a hack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Is a Team Sport: Building a Security-First Culture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tools and automation are fantastic, but DevSecOps is more than just tools; it's about culture. One of the biggest benefits of adopting DevSecOps early is the ability to instill a &lt;strong&gt;security-first culture&lt;/strong&gt; from the get-go. In a startup, culture is formed quickly by the founding team's values and habits. If security is part of that DNA, it becomes a shared responsibility rather than someone else's problem.&lt;/p&gt;

&lt;p&gt;In practical terms, a security-first culture means developers, ops, and even product folks all recognize that security is part of their job. It doesn't mean everyone becomes a security expert, but it does mean everyone cares about doing the right thing. Here are a few ways to nurture this culture:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lead by Example:&lt;/strong&gt; As a tech leader or founder, your attitude towards security sets the tone. If you treat security as an important, enabling part of building software (e.g. asking "How can we make this feature secure by design?") rather than as a hindrance, the team will follow. In my teams, I make a point to celebrate when someone proactively finds and fixes a security issue. It's as worthy of praise as shipping a new feature.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Empower and Educate:&lt;/strong&gt; Give your team the knowledge and tools to act on security. At Prommt, we started "DevSync", a recurring session where developers, infra, and ops get together to discuss and demo new security enhancements and how they can be applied in our web app to improve security even more. That knowledge paid off when we later caught a potential DDoS attack &lt;em&gt;before&lt;/em&gt; it could do damage, precisely because we had embedded a security enhancement that was discussed and demoed in one of our &lt;em&gt;DevSync&lt;/em&gt; sessions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrate, Don't Silo:&lt;/strong&gt; Avoid the old-school model of a separate security team that only swoops in at the end. In a startup, you probably don't even have a separate security team, and that's okay. Make security part of your development process. If you do have a security specialist (like me at Prommt), embed them in the design and sprint cycles rather than keeping them on the sidelines. When devs and ops see security folks as partners rather than gatekeepers, they're more likely to engage us early to get things right.&lt;/p&gt;

&lt;p&gt;By fostering this collaborative mindset early, you reduce friction later on. I've seen the contrast: in one organization that embraced DevSecOps from day one, developers would literally call me over to double-check their approach to storing passwords or to brainstorm the safest way to implement a feature. In another company that bolted on security much later, every security fix felt like pulling teeth: developers were defensive and saw it as extra work. The difference was night and day, and it all came down to culture.&lt;/p&gt;

&lt;p&gt;Plus, a security-first culture is something you can proudly share with investors and customers. It shows that your startup is mature and trustworthy beyond its years. In the long run, that reputation can be as valuable as any feature on your roadmap.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our DevSecOps Journey at Prommt: Integrating AWS Security Services into Our CI/CD Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let me share a concrete example from my current role. When I joined Prommt as the Lead DevSecOps Engineer, we were a rapidly scaling startup in the payments industry. Given our responsibility for handling sensitive financial transactions, we understood immediately that security had to be integral, not an afterthought. Early on, we made deliberate choices to embed robust security measures like AWS WAF, Inspector, AWS Landing Zone, and GuardDuty directly into our development lifecycle through our CI/CD processes.&lt;/p&gt;

&lt;p&gt;What does this integration look like in practice? Here are a few highlights:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Landing Zone as a Secure Foundation&lt;/strong&gt;: We adopted AWS Landing Zone to establish a secure, multi-account AWS environment right from the start. This provided us with a standardized baseline for managing account governance, access controls, and compliance across the organization, significantly reducing operational overhead and security risk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security-Embedded CI/CD Pipelines&lt;/strong&gt;: Every code commit automatically triggers our CI/CD pipeline, which includes security checks leveraging SonarQube, Automated Security Helper (ASH), and AWS Inspector. If critical issues are detected, deployments are halted immediately, and developers get clear, actionable remediation guidance. Catching vulnerabilities early means fewer risks make it into production.&lt;/p&gt;
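&lt;p&gt;The halt-on-critical behaviour can be expressed as a tiny gate step in a pipeline. This is an illustrative sketch, not our actual pipeline code: it assumes the scanners emit findings with one severity keyword per line, and it blocks the deploy stage when anything critical appears.&lt;/p&gt;

```shell
# Read scanner findings (one per line, severity keyword first) on stdin and
# fail the pipeline stage when any CRITICAL finding is present.
security_gate() {
  if grep -q '^CRITICAL'; then
    echo "deploy blocked: critical security findings"
    return 1
  fi
  echo "security gate passed"
}
```

&lt;p&gt;Because the gate is just an exit code, any CI system can chain it, e.g. &lt;code&gt;run_scanners | security_gate || exit 1&lt;/code&gt; (both names here are hypothetical).&lt;/p&gt;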

&lt;p&gt;&lt;strong&gt;Real-time Threat Detection with AWS GuardDuty&lt;/strong&gt;: We integrated AWS GuardDuty into our platform to provide continuous threat detection and monitoring. GuardDuty analyzes logs and network activity across our AWS accounts, automatically identifying suspicious activity such as unauthorized access attempts or unusual API calls. This proactive approach helps us quickly pinpoint and address security incidents before they escalate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Web Application Firewall (AWS WAF)&lt;/strong&gt;: To protect our web-facing services, we implemented AWS WAF at the platform level. WAF provides robust protection against common web exploits such as SQL injection and cross-site scripting (XSS). By integrating WAF rules directly into our deployment process, each new web application or microservice is immediately shielded from known threats, significantly reducing our attack surface.&lt;/p&gt;

&lt;p&gt;Integrating these AWS security tools delivered substantial benefits:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streamlined Security Operations&lt;/strong&gt;: With AWS Landing Zone and GuardDuty in place, we've significantly cut down on manual security management tasks. Our security team spends less time firefighting and more time improving proactive security measures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced Developer Confidence&lt;/strong&gt;: Embedding security directly within CI/CD pipelines reduces the stress associated with deployments. Developers appreciate the rapid feedback from SonarQube, Automated Security Helper (ASH), AWS Inspector, and GuardDuty, knowing that potential issues are flagged immediately and can be addressed as part of their normal workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simplified Compliance Audits&lt;/strong&gt;: As a payments-focused startup regularly audited for compliance (PCI-DSS, GDPR), the standardization offered by AWS Landing Zone and the continuous protection of GuardDuty and WAF make our audit processes smoother and less disruptive. Our security practices are transparent and consistently enforced, satisfying auditors and stakeholders alike.&lt;/p&gt;

&lt;p&gt;Perhaps most importantly, integrating AWS security services has significantly boosted team morale. Developers at Prommt now spend less time worrying about hidden vulnerabilities or security misconfigurations and more time delivering secure, quality features to our customers. Security has genuinely become an empowering aspect of their everyday workflow.&lt;/p&gt;

&lt;p&gt;The Prommt story is just one way to implement DevSecOps early, but it shows that even a small team can build powerful safeguards into their workflow. The key takeaway is to &lt;strong&gt;embed security and automation into the fabric of your development process&lt;/strong&gt;. Whether it's a full-blown IDP or simply a well-tuned CI/CD pipeline with security gates, that integration will pay dividends as you scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Payoff: Trust, Speed, and Peace of Mind&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Embracing DevSecOps from day one isn't just about avoiding disasters; it can actually &lt;strong&gt;fuel your startup's growth&lt;/strong&gt;. When security is baked in early, you're not constantly putting out fires. You can ship features faster and more confidently because the team isn't bogged down by last-minute security scrambles or emergency patching.&lt;/p&gt;

&lt;p&gt;There's also a clear business upside. Customers and investors might not see all the under-the-hood work, but they &lt;em&gt;do&lt;/em&gt; notice the outcomes: your product is reliable, there's no news of data leaks, and you can speak confidently about your security practices. This builds trust. I've been in due diligence meetings where a startup's security posture was a deciding factor for an investment. Being able to say "Yes, we have automated security testing, infrastructure guardrails, and a security-aware culture from day one" can literally &lt;strong&gt;secure&lt;/strong&gt; the deal (pun intended). In a crowded market, a strong security story sets you apart; it signals that you're not just moving fast, you're moving fast &lt;em&gt;and&lt;/em&gt; safe.&lt;/p&gt;

&lt;p&gt;Finally, consider the human factor: peace of mind. Launching a startup is stressful enough without wondering if today is the day you'll get hacked or accidentally expose user data. Knowing that you've put a DevSecOps foundation in place, even if it's not perfect, helps everyone sleep a little better at night. I can attest that it's a great feeling when weeks go by without any 3 AM security emergencies. Your on-call engineers will thank you when they're not waking up to critical pager alerts every other week.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Security doesn't have to be the enemy of innovation. In fact, when done right, it's a catalyst for sustainable innovation. By adopting DevSecOps from day one, you're investing in your startup's long-term resilience. The payoff comes in many forms: fewer security incidents, faster delivery, happier developers, and greater trust from users and investors.&lt;/p&gt;

&lt;p&gt;If you're a startup founder or engineer, here are a few actionable takeaways to get started:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start Small, But Start Now:&lt;/strong&gt; Pick one or two security practices and integrate them into your development process &lt;em&gt;this week&lt;/em&gt;. For example, enable an automated dependency scan in your build, or add a step in code review to check for basic security issues. You don't have to do everything at once; the important part is to begin.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automate the Basics:&lt;/strong&gt; Identify common security gotchas (like leaked credentials, open admin ports, or outdated libraries) and use scripts or tools to check for them continuously. Set up alerts for the critical ones. Early automation yields big benefits with minimal effort.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Educate &amp;amp; Involve the Team:&lt;/strong&gt; Share at least one security tip or lesson in your next team meeting. Encourage questions about security when designing features. Maybe even host a casual threat modeling session over pizza. Make security a normal part of the conversation, not a taboo topic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Leverage the Community and Tools:&lt;/strong&gt; You're not alone in this. There are plenty of free resources, open-source tools, and communities (DevSecOps forums, blogs, etc.) where you can learn tips tailored for startups. Don't reinvent the wheel; borrow proven ideas and adapt them to your needs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the end, DevSecOps is about building a company that can move fast &lt;strong&gt;and&lt;/strong&gt; confidently. I've been on both sides, the frantic firefighting mode and the smooth, secure delivery mode, and I can't recommend the proactive approach enough. Security-conscious startups set themselves up for success by treating DevSecOps as a day-one priority.&lt;/p&gt;

&lt;p&gt;Thanks for reading! If you have questions or want to share your own startup security stories, drop a comment I’d love to hear your experiences.&lt;/p&gt;

</description>
      <category>landingzoneaccelerator</category>
      <category>devops</category>
      <category>devsecops</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
