<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hejun Wong</title>
    <description>The latest articles on DEV Community by Hejun Wong (@herjean7).</description>
    <link>https://dev.to/herjean7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F274329%2Fbdd15acf-9656-4e34-9468-a8fa86a16540.png</url>
      <title>DEV Community: Hejun Wong</title>
      <link>https://dev.to/herjean7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/herjean7"/>
    <language>en</language>
    <item>
      <title>Decoding MongoDB Metrics: A Practical Guide for DBAs, Developers and DevOps Engineers</title>
      <dc:creator>Hejun Wong</dc:creator>
      <pubDate>Wed, 27 Aug 2025 04:07:30 +0000</pubDate>
      <link>https://dev.to/herjean7/decoding-mongodb-metrics-a-practical-guide-for-dbas-developers-and-devops-engineers-e3n</link>
      <guid>https://dev.to/herjean7/decoding-mongodb-metrics-a-practical-guide-for-dbas-developers-and-devops-engineers-e3n</guid>
      <description>&lt;p&gt;When you're new to MongoDB, diving into the monitoring dashboard can feel like trying to read a foreign language. You're met with a sea of charts and numbers, but what do they all mean? What's good? What's bad? What demands immediate attention?&lt;/p&gt;

&lt;p&gt;This is a critical skill for anyone managing or building on MongoDB, from DBAs and DevOps engineers to developers. Understanding these metrics is the key to proactive performance tuning, efficient resource provisioning, and preventing outages before they happen.&lt;/p&gt;

&lt;p&gt;Over the years, I've spent a lot of time walking new team members, customers, and SREs through these charts. This article is my attempt to distill that knowledge and help you understand your MongoDB deployment's health a little better.&lt;/p&gt;

&lt;p&gt;Let's demystify some of the most important metrics you should be watching.&lt;/p&gt;




&lt;h3&gt;
  
  
  Core Performance Indicators
&lt;/h3&gt;

&lt;p&gt;These metrics give you a high-level overview of the database's workload and responsiveness.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8qd26e44slske2vmp7a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8qd26e44slske2vmp7a.png" alt="MongoDB Atlas Opcounters Metric" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  1. OpCounters
&lt;/h4&gt;

&lt;p&gt;While &lt;code&gt;OpCounters&lt;/code&gt; (operation counters) don't directly signal a health problem, they provide essential context. This metric breaks down the database operations (&lt;code&gt;insert&lt;/code&gt;, &lt;code&gt;query&lt;/code&gt;, &lt;code&gt;update&lt;/code&gt;, &lt;code&gt;delete&lt;/code&gt;, etc.) happening over a specific period. By itself, it tells you &lt;em&gt;what&lt;/em&gt; the database is doing. When correlated with other metrics like CPU or disk I/O, it helps you understand &lt;em&gt;why&lt;/em&gt; the system is behaving a certain way.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Operation Execution Times
&lt;/h4&gt;

&lt;p&gt;This chart shows the average time, in milliseconds, that database operations take to execute.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Good:&lt;/strong&gt; Low values. This indicates your database is processing requests efficiently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bad:&lt;/strong&gt; Rising values or spikes. Increasing execution times are a clear signal of performance degradation, possibly due to inefficient queries, resource contention, or network issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By default, MongoDB considers any operation that takes longer than &lt;strong&gt;100ms&lt;/strong&gt; to be slow. If you see a trend of rising execution times, it's time to use the &lt;strong&gt;Query Profiler&lt;/strong&gt; to investigate and identify the specific queries that are slowing things down.&lt;/p&gt;
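&lt;p&gt;As a quick illustration, flagging operation types that breach the 100ms default threshold might look like this (the sample numbers are hypothetical):&lt;/p&gt;

```javascript
// Hypothetical sample of average execution times (ms) per operation type.
// The 100 ms threshold matches MongoDB's default slowms setting.
const SLOW_MS = 100;

const samples = [
  { op: 'query', avgMs: 12 },
  { op: 'update', avgMs: 140 },
  { op: 'insert', avgMs: 8 },
];

// Flag operation types whose average time exceeds the slow threshold.
const slowOps = samples.filter((s) => s.avgMs > SLOW_MS).map((s) => s.op);
console.log(slowOps); // [ 'update' ]
```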

&lt;h4&gt;
  
  
  3. Normalized CPU (System &amp;amp; Process)
&lt;/h4&gt;

&lt;p&gt;CPU is a fundamental resource. In most monitoring tools, you'll see a "normalized" value, which is incredibly helpful. Normalization divides the absolute CPU usage by the number of CPU cores, giving you an easy-to-read percentage from 0-100%.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Normalized Process CPU:&lt;/strong&gt; Tracks the CPU usage of the &lt;code&gt;mongod&lt;/code&gt; process itself. This is your primary indicator of the database's CPU load.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Normalized System CPU:&lt;/strong&gt; Tracks the total CPU usage of all processes on the host machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A healthy range for Normalized Process CPU is often between &lt;strong&gt;40-70%&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Under 40%:&lt;/strong&gt; You might be over-provisioned for your current workload.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Over 70% (sustained):&lt;/strong&gt; You may be under-provisioned, and the CPU could become a bottleneck. When provisioning and sizing, CPU should always be considered alongside memory, storage, and IOPS.&lt;/li&gt;
&lt;/ul&gt;
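&lt;p&gt;The normalization itself is simple arithmetic; a sketch with made-up numbers:&lt;/p&gt;

```javascript
// Normalization: divide total CPU usage (summed across cores) by core count,
// so any host reads on the same 0-100% scale. The numbers below are made up.
function normalizedCpu(totalCpuPercent, coreCount) {
  return totalCpuPercent / coreCount;
}

// A 4-core host where mongod consumes 260% of a single core's capacity:
console.log(normalizedCpu(260, 4)); // 65 -> inside the healthy 40-70% band
```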

&lt;h4&gt;
  
  
  4. Queues
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;Queues&lt;/code&gt; metric shows the number of operations waiting for the database to process them. It's a direct measure of demand versus capacity.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Good:&lt;/strong&gt; &lt;code&gt;0&lt;/code&gt;. When there are no queues, your database is keeping up with incoming requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bad:&lt;/strong&gt; Any sustained number greater than zero. A large queue indicates the database cannot process operations in a timely fashion, leading to increased latency for your application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Queues are often a symptom of other problems, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inefficient queries that need indexes.&lt;/li&gt;
&lt;li&gt;Hardware bottlenecks (CPU, IOPS).&lt;/li&gt;
&lt;li&gt;Inefficient data models, such as many application instances updating the very same document, causing lock contention.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Storage and Disk Metrics
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx5ha4avv9ia599qaket.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx5ha4avv9ia599qaket.png" alt="MongoDB Atlas Disk Space % Free Metric" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  5. Disk Space Percent Free
&lt;/h4&gt;

&lt;p&gt;This one is straightforward: it's the percentage of your disk that is available. Monitoring the percentage is often more intuitive than tracking absolute gigabytes free, as it saves you a few mental steps.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Good:&lt;/strong&gt; Consistently above &lt;strong&gt;20%&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Caution:&lt;/strong&gt; Falling below &lt;strong&gt;10%&lt;/strong&gt;. If available space is fully depleted, your database will stop accepting writes, leading to downtime. While 10% of a very large disk may still be plenty of space, consider setting alerts at this threshold.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  6. Disk IOPS (I/O Operations Per Second)
&lt;/h4&gt;

&lt;p&gt;This metric reflects the read and write throughput of your disk. It's crucial to ensure this value stays comfortably below the maximum IOPS provisioned for your hardware.&lt;/p&gt;

&lt;p&gt;If the Disk IOPS metric is consistently high, check the Disk Queue Depth metric: is it also showing a consistently high value (&amp;gt;10)? If yes to both, consider increasing your provisioned IOPS or scaling up your cluster to get more RAM (when more of your data fits in the WiredTiger cache, MongoDB does not need to go to disk to retrieve it as often).&lt;/p&gt;

&lt;p&gt;If your average IOPS is consistently hovering near the maximum, you are approaching a performance cliff. When disk IOPS are saturated, the storage subsystem cannot service read and write requests in a timely manner. This causes a cascading failure:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The database journaling system may block, waiting to write to disk.&lt;/li&gt;
&lt;li&gt; The storage engine cannot flush modified data from memory to disk in a timely manner (checkpointing).&lt;/li&gt;
&lt;li&gt; This leads to a surge in queue length and operation latency, causing the cluster to become unresponsive or &lt;strong&gt;stall&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To fix this, you can either provision more IOPS (which can be costly) or increase your storage size, as IOPS often scale with disk capacity on cloud providers.&lt;/p&gt;

&lt;p&gt;Monitoring the Disk IOPS metric is more important than the Max Disk IOPS metric, as it reflects your actual IOPS. Max Disk IOPS records only the maximum observed value and is not representative of your typical workload.&lt;/p&gt;

&lt;h4&gt;
  
  
  7. Disk Latency
&lt;/h4&gt;

&lt;p&gt;This is the average time, in milliseconds, for read and write operations to complete on the disk. It's a direct measure of your storage performance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Excellent:&lt;/strong&gt; Consistently under &lt;strong&gt;5ms&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Acceptable:&lt;/strong&gt; Between &lt;strong&gt;5ms&lt;/strong&gt; and &lt;strong&gt;20ms&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bad:&lt;/strong&gt; Sustained latency over &lt;strong&gt;20ms&lt;/strong&gt; signals a disk bottleneck. If you see latency spiking above 100ms, it's a critical issue that needs immediate investigation with your infrastructure provider.&lt;/li&gt;
&lt;/ul&gt;
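&lt;p&gt;A rough classifier for these latency bands (thresholds as above; this is an illustrative sketch, not an official rule):&lt;/p&gt;

```javascript
// Classify average disk latency (ms) into the bands described above.
function classifyDiskLatency(ms) {
  if (5 > ms) return 'excellent';
  if (20 >= ms) return 'acceptable';
  if (100 > ms) return 'bad';
  return 'critical';
}

console.log(classifyDiskLatency(3));   // excellent
console.log(classifyDiskLatency(12));  // acceptable
console.log(classifyDiskLatency(45));  // bad
console.log(classifyDiskLatency(150)); // critical
```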




&lt;h3&gt;
  
  
  Memory Related Metrics
&lt;/h3&gt;

&lt;p&gt;MongoDB loves memory. It uses RAM to cache your working set (frequently accessed data and indexes), which dramatically reduces the need for slow disk I/O.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4zjwmq1qq348u54y7wp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4zjwmq1qq348u54y7wp.png" alt="MongoDB Atlas Memory Metric" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  8. Memory (Resident vs. Virtual)
&lt;/h4&gt;

&lt;p&gt;You'll typically see two memory metrics: virtual and resident. While virtual memory is the total address space allocated by the process, &lt;strong&gt;resident memory&lt;/strong&gt; is the one to watch. It shows the actual amount of physical RAM the &lt;code&gt;mongod&lt;/code&gt; process is using.&lt;/p&gt;

&lt;p&gt;After your database has been running for a while and has loaded its working set into memory, the resident memory usage should stabilize into a relatively flat line. This indicates it has reached a steady state.&lt;/p&gt;

&lt;h4&gt;
  
  
  9. Cache Ratio (Fill &amp;amp; Dirty)
&lt;/h4&gt;

&lt;p&gt;The WiredTiger storage engine uses an internal cache to hold data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cache Fill Ratio:&lt;/strong&gt; This measures how full that cache is. In a healthy, active deployment, this value should hover around &lt;strong&gt;80%&lt;/strong&gt;. If your working set (the data you access frequently) fits in memory, this ratio will be high. If it approaches 100%, it could mean your working set is larger than your cache. Increasing the instance's RAM could reduce disk I/O and improve performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dirty Fill Ratio:&lt;/strong&gt; This represents the percentage of the cache that contains "dirty" data—data that has been modified in memory but not yet written (flushed) to disk. This value should stay &lt;strong&gt;below 5%&lt;/strong&gt;. If it consistently goes above this, MongoDB may employ application threads to help with data eviction, directly degrading your database's performance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
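&lt;p&gt;To see where these ratios come from, here is a sketch computing them from the &lt;code&gt;wiredTiger.cache&lt;/code&gt; section of &lt;code&gt;db.serverStatus()&lt;/code&gt; output (the byte values below are made up):&lt;/p&gt;

```javascript
// Hypothetical numbers in the shape of db.serverStatus().wiredTiger.cache
// (field names follow the serverStatus output; the values are made up).
const cache = {
  'maximum bytes configured': 1024 * 1024 * 1024, // 1 GiB cache
  'bytes currently in the cache': 850 * 1024 * 1024,
  'tracked dirty bytes in the cache': 30 * 1024 * 1024,
};

const max = cache['maximum bytes configured'];
const fillRatio = 100 * cache['bytes currently in the cache'] / max;
const dirtyRatio = 100 * cache['tracked dirty bytes in the cache'] / max;

console.log(fillRatio.toFixed(1));  // 83.0 -> near the healthy ~80% mark
console.log(dirtyRatio.toFixed(1)); // 2.9  -> below the 5% warning line
```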

&lt;h4&gt;
  
  
  10. Connections
&lt;/h4&gt;

&lt;p&gt;Every open connection to your database consumes resources, typically around 1MB of RAM. It's vital to manage your connection pool effectively. Uncontrolled connections can exhaust RAM and bring a database to its knees.&lt;/p&gt;

&lt;p&gt;For applications running in containerized environments like Kubernetes, where many pods can spin up, it's easy to create a connection storm. I typically recommend setting these two connection string options in your driver:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;maxIdleTimeMS=60000&lt;/code&gt;: Closes connections that have been idle for 60 seconds, preventing unused connections from lingering.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;maxPoolSize&lt;/code&gt;: Limits the number of connections per application instance. This setting is critical, but there's no single magic number: it truly depends on your application's workload and the infrastructure it runs on. For a microservice in a small pod (1-2 CPU cores), a conservative value like &lt;code&gt;maxPoolSize=5&lt;/code&gt; is an excellent starting point. However, if you have a monolithic application on a large 16-core VM, the default of 100 might be a perfectly reasonable choice to support the application's concurrency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your total connections can be estimated with:&lt;br&gt;
&lt;code&gt;Total Connections ≈ (Number of Application Pods) * maxPoolSize&lt;/code&gt;&lt;/p&gt;
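&lt;p&gt;A back-of-envelope sketch of that estimate, with illustrative pod and pool-size numbers:&lt;/p&gt;

```javascript
// Back-of-envelope estimate of total server-side connections.
// Pod count and pool size below are illustrative, not recommendations.
function estimateConnections(podCount, maxPoolSize) {
  return podCount * maxPoolSize;
}

const total = estimateConnections(40, 5);
const approxRamMb = total * 1; // roughly 1 MB of server RAM per connection
console.log(total, approxRamMb); // 200 200
```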




&lt;h3&gt;
  
  
  Developer-Focused Metrics
&lt;/h3&gt;

&lt;p&gt;These metrics provide direct feedback on the efficiency of your queries and data model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo21j5cjxfjpacv3am5im.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo21j5cjxfjpacv3am5im.png" alt="MongoDB Atlas Scan and Order Metric" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  11. Query Targeting
&lt;/h4&gt;

&lt;p&gt;This is a powerful metric that measures index efficiency. It's the ratio of documents scanned to documents returned.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Query Targeting Ratio = (documents examined or index keys examined) / documents returned&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Good:&lt;/strong&gt; A ratio of &lt;strong&gt;1:1&lt;/strong&gt; is perfect: every document MongoDB had to look at was a document your query actually needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bad:&lt;/strong&gt; A high ratio indicates your queries are scanning many irrelevant documents to find the ones they need. This points to missing or suboptimal indexes.&lt;/li&gt;
&lt;/ul&gt;
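&lt;p&gt;A minimal sketch of the ratio, using hypothetical counts such as you might read off an &lt;code&gt;explain()&lt;/code&gt; summary:&lt;/p&gt;

```javascript
// Query targeting ratio: keys/documents examined divided by documents returned.
// The sample counts are hypothetical.
function queryTargetingRatio(examined, returned) {
  return examined / returned;
}

console.log(queryTargetingRatio(100, 100));  // 1    -> perfectly indexed
console.log(queryTargetingRatio(50000, 10)); // 5000 -> scanning far too much
```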

&lt;h4&gt;
  
  
  12. Scan and Order
&lt;/h4&gt;

&lt;p&gt;This metric tracks queries that perform an in-memory sort. Sorting large result sets in memory after fetching them is very expensive, consuming significant CPU and memory.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Good:&lt;/strong&gt; &lt;code&gt;0&lt;/code&gt;. This means all sorting is being done efficiently using an index's inherent order.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bad:&lt;/strong&gt; Any value greater than zero. If you see &lt;code&gt;scanAndOrder&lt;/code&gt; operations, you should review your queries to see if a new or modified index can provide the requested sort order, eliminating the costly in-memory step.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Replication Metrics
&lt;/h3&gt;

&lt;p&gt;For any production replica set, ensuring data is copied efficiently and reliably is paramount.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdjdn6odzqqrpj1vs1p6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdjdn6odzqqrpj1vs1p6.png" alt="MongoDB Atlas Oplog Window Metric" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  13. Replication Lag
&lt;/h4&gt;

&lt;p&gt;This is the approximate time, in seconds, that a secondary member is behind the primary's write operations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Good:&lt;/strong&gt; Low and stable lag, typically under &lt;strong&gt;5 seconds&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bad:&lt;/strong&gt; High (over &lt;strong&gt;10 seconds&lt;/strong&gt;) or growing lag. This can lead to stale reads from secondaries.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  14. Replication Oplog Window
&lt;/h4&gt;

&lt;p&gt;The oplog (operations log) is the special collection that records all write operations. The "oplog window" is the duration of time that those operations are retained. A sufficiently large window allows a secondary that falls behind (e.g., due to a network issue or downtime) to catch up without needing an "&lt;a href="https://www.mongodb.com/docs/manual/core/replica-set-sync/#initial-sync" rel="noopener noreferrer"&gt;initial sync&lt;/a&gt;" — a highly resource-intensive process of re-copying the entire dataset.&lt;/p&gt;

&lt;p&gt;We typically recommend customers maintain an oplog window of at least &lt;strong&gt;3 days (72 hours)&lt;/strong&gt;. Why? Imagine a secondary stops replicating on a Friday evening. When the DBA comes in on Monday, they have a comfortable buffer to fix the node before it falls too far behind and requires a full resync.&lt;/p&gt;

&lt;h4&gt;
  
  
  15. Oplog GB/Hour
&lt;/h4&gt;

&lt;p&gt;So, how do you size your oplog for a 3-day window? Use the &lt;code&gt;Oplog GB/Hour&lt;/code&gt; metric. Find the amount of oplog data generated during your busiest hour and use that as a baseline.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Required Oplog Size = (Peak Oplog GB/Hour) * 72&lt;/code&gt;&lt;/p&gt;
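&lt;p&gt;As a worked example (the peak value is hypothetical):&lt;/p&gt;

```javascript
// Required oplog size for a 72-hour window, from the peak Oplog GB/Hour metric.
// peakGbPerHour is a hypothetical reading from the chart.
function requiredOplogGb(peakGbPerHour, windowHours = 72) {
  return peakGbPerHour * windowHours;
}

console.log(requiredOplogGb(0.5)); // 36 GB for a 3-day window
```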




&lt;h3&gt;
  
  
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;Monitoring is not a passive activity. It's about understanding the story your database is telling you. By keeping an eye on these key metrics, you can move from a reactive to a proactive approach, ensuring your MongoDB deployment remains healthy, scalable, and performant.&lt;/p&gt;

&lt;p&gt;What are your go-to metrics? Do you have other tips for interpreting database health? Share your thoughts and suggestions in the comments below!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(This article was inspired by and builds upon some concepts from &lt;a href="https://www.mongodb.com/resources/products/capabilities/how-to-monitor-mongodb-and-what-metrics-to-monitor" rel="noopener noreferrer"&gt;MongoDB's monitoring guide&lt;/a&gt;.)&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>monitoring</category>
      <category>database</category>
    </item>
    <item>
      <title>Integrating MongoDB Atlas Alerts with Lark Custom Bot via AWS Lambda</title>
      <dc:creator>Hejun Wong</dc:creator>
      <pubDate>Sat, 21 Dec 2024 15:29:00 +0000</pubDate>
      <link>https://dev.to/herjean7/integrating-mongodb-atlas-alerts-with-lark-custom-bot-via-aws-lambda-4adh</link>
      <guid>https://dev.to/herjean7/integrating-mongodb-atlas-alerts-with-lark-custom-bot-via-aws-lambda-4adh</guid>
      <description>&lt;h2&gt;
  
  
  Why Integrate Atlas with Lark?
&lt;/h2&gt;

&lt;p&gt;MongoDB Atlas provides a sophisticated monitoring system that can alert teams to performance issues, security concerns, or other critical system events. Lark, on the other hand, is a powerful, versatile communication platform gaining popularity among businesses for its efficient collaboration tools. &lt;/p&gt;

&lt;p&gt;However, integrating these alerts with communication platforms like Lark can be challenging since Atlas doesn't directly support it. Integrating Atlas alerts with Lark can streamline incident management by ensuring that critical alerts are immediately communicated to the right team members through their preferred communication channels.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Integration Strategy
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Understanding the Workflow&lt;/strong&gt;&lt;br&gt;
To integrate Atlas alerts with Lark, we need to create a middleware that receives alerts from Atlas, transforms the data into a Lark-compatible format, and forwards it to Lark. &lt;/p&gt;

&lt;p&gt;Here's a step-by-step overview of the process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Set up a Lark Custom Bot in the Lark group where you would like to receive the Atlas alerts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up an AWS Lambda Function: This function will serve as the middleware that processes incoming alert data from Atlas, transforms it into a format compatible with Lark and forwards it to the Lark group.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure API Gateway: Use API Gateway to expose a public REST endpoint that Atlas can send webhooks to. This gateway triggers the Lambda function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrate Atlas with the Webhook: Configure your Atlas project to send alerts to the API Gateway endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test out the Solution&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Implementing the Solution
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Define Environment Variables&lt;/strong&gt;: Configure your Lambda function with the following environment variables to facilitate communication with Lark:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LARK_HOSTNAME: The base hostname for the Lark API, e.g. &lt;code&gt;open.larksuite.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;LARK_PATH: The specific webhook path provided by Lark's Custom Bot &lt;code&gt;/open-apis/bot/v2/hook/...&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;LARK_SECRET: The secret token provided by Lark's Custom Bot when signature verification has been enabled&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Lambda Function&lt;/strong&gt;: Create a Lambda function that receives JSON payloads from Atlas and formats them for Lark.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { request } from 'https';
import crypto from 'crypto';
function genSign(timestamp,secret) {
    // Take timestamp + "\n" + secret as the signature string
    const stringToSign = `${timestamp}\n${secret}`;
    // Use the HmacSHA256 algorithm to calculate the signature
    const hmac = crypto.createHmac('sha256', stringToSign);
    const signData = hmac.digest();
    // Return the Base64 encoded result
    return signData.toString('base64');
  }
export const handler = async (event) =&amp;gt; {
    try {
        const parsedBody = JSON.parse(event.body);
        // Retrieve the humanReadable portion
        const humanReadableContent = parsedBody.humanReadable;
        const timestamp = Math.floor(Date.now() / 1000); // Current Unix time in seconds
        const secret = process.env.LARK_SECRET; // Replace with your Lark secret (enable Set signature verification)
        const signature = genSign(timestamp, secret);
        // Transform the payload to Lark's format
        const larkPayload = JSON.stringify({
            timestamp: timestamp,
            sign: signature,
            msg_type: 'text',
            content: {
                text: `Alert from MongoDB: \n${humanReadableContent}`,
            },
        });
        console.log('lark payload:', larkPayload);
        const options = {
            hostname: process.env.LARK_HOSTNAME, // Accesses the environment variable
            path: process.env.LARK_PATH,         // Accesses the environment variable
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Content-Length': Buffer.byteLength(larkPayload), // Byte length, not character count
            },
        };
        await new Promise((resolve, reject) =&amp;gt; {
            const req = request(options, (res) =&amp;gt; {
                let data = '';
                res.on('data', (chunk) =&amp;gt; {
                    data += chunk;
                });
                res.on('end', () =&amp;gt; {
                    if (res.statusCode === 200) {
                        resolve();
                    } else {
                        reject(new Error(`Request failed. Status code: ${res.statusCode}`));
                    }
                });
            });
            req.on('error', (e) =&amp;gt; {
                reject(e);
            });
            // Write the larkPayload data
            req.write(larkPayload);
            req.end();
        });
        return {
            statusCode: 200,
            body: JSON.stringify({ message: 'Alert forwarded to Lark' }),
        };
    } catch (error) {
        console.error('Error sending alert to Lark:', error);
        return {
            statusCode: 500,
            body: JSON.stringify({ error: 'Failed to send alert to Lark' }),
        };
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;AWS API Gateway&lt;/strong&gt;: Configure it to trigger the above Lambda function when a REST request from Atlas is received.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrate Atlas with the Webhook&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Within your MongoDB Atlas Project Settings Page&lt;/li&gt;
&lt;li&gt;Navigate to the Integrations section&lt;/li&gt;
&lt;li&gt;Configure a new "Webhook"&lt;/li&gt;
&lt;li&gt;In the "Webhook" field, enter the API Gateway endpoint URL that you configured&lt;/li&gt;
&lt;li&gt;Save the settings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Test out the Solution&lt;/strong&gt;&lt;br&gt;
Add the Webhook as a new notifier within an existing active alert (e.g. Host has restarted) and perform a Resiliency Test on your MongoDB Cluster&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By creating this middleware, we've effectively integrated MongoDB Atlas alerts with Lark, enhancing the operational communication within your organization. This setup allows for immediate and streamlined alert management, ensuring that your team can respond quickly to any issues that arise. Feel free to adapt and expand upon this solution to suit your organization's specific needs.&lt;/p&gt;

&lt;p&gt;Integrating Atlas alerts with Lark can be a simple yet powerful improvement to your operational workflow. I hope this guide helps you in implementing it. Let me know your thoughts and feel free to share any enhancements you make.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>lark</category>
      <category>webhook</category>
      <category>alerts</category>
    </item>
    <item>
      <title>Load Testing MongoDB with Locust</title>
      <dc:creator>Hejun Wong</dc:creator>
      <pubDate>Wed, 21 Aug 2024 15:43:23 +0000</pubDate>
      <link>https://dev.to/herjean7/load-testing-mongodb-with-locust-518</link>
      <guid>https://dev.to/herjean7/load-testing-mongodb-with-locust-518</guid>
      <description>&lt;h2&gt;
  
  
  What is &lt;a href="https://locust.io/" rel="noopener noreferrer"&gt;Locust.io&lt;/a&gt;?
&lt;/h2&gt;

&lt;p&gt;Locust is an open source load testing tool written in Python. Test scenarios are defined in plain Python code, and the tool can be containerised and deployed on Kubernetes for distributed load generation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sabyadi/mongolocust/" rel="noopener noreferrer"&gt;Mongolocust&lt;/a&gt; allows you to code up MongoDB CRUD operations in Python and visualise their executions in a browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Load Test?
&lt;/h2&gt;

&lt;p&gt;Load testing helps us understand how well our database performs under various levels of stress, identify performance bottlenecks, and test our database's auto-scaling capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Goals
&lt;/h2&gt;

&lt;p&gt;In this article, you will learn how to deploy Locust on AWS EKS and load test your MongoDB cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/book/en/v2/Getting-Started-Installing-Git" rel="noopener noreferrer"&gt;Git installed on your local machine&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;AWS CLI installed on your local machine&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html" rel="noopener noreferrer"&gt;AWS CLI Profile setup on your local machine&lt;/a&gt;, I named my profile "aws-tester"&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Steps
&lt;/h2&gt;

&lt;p&gt;Install kubectl, the Swiss Army knife of Kubernetes command-line tools&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install eksctl, a simple CLI tool for creating and managing clusters on AWS EKS&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create AWS EKS Cluster&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl --profile aws-tester create cluster \
--name mongo-locust-cluster \
--version 1.30 \
--region ap-southeast-1 \
--nodegroup-name locust-nodes \
--node-type t3.medium \
--nodes 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Clone the Mongolocust repository&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/sabyadi/mongolocust.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set &lt;code&gt;CLUSTER_URL&lt;/code&gt; within &lt;code&gt;k8s/secret.yaml&lt;/code&gt; to your MongoDB Cluster's URL&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd mongolocust/k8s
vi secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy Mongolocust on AWS EKS&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ..
./redeploy.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Forward port 8089 from the master service to localhost&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward service/master 8089:8089
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access the Locust web interface at &lt;a href="http://localhost:8089" rel="noopener noreferrer"&gt;http://localhost:8089&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scale Locust Workers&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl scale deployment locust-worker-deployment --replicas 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Obtain the Kubernetes nodes' IP addresses and whitelist them on MongoDB Atlas&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes -o wide 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Have fun load testing your MongoDB Cluster! &lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>locust</category>
      <category>loadtest</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Implement a DevSecOps Pipeline with GitHub Actions</title>
      <dc:creator>Hejun Wong</dc:creator>
      <pubDate>Sun, 16 Jun 2024 06:24:22 +0000</pubDate>
      <link>https://dev.to/herjean7/implement-a-devsecops-pipeline-with-github-actions-2lbb</link>
      <guid>https://dev.to/herjean7/implement-a-devsecops-pipeline-with-github-actions-2lbb</guid>
      <description>&lt;h2&gt;
  
  
  Introduction - Birth of DevSecOps
&lt;/h2&gt;

&lt;p&gt;The story of DevSecOps closely follows the story of software development. We saw the industry move from Waterfall to Agile, and everything changed after Agile: with much shorter development cycles came the need for faster deployments to production. &lt;/p&gt;

&lt;p&gt;It was no longer feasible for security teams to make the Dev / Ops teams wait until Vulnerability Assessment and Penetration Testing (VAPT) was complete before changes could be pushed to production; doing so would nullify the speed and agility the team had gained.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;DevOps is a set of practices intended to reduce the time between committing a change to a system and the change being placed into production, while ensuring high quality - Bass, Weber and Zhu &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;By definition, DevOps already includes security as part of operations, but the security industry wanted more focus and emphasis on security; hence the term DevSecOps, or Secure DevOps, came about.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Terms
&lt;/h2&gt;

&lt;p&gt;Before diving into the implementation phase, let's familiarise ourselves with these three security terms. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SCA&lt;/strong&gt; stands for Software Composition Analysis, a technique used to find security vulnerabilities in the third-party components we use in our projects and products, such as the libraries and packages we install. &lt;/p&gt;

&lt;p&gt;We will be using &lt;strong&gt;Snyk&lt;/strong&gt; (pronounced "sneak") as our SCA tool. Snyk is a developer-first SCA solution that helps developers find, prioritize and fix security vulnerabilities and license issues in open source dependencies.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a free Snyk account at &lt;a href="https://snyk.io/" rel="noopener noreferrer"&gt;Snyk&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to Account Settings&lt;/li&gt;
&lt;li&gt;Generate an Auth Token&lt;/li&gt;
&lt;li&gt;This token will be your &lt;strong&gt;SNYK_TOKEN&lt;/strong&gt;. Store it within your GitHub Actions Secrets. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;SAST&lt;/strong&gt; stands for Static Application Security Testing, a technique used to analyse source code, binaries and bytecode for security vulnerabilities without running the code. Since the code is not running but is examined in a static state, this is called static analysis; SAST, SCA and linting are typical examples of static analysis.&lt;/p&gt;

&lt;p&gt;For SAST, we will be using &lt;strong&gt;SonarCloud&lt;/strong&gt;, a cloud-based static analysis tool for your CI/CD pipeline. It supports dozens of popular languages, development frameworks and IaC platforms. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a free Sonar account at &lt;a href="https://www.sonarsource.com/products/sonarcloud/" rel="noopener noreferrer"&gt;SonarCloud&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create a new &lt;strong&gt;Organisation&lt;/strong&gt; and &lt;strong&gt;Project&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to My Account, Security Tab&lt;/li&gt;
&lt;li&gt;Generate a new Token&lt;/li&gt;
&lt;li&gt;This token would be your &lt;strong&gt;SONAR_TOKEN&lt;/strong&gt;. Store it within your GitHub Actions Secrets. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;DAST&lt;/strong&gt; stands for Dynamic Application Security Testing, a technique used to analyse a running application for security vulnerabilities. Since the application is running and being examined dynamically, this is called dynamic analysis. &lt;/p&gt;

&lt;p&gt;For DAST, we will be using &lt;strong&gt;OWASP ZAP&lt;/strong&gt;. ZAP is the world’s most widely used web app scanner: a free, open-source penetration testing tool that is, at its core, a “man-in-the-middle proxy”. You will find three GitHub Actions belonging to OWASP ZAP in the GitHub Marketplace.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub Actions
&lt;/h2&gt;

&lt;p&gt;GitHub Actions is a CI/CD platform that allows us to automate our build, test and deployment pipelines.&lt;/p&gt;

&lt;p&gt;You can find it under the Actions tab of your code repository on GitHub.&lt;/p&gt;

&lt;p&gt;When you click on any of the workflow runs, you can see the jobs that ran under that workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8l9pijj06fkgbu88wpe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8l9pijj06fkgbu88wpe.png" alt="GitHub Actions" width="512" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2288ui6ut5ht7xyaxcy9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2288ui6ut5ht7xyaxcy9.png" alt="Sample GitHub Actions Workflow" width="512" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Above is a sample workflow. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflows&lt;/strong&gt; are automated processes that you can configure to run one or more jobs. Workflows are defined by YAML files checked into your repository, stored in the &lt;code&gt;.github/workflows&lt;/code&gt; directory. You can have multiple workflows, each performing a different set of tasks.&lt;/p&gt;

&lt;p&gt;A workflow can be triggered by an event (e.g. a pull request, or a developer pushing a change to the code repository)&lt;/p&gt;

&lt;p&gt;When that happens, one or more jobs will start running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jobs&lt;/strong&gt; are sets of steps that execute in order and depend on each other. Since the steps run on the same runner, you can share data from one step to another.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Runners&lt;/strong&gt; are servers that run your workflows when triggered; GitHub provides Linux, Windows and macOS virtual machines for this. &lt;/p&gt;

&lt;h2&gt;
  
  
  Our Workflow
&lt;/h2&gt;

&lt;p&gt;Our DevSecOps pipeline will consist of three jobs. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build&lt;/strong&gt; - requests the latest Ubuntu runner, checks out the code with the latest checkout action (v4), installs Node.js version 20, installs our project's dependencies (&lt;code&gt;npm install&lt;/code&gt;), runs our unit tests (&lt;code&gt;npm run test&lt;/code&gt;) and performs SAST using SonarCloud.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SCA&lt;/strong&gt; - requests the latest Ubuntu runner and &lt;em&gt;needs&lt;/em&gt; the Build job to complete before it starts. It checks out the code with the latest checkout action (v4) and runs Snyk against our code repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DAST&lt;/strong&gt; - requests the latest Ubuntu runner, waits for the SCA job to complete, checks out the code with the latest checkout action (v4) and runs OWASP ZAP against a sample website (example.com).&lt;/p&gt;

&lt;p&gt;The entire CI/CD pipeline can be implemented in the roughly 50 lines of code below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build code, run unit test, run SAST, SCA, DAST security scans for NodeJs App
on: push

jobs:
  Build:
    runs-on: ubuntu-latest
    name: Unit Test and SAST
    steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: '20.x'
        cache: npm
    - name: Install dependencies
      run: npm install
    - name: Test and coverage
      run: npm run test
    - name: SonarCloud Scan
      uses: sonarsource/sonarcloud-github-action@master
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
      with:
        args: &amp;gt;
          -Dsonar.organization=[YOUR_SONAR_ORGANISATION]
          -Dsonar.projectKey=[YOUR_SONAR_PROJECT]
  SCA:
    runs-on: ubuntu-latest
    needs: Build
    name: SCA - SNYK
    steps:
      - uses: actions/checkout@v4
      - name: Run Snyk to check for vulnerabilities
        uses: snyk/actions/node@master
        continue-on-error: true
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
  DAST:
    runs-on: ubuntu-latest
    needs: SCA
    name: DAST - ZAP
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          ref: main
      - name: ZAP Scan
        uses: zaproxy/action-baseline@v0.11.0
        with:
          target: 'http://example.com/'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;You may notice that your unit test coverage report isn't being uploaded to SonarCloud. To fix this, create a &lt;code&gt;sonar-project.properties&lt;/code&gt; file in the root of your repository; this file tells Sonar where to retrieve your code coverage reports.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sonar.organization=[INPUT_YOUR_ORGANISATION]
sonar.projectKey=[INPUT_YOUR_PROJECT_KEY]

# relative paths to source directories. More details and properties are described
# in https://sonarcloud.io/documentation/project-administration/narrowing-the-focus/
sonar.sources=.
sonar.exclusions=**/tests/*.js
sonar.language=js

sonar.javascript.lcov.reportPaths=./coverage/lcov.info
sonar.testExecutionReportPaths=./test-report.xml

sonar.sourceEncoding=UTF-8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;For a working example, you can refer to my &lt;a href="https://github.com/herjean7/DevSecOpsTest" rel="noopener noreferrer"&gt;repository&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Cheers!&lt;/p&gt;

</description>
      <category>devsecops</category>
      <category>githubactions</category>
      <category>node</category>
      <category>cicd</category>
    </item>
    <item>
      <title>NodeJS Security Middlewares</title>
      <dc:creator>Hejun Wong</dc:creator>
      <pubDate>Tue, 11 Jun 2024 15:20:59 +0000</pubDate>
      <link>https://dev.to/herjean7/nodejs-security-middlewares-36o3</link>
      <guid>https://dev.to/herjean7/nodejs-security-middlewares-36o3</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Many backend endpoints are written in Node.js, and it is crucial that we protect them. A quick and simple way to do so is with middleware. &lt;/p&gt;

&lt;h2&gt;
  
  
  Middleware
&lt;/h2&gt;

&lt;p&gt;Middleware allows us to intercept and inspect requests, which makes it ideal for logging, authentication and input validation. Here are six security middlewares you can embed into your Node.js project to secure it. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.npmjs.com/package/helmet" rel="noopener noreferrer"&gt;Helmet&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The Helmet package sets security headers in our API responses. These headers provide important security-related instructions to the browser or client about how to handle the content and communication, thus helping to prevent various types of attacks. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.npmjs.com/package/cors" rel="noopener noreferrer"&gt;CORS&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The CORS package allows us to whitelist domains, controlling access to our web resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.npmjs.com/package/express-xss-sanitizer" rel="noopener noreferrer"&gt;Express XSS Sanitizer&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This package sanitizes user input data to prevent Cross-Site Scripting (XSS) attacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.npmjs.com/package/express-rate-limit" rel="noopener noreferrer"&gt;Express Rate Limit&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;If your backend servers are not fronted by a Web Application Firewall (WAF) or protected by DDoS mitigation services, you should definitely install this package and set rate limits to protect your endpoints from being flooded with requests.&lt;/p&gt;
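&lt;p&gt;Under the hood, rate limiting boils down to counting requests per client within a time window. The sketch below illustrates the fixed-window counting idea in plain JavaScript; it is not the library's actual implementation, so use &lt;code&gt;express-rate-limit&lt;/code&gt; itself in production.&lt;/p&gt;

```javascript
// Illustrative fixed-window rate limiter (not express-rate-limit's actual code)
function createLimiter(windowMs, max) {
  const hits = new Map(); // key: client IP, value: { count, windowStart }
  return function isAllowed(ip, now) {
    const entry = hits.get(ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(ip, { count: 1, windowStart: now }); // start a fresh window
      return true;
    }
    entry.count += 1;
    return !(entry.count > max); // reject once the window's budget is used up
  };
}

// 500 requests per 10 minutes per IP
const isAllowed = createLimiter(10 * 60 * 1000, 500);
```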

&lt;h2&gt;
  
  
  &lt;a href="https://www.npmjs.com/package/express-mongo-sanitize" rel="noopener noreferrer"&gt;Express Mongo Sanitizer&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This package sanitizes user-supplied data to prevent MongoDB Operator Injection.&lt;/p&gt;
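&lt;p&gt;To see why this matters, imagine a login handler passing &lt;code&gt;req.body&lt;/code&gt; straight into a MongoDB query: a payload whose password field is &lt;code&gt;{ $gt: "" }&lt;/code&gt; would match any password. The sketch below illustrates the idea behind the sanitizer, stripping keys that begin with &lt;code&gt;$&lt;/code&gt; or contain a dot; it is not the library's actual implementation.&lt;/p&gt;

```javascript
// Illustrative sketch of operator-injection sanitization
// (not express-mongo-sanitize's actual code -- use the package in production)
function sanitize(payload) {
  if (payload === null || typeof payload !== 'object') return payload;
  const clean = Array.isArray(payload) ? [] : {};
  for (const [key, value] of Object.entries(payload)) {
    if (key.startsWith('$') || key.includes('.')) continue; // drop injected operators
    clean[key] = sanitize(value); // recurse into nested objects and arrays
  }
  return clean;
}

// A login payload crafted to match ANY password...
const malicious = { username: 'admin', password: { $gt: '' } };
// ...is reduced to a harmless one: { username: 'admin', password: {} }
const safe = sanitize(malicious);
```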

&lt;h2&gt;
  
  
  &lt;a href="https://www.npmjs.com/package/hpp" rel="noopener noreferrer"&gt;HPP&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Because Express collects repeated HTTP request parameters with the same name into an array, attackers may pollute the parameters to exploit this mechanism. The HPP package protects against such HTTP Parameter Pollution attacks. &lt;/p&gt;
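&lt;p&gt;By default, hpp keeps only the last value of a repeated parameter and moves the polluted array aside. The snippet below is a simplified illustration of that default behaviour in plain JavaScript, not the library's actual code.&lt;/p&gt;

```javascript
// Illustrative sketch of HTTP parameter pollution handling (not hpp's actual code).
// Express parses a querystring that repeats a parameter, such as
// "?role=user" followed by "role=admin", into an array: { role: ['user', 'admin'] }.
function dedupe(query) {
  const cleaned = {};
  for (const [key, value] of Object.entries(query)) {
    // Keep only the last value, so downstream code expecting a string is not surprised
    cleaned[key] = Array.isArray(value) ? value[value.length - 1] : value;
  }
  return cleaned;
}

const polluted = { role: ['user', 'admin'], page: '1' };
const cleaned = dedupe(polluted); // { role: 'admin', page: '1' }
```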

&lt;h2&gt;
  
  
  Sample Code on Usage
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express');
const app = express();

const cors = require("cors");
const helmet = require("helmet");
const { xss } = require("express-xss-sanitizer");
const rateLimit = require("express-rate-limit");
const hpp = require("hpp");
const mongoSanitize = require("express-mongo-sanitize");


// Rate limit 

// Trust the X-Forwarded-* headers
app.set("trust proxy", 2);

const IP_WHITELIST = (process.env.IP_WHITELIST || "").split(",");

const limiter = rateLimit({
  windowMs: 10 * 60 * 1000, // 10 mins
  max: 500, // Limit each IP to 500 requests per 10 mins
  standardHeaders: true, //Return rate limit info in the `RateLimit-*` headers
  legacyHeaders: false, // Disable the 'X-RateLimit-*' headers
  skip: (request, response) =&amp;gt; IP_WHITELIST.includes(request.ip),
});

app.use(limiter);

//Sanitize data
app.use(mongoSanitize());

//Set security headers
app.use(helmet());

//Prevent XSS attacks
app.use(xss());

//Prevent http param pollution
app.use(hpp());

//CORS

const whitelist = ['http://localhost:4000']; 

const corsOptions = {
  origin: function (origin, callback) {
    if (whitelist.indexOf(origin) !== -1) {
      callback(null, true)
    } else {
      callback(new Error('Not allowed by CORS'))
    }
  }
}

app.use(cors(corsOptions));

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>security</category>
      <category>middleware</category>
      <category>api</category>
      <category>node</category>
    </item>
    <item>
      <title>How to write Better User Stories</title>
      <dc:creator>Hejun Wong</dc:creator>
      <pubDate>Fri, 29 Jul 2022 04:18:00 +0000</pubDate>
      <link>https://dev.to/herjean7/how-to-write-user-stories-better-579a</link>
      <guid>https://dev.to/herjean7/how-to-write-user-stories-better-579a</guid>
      <description>&lt;h2&gt;
  
  
  User Stories
&lt;/h2&gt;

&lt;p&gt;In Scrum, features are usually written in the form of User Stories. &lt;/p&gt;

&lt;p&gt;There are three parts (the 3 Cs) to a User Story: Card, Conversation and Confirmation. However, what's commonly missing is the set of Confirmations that should be appended to each user story.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Card
&lt;/h2&gt;

&lt;p&gt;User stories are typically written in this format:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As a&lt;/strong&gt; [&lt;em&gt;Administrator&lt;/em&gt;], &lt;strong&gt;I want the system to&lt;/strong&gt; [&lt;em&gt;allow me to administer user access&lt;/em&gt;], &lt;strong&gt;so that&lt;/strong&gt; [&lt;em&gt;I can approve and reject user access requests&lt;/em&gt;]. &lt;/p&gt;




&lt;h2&gt;
  
  
  The Confirmations
&lt;/h2&gt;

&lt;p&gt;Confirmations are written during the discussions between the Product Owner and the Dev Team. They are akin to a set of Acceptance Criteria. &lt;/p&gt;

&lt;p&gt;Using the example above, we can create the following confirmations.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An email notification is sent to users holding the "Administrator" role whenever a new user access request is submitted.&lt;/li&gt;
&lt;li&gt;Only users holding the "Administrator" role can approve and reject requests.&lt;/li&gt;
&lt;li&gt;An email notification is sent to approved and rejected users.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With this set of confirmations, developers are clearer on the requirements: there is less ambiguity, and we get closer to our ideal product faster. &lt;/p&gt;

&lt;p&gt;Try it today🥳&lt;/p&gt;

</description>
      <category>scrum</category>
      <category>userstories</category>
      <category>confirmations</category>
    </item>
    <item>
      <title>Securing Windows Server 2019</title>
      <dc:creator>Hejun Wong</dc:creator>
      <pubDate>Tue, 28 Jan 2020 13:31:58 +0000</pubDate>
      <link>https://dev.to/herjean7/securing-windows-server-2019-27po</link>
      <guid>https://dev.to/herjean7/securing-windows-server-2019-27po</guid>
      <description>&lt;p&gt;This is my first blog post. I guess one of the best ways to learn fast, is by writing and sharing. &lt;/p&gt;

&lt;p&gt;Today's post focuses on securing Windows Server 2019. I'm a developer, not a server guy, but in my role I'm exposed to the setup, configuration and patching of servers. To allow us to perform patching and maintenance as and when we like, we need a cluster of servers to achieve High Availability (HA). There are many ways to harden these servers; the method we have chosen is to enforce group policies (GPOs) using the Domain Controller(s). Security policies are set centrally and propagated down to servers within the same domain.&lt;/p&gt;

&lt;p&gt;The Center for Internet Security (CIS) publishes guidelines for securing both Windows and Linux servers. They are broadly categorized into Level 1 (L1) and Level 2 (L2): you can view L1 as baseline security and L2 as in-depth security for environments where security is a must-have. There are thousands of policies that can be applied to help harden the servers. To do this efficiently, it is best to download the CIS-CAT scanner (there is a free "lite" version available!) and run it on your server; the scanner compares your server's configuration against the CIS Benchmarks. &lt;/p&gt;

&lt;p&gt;With the scan results, we can quickly decide which GPOs to apply to reduce the attack surface. However, don't do this blindly, as it may break your app. It is always about striking a prudent balance between functionality and security: don't take unnecessary risks, but a zero-risk approach is a no-go as well.&lt;/p&gt;

&lt;p&gt;CIS has worked with the various cloud providers (AWS, Azure etc.), and users can now spin up pre-configured, CIS-hardened VMs. This saves us a huge amount of time and effort hardening the VMs, allowing us to sleep peacefully at night.&lt;/p&gt;

&lt;p&gt;I'm new to this and would like to learn more about administering group policies efficiently. Are there templates available for download?&lt;/p&gt;

</description>
      <category>security</category>
    </item>
  </channel>
</rss>
