<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Damika-Anupama</title>
    <description>The latest articles on DEV Community by Damika-Anupama (@damikaanupama).</description>
    <link>https://dev.to/damikaanupama</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1254155%2F34ec7e24-7b7a-4a10-9470-393f12d7bf13.jpeg</url>
      <title>DEV Community: Damika-Anupama</title>
      <link>https://dev.to/damikaanupama</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/damikaanupama"/>
    <language>en</language>
    <item>
      <title>Designing Asynchronous APIs with a Pending, Processing, and Done Workflow</title>
      <dc:creator>Damika-Anupama</dc:creator>
      <pubDate>Thu, 12 Mar 2026 19:09:28 +0000</pubDate>
      <link>https://dev.to/damikaanupama/designing-asynchronous-apis-with-a-pending-processing-and-done-workflow-4gpd</link>
      <guid>https://dev.to/damikaanupama/designing-asynchronous-apis-with-a-pending-processing-and-done-workflow-4gpd</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Modern backend systems often integrate OCR, machine learning inference, or heavy data processing jobs that cannot complete within a typical HTTP request lifecycle. When a user sends a request that triggers a long-running operation, keeping the HTTP connection open until processing completes is usually a poor design choice. Long-running synchronous requests can still increase the risk of timeouts, tie up resources, and make failure handling more difficult, even when a platform supports longer request durations.&lt;/p&gt;

&lt;p&gt;Although longer synchronous timeouts are possible in some environments, asynchronous APIs are still valuable as an architectural choice for resilience and better user experience. For example, AWS API Gateway increased its &lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/06/amazon-api-gateway-integration-timeout-limit-29-seconds" rel="noopener noreferrer"&gt;integration timeout limit beyond 29 seconds&lt;/a&gt; in June 2024 for Regional and private REST APIs, though with trade-offs such as possible reductions in account-level throttle quota.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flj5sk7tttc990ch4vu50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flj5sk7tttc990ch4vu50.png" alt="Long Time taking Process" width="703" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Idea
&lt;/h2&gt;

&lt;p&gt;A practical way to handle this problem is to accept the request, create a job record, and return immediately, while a background worker processes the job asynchronously. The client can then periodically check the job status until the work is completed or fails.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bcdpisstoffgh8zoh8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bcdpisstoffgh8zoh8c.png" alt="Core Idea" width="704" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  HTTP Contract
&lt;/h4&gt;

&lt;p&gt;A common HTTP approach for long-running work is to return &lt;a href="https://httpwg.org/specs/rfc9110.html" rel="noopener noreferrer"&gt;&lt;code&gt;202 Accepted&lt;/code&gt;&lt;/a&gt; as soon as the request is accepted, then let the client follow a separate status resource for updates. This matters because a plain HTTP response, once sent, cannot later be reopened to push the final result back to the client.&lt;/p&gt;
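
&lt;p&gt;As a concrete sketch, the initial exchange could look like the following; the &lt;code&gt;/jobs&lt;/code&gt; path, the &lt;code&gt;Location&lt;/code&gt; value, and the body fields are illustrative choices, not a fixed standard:&lt;/p&gt;

```http
POST /jobs HTTP/1.1
Content-Type: application/json

{"document": "invoice-42.pdf"}

HTTP/1.1 202 Accepted
Location: /jobs/12345
Content-Type: application/json

{"jobId": "12345", "status": "PENDING"}
```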

&lt;h4&gt;
  
  
  How Clients Receive Updates
&lt;/h4&gt;

&lt;p&gt;In this article, the client checks job progress by polling a status endpoint. Polling is the simplest option, but it is not the only one: systems can also deliver updates using &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events" rel="noopener noreferrer"&gt;Server-Sent Events&lt;/a&gt; (SSE), &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/index.html" rel="noopener noreferrer"&gt;WebSockets&lt;/a&gt;, or &lt;a href="https://docs.github.com/en/webhooks" rel="noopener noreferrer"&gt;Webhooks&lt;/a&gt;, depending on the use case.&lt;/p&gt;

&lt;h4&gt;
  
  
  A Simple Endpoint Shape
&lt;/h4&gt;

&lt;p&gt;A common way to expose this pattern is to separate &lt;strong&gt;job creation&lt;/strong&gt; from &lt;strong&gt;job tracking&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;POST /jobs&lt;/code&gt; → accepts the request and returns immediately with a &lt;code&gt;jobId&lt;/code&gt; or a monitor URL&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GET /jobs/{id}&lt;/code&gt; → returns the current job state, such as &lt;code&gt;status&lt;/code&gt;, &lt;code&gt;progress&lt;/code&gt;, &lt;code&gt;result&lt;/code&gt;, or &lt;code&gt;error&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This style is commonly used in APIs for long-running jobs, where the initial request stays fast and the client follows a separate status resource until the job reaches a terminal state. See also MDN’s &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Prefer" rel="noopener noreferrer"&gt;&lt;code&gt;Prefer&lt;/code&gt; header&lt;/a&gt; for &lt;code&gt;respond-async&lt;/code&gt;, and this practical guide on &lt;a href="https://restfulapi.net/rest-api-design-for-long-running-tasks/" rel="noopener noreferrer"&gt;REST API design for long-running tasks&lt;/a&gt;.&lt;/p&gt;
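
&lt;p&gt;The two endpoints above can be sketched, framework-free, as a pair of functions plus a background worker. This is a minimal in-memory illustration: the &lt;code&gt;JOBS&lt;/code&gt; dict, the function names, and the fake work inside &lt;code&gt;_worker&lt;/code&gt; are placeholder choices; a real service would persist jobs in a database and run workers in a separate, queue-driven process:&lt;/p&gt;

```python
import threading
import time
import uuid

# In-memory job store; a real system would use a database or a table like DynamoDB.
JOBS = {}

def create_job(payload):
    """POST /jobs: record the job, kick off background work, return immediately."""
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"status": "PENDING", "payload": payload, "result": None}
    # Hand the work to a background worker instead of doing it in the request path.
    threading.Thread(target=_worker, args=(job_id,), daemon=True).start()
    # This is the body a 202 Accepted response would carry.
    return {"jobId": job_id, "monitorUrl": f"/jobs/{job_id}"}

def get_job(job_id):
    """GET /jobs/{id}: return the current job state."""
    job = JOBS[job_id]
    return {"jobId": job_id, "status": job["status"], "result": job["result"]}

def _worker(job_id):
    """Background worker: PENDING -> PROCESSING -> DONE."""
    JOBS[job_id]["status"] = "PROCESSING"
    time.sleep(0.1)  # stand-in for OCR / ML inference / heavy processing
    JOBS[job_id]["result"] = "extracted text"
    JOBS[job_id]["status"] = "DONE"
```

&lt;p&gt;The key property is that &lt;code&gt;create_job&lt;/code&gt; returns before the work finishes; the client learns the outcome only through &lt;code&gt;get_job&lt;/code&gt;.&lt;/p&gt;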

&lt;h2&gt;
  
  
  The Status Lifecycle
&lt;/h2&gt;

&lt;p&gt;Once the request has been accepted and the client has a way to track it, the job lifecycle can be described with a small set of states:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PENDING&lt;/strong&gt;: When the backend accepts the request and creates a job record.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PROCESSING&lt;/strong&gt;: When a separate worker starts processing the long-running job.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DONE&lt;/strong&gt;: When the long-running job completes successfully.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ERROR&lt;/strong&gt;: If the job fails during any stage, the status response should ideally include structured error details such as an error code, a human-readable message, whether the operation is retryable, and the step that failed if that information is known.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedgblfilm2cbi779rpjj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedgblfilm2cbi779rpjj.png" alt="State Machine" width="484" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Formats such as &lt;a href="https://www.rfc-editor.org/rfc/rfc9457" rel="noopener noreferrer"&gt;Problem Details for HTTP APIs (RFC 9457)&lt;/a&gt; are useful for standardizing machine-readable error responses, while the initial &lt;code&gt;202 Accepted&lt;/code&gt; response still only means the work was &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/202" rel="noopener noreferrer"&gt;accepted, not completed&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A similar idea appears in production APIs as well. For example, &lt;a href="https://docs.stripe.com/error-handling" rel="noopener noreferrer"&gt;Stripe’s error handling documentation&lt;/a&gt; shows how structured error objects can include fields such as &lt;code&gt;code&lt;/code&gt;, &lt;code&gt;message&lt;/code&gt;, &lt;code&gt;param&lt;/code&gt;, &lt;code&gt;type&lt;/code&gt;, and links to relevant documentation, making debugging and client-side handling easier.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example Error Payload
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"jobId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"12345"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ERROR"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"code"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"OCR_TIMEOUT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Document processing exceeded the allowed time limit."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"retryable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"failedStep"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"text-extraction"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A job normally moves from &lt;strong&gt;PENDING&lt;/strong&gt; to &lt;strong&gt;PROCESSING&lt;/strong&gt; to &lt;strong&gt;DONE&lt;/strong&gt;, but failures can occur during either &lt;strong&gt;PENDING&lt;/strong&gt; or &lt;strong&gt;PROCESSING&lt;/strong&gt;, transitioning the job into &lt;strong&gt;ERROR&lt;/strong&gt;. Optional states may include &lt;strong&gt;RETRYING&lt;/strong&gt;, &lt;strong&gt;CANCELLED&lt;/strong&gt;, and &lt;strong&gt;PARTIAL_SUCCESS&lt;/strong&gt;.&lt;/p&gt;
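
&lt;p&gt;One way to keep these transitions honest is to encode them as a small table and reject anything else. The sketch below covers only the four core states described above; the names are illustrative, and optional states such as &lt;strong&gt;RETRYING&lt;/strong&gt; would be added the same way:&lt;/p&gt;

```python
# Allowed transitions for the core lifecycle: PENDING -> PROCESSING -> DONE,
# with ERROR reachable from either non-terminal state.
ALLOWED = {
    "PENDING": {"PROCESSING", "ERROR"},
    "PROCESSING": {"DONE", "ERROR"},
    "DONE": set(),   # terminal
    "ERROR": set(),  # terminal
}

def transition(current, new):
    """Reject invalid state changes instead of silently accepting them."""
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new
```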

&lt;p&gt;For better user experience, the status resource can optionally expose a &lt;code&gt;progress&lt;/code&gt; field, such as &lt;code&gt;0–100&lt;/code&gt;, and the client can poll the status endpoint to show updates over time. However, this value is not always exact. In practice, progress may be estimate-based, stage-based, or omitted entirely when the backend cannot measure it reliably. See Google Cloud’s guide to &lt;a href="https://docs.cloud.google.com/storage/docs/using-long-running-operations" rel="noopener noreferrer"&gt;long-running operations&lt;/a&gt;.&lt;/p&gt;
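
&lt;p&gt;A client-side polling loop, then, only needs to repeat the status call until it sees a terminal state. The following sketch assumes a caller-supplied &lt;code&gt;fetch_status&lt;/code&gt; callable standing in for the actual HTTP GET; the interval and timeout values are arbitrary defaults:&lt;/p&gt;

```python
import time

def poll_until_terminal(fetch_status, interval=2.0, timeout=300.0):
    """Poll a job-status resource until the job reaches a terminal state.

    `fetch_status` is any callable returning the parsed status body,
    e.g. a dict like {"status": "PROCESSING", "progress": 40}; in a real
    client it would wrap a GET on the /jobs/{id} resource.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status()
        if job["status"] in ("DONE", "ERROR"):
            return job
        # `progress` may be an estimate, stage-based, or absent entirely,
        # so treat it as advisory display data only.
        time.sleep(interval)
    raise TimeoutError("job did not reach a terminal state in time")
```

&lt;p&gt;A fixed interval is the simplest choice; production clients often grow the interval over time to reduce load on the status endpoint.&lt;/p&gt;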

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia2.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExNXowd2ptMDJrdGM5eGU2YXN3ZDJodzc4b3B3bjZ1YzFjamhzMmJqMyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2FgWPQVRX5DrBBxBLikq%2Fgiphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia2.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExNXowd2ptMDJrdGM5eGU2YXN3ZDJodzc4b3B3bjZ1YzFjamhzMmJqMyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2FgWPQVRX5DrBBxBLikq%2Fgiphy.gif" alt="Loading" width="480" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Approaches
&lt;/h2&gt;

&lt;p&gt;To make this pattern more concrete, let's go through a practical example using Python, AWS, and SAM. You can access the code example &lt;a href="https://github.com/Damika-s-Play-Ground/Asynchronous-Job-Processing-Pattern-Examples/tree/main/Example%201" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Follow the instructions provided in the README, and don't forget to shut down any AWS service you launch during the experiment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9eu8lxoa0l407nicjuoi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9eu8lxoa0l407nicjuoi.png" alt="Code Example Architecture" width="800" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Pattern Works Well
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Responsive UX&lt;/strong&gt;: The API can return quickly while the long-running work continues in the background, so the user is not left waiting with no visibility into what is happening. The request is accepted first and completed later through a separate status endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Retry Capability&lt;/strong&gt;: Because the job state is stored separately from the original request, the system can apply timeouts, retries, and backoff more safely when transient failures occur. In practice, retries should be paired with idempotency and strategies such as &lt;a href="https://aws.amazon.com/builders-library/timeouts-retries-and-backoff-with-jitter/" rel="noopener noreferrer"&gt;timeouts, retries, and backoff with jitter&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fault Isolation&lt;/strong&gt;: The workflow can be split into smaller stages and handled by separate workers, which makes it easier to narrow failures down to a specific step instead of treating the whole process as one opaque unit. This kind of decoupling also prevents one slow or failing stage from directly blocking the initial request-response path.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt;: When each stage is separated, it becomes easier to attach logs, metrics, and traces to each part of the workflow and understand where time is spent or where failures occur. Tools and standards such as &lt;a href="https://opentelemetry.io/docs/concepts/observability-primer/" rel="noopener noreferrer"&gt;OpenTelemetry’s observability primer&lt;/a&gt; help connect those signals into a clearer end-to-end view.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Background workers can often be scaled independently from the API layer, which is useful when the long-running job needs more compute, memory, or concurrency than the initial request handler. For example, AWS documents that &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/scaling-behavior.html" rel="noopener noreferrer"&gt;Lambda functions scale independently with concurrency limits and scaling behavior&lt;/a&gt;, which makes this split especially useful in serverless designs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Real Challenges
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Duplicate Execution&lt;/strong&gt;: In asynchronous systems, retries and queue semantics can cause the same job to be delivered or processed more than once. For that reason, background workers and mutating endpoints should be designed to be idempotent, so repeating the same operation does not produce unintended side effects. With &lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html" rel="noopener noreferrer"&gt;Amazon SQS standard queues&lt;/a&gt;, duplicate delivery is expected as part of &lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues-at-least-once-delivery.html" rel="noopener noreferrer"&gt;at-least-once delivery&lt;/a&gt; behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stuck jobs&lt;/strong&gt;: A job can remain in &lt;code&gt;PENDING&lt;/code&gt; or &lt;code&gt;PROCESSING&lt;/code&gt; longer than expected if a worker crashes, loses connectivity, or never updates its final state. In production, this usually needs timeouts, heartbeats, lease expiry, or a reconciliation process that detects and recovers stalled work. See &lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/03/aws-batch-alerts-detect-jobs-runnable-state/" rel="noopener noreferrer"&gt;AWS Batch alerts for stuck jobs&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Race conditions&lt;/strong&gt;: When multiple workers, retries, or client actions try to update the same job at nearly the same time, the system can end up with lost updates or invalid state transitions. This is usually handled with conditional writes, optimistic locking, or version checks so only valid state changes are accepted. See &lt;a href="https://notes.kodekloud.com/docs/AWS-Certified-Developer-Associate/Databases/DynamoDB-Optimistic-Locking" rel="noopener noreferrer"&gt;DynamoDB optimistic locking&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Retry storms&lt;/strong&gt;: If many clients or workers retry immediately after a failure, they can create a second wave of load that makes recovery even harder. &lt;a href="https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/" rel="noopener noreferrer"&gt;Exponential backoff with jitter&lt;/a&gt; is a standard way to spread retries out and avoid synchronized spikes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Visibility gaps&lt;/strong&gt;: Once work moves into queues, workers, and downstream services, it becomes harder to understand where a job is failing or slowing down. Propagating &lt;a href="https://opentelemetry.io/docs/concepts/context-propagation/" rel="noopener noreferrer"&gt;correlation IDs and tracing context&lt;/a&gt; across components helps connect logs, traces, and metrics into one end-to-end view.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
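
&lt;p&gt;The retry-storm point above is often addressed with "full jitter": instead of every client waiting exactly &lt;code&gt;base * 2^n&lt;/code&gt; before attempt &lt;code&gt;n&lt;/code&gt;, each waits a random duration up to that bound, so failed clients do not retry in lockstep. A minimal sketch (the parameter values are illustrative):&lt;/p&gt;

```python
import random

def backoff_delays(base=0.5, cap=30.0, attempts=5):
    """'Full jitter' backoff: for attempt n, sleep a random time drawn
    uniformly from [0, min(cap, base * 2**n)].

    Randomising the wait spreads retries out and avoids the synchronized
    second wave of load that fixed exponential backoff can produce.
    """
    return [random.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]
```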

&lt;h2&gt;
  
  
  When NOT to Use This Pattern
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simple, fast operations&lt;/strong&gt;: If the job finishes quickly, this pattern can add unnecessary complexity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strong consistency requirements&lt;/strong&gt;: If the caller must know the final committed outcome immediately, asynchronous processing may be the wrong fit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transactional workflows&lt;/strong&gt;: If several steps must succeed or fail together, a simple job-status pattern may not be sufficient.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Lessons From Real Usage
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Async APIs improve UX, but production systems still need timeouts, retries, and cleanup rules.&lt;/li&gt;
&lt;li&gt;A plain &lt;code&gt;ERROR&lt;/code&gt; status is usually not enough; clients need structured error details and retry guidance.&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;DONE&lt;/code&gt; status is often not enough on its own; clients may also need result metadata, timestamps, or follow-up links.&lt;/li&gt;
&lt;li&gt;Progress values are useful, but they are often estimates rather than exact measurements.&lt;/li&gt;
&lt;li&gt;Idempotency matters once retries and duplicate delivery become possible.&lt;/li&gt;
&lt;li&gt;Polling is a good entry point, but not the only update model.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This pattern is a simple but powerful way to design APIs around long-running work. It improves responsiveness and separates request handling from background execution, but it also introduces operational concerns such as retries, idempotency, stuck jobs, and observability. In a follow-up article, I’ll go deeper into implementation details, production hardening, and more concrete code examples.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>productivity</category>
    </item>
    <item>
      <title>My GSoC '25 Experience</title>
      <dc:creator>Damika-Anupama</dc:creator>
      <pubDate>Sun, 22 Feb 2026 12:10:33 +0000</pubDate>
      <link>https://dev.to/damikaanupama/my-gsoc-25-experience-20n0</link>
      <guid>https://dev.to/damikaanupama/my-gsoc-25-experience-20n0</guid>
      <description>&lt;h2&gt;
  
  
  What's GSoC
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://summerofcode.withgoogle.com" rel="noopener noreferrer"&gt;Google Summer of Code&lt;/a&gt; (gsoc) is one of open source code programs in the world. Every year summer, Google conducts this program to link open source organisations with new open source contributors. You can find gsoc organisations &lt;a href="https://www.gsocorganizations.dev" rel="noopener noreferrer"&gt;from here&lt;/a&gt;. Google also selects mentors per each organisation, mentors guide and evaluate contributors through out the summer to contribute organisations. In the beginning of the program, organisations define what are the projects they need to get completed or improved during summer, from the contributors, and mentors are allocated per each project. If you check &lt;a href="https://developers.google.com/open-source/gsoc/timeline" rel="noopener noreferrer"&gt;gsoc timeline&lt;/a&gt;, you can get an idea how this normally works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdkxns0yhc1f5fmem5rx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdkxns0yhc1f5fmem5rx.png" alt=" " width="400" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I'm Writing This Article
&lt;/h2&gt;

&lt;p&gt;I want to share my 2025 GSoC experience with you, along with the advantages of applying to GSoC and working with open source organisations. Please note that this is not a success story or a how-to guide.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who This Article Is For
&lt;/h2&gt;

&lt;p&gt;This article is most useful for people who are eligible to contribute to GSoC. Please check the &lt;a href="https://summerofcode.withgoogle.com/rules" rel="noopener noreferrer"&gt;eligibility criteria for contributors&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;be eighteen (18) years of age or older upon registration for the Program;&lt;/li&gt;
&lt;li&gt;for the duration of the Program, be eligible to work in the country in which they reside;&lt;/li&gt;
&lt;li&gt;be a student or a beginner to open source software development.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're an undergraduate or a beginner to open source software development with a strong interest in programming, you can participate in GSoC this year to grow your knowledge and your profile. Here are some articles on why GSoC is worth applying for: &lt;a href="https://opensource.googleblog.com/2025/12/shape-future-with-google-summer-of-code.html" rel="noopener noreferrer"&gt;[1]&lt;/a&gt; &lt;a href="https://google.github.io/gsocguides/student/why-should-i-apply" rel="noopener noreferrer"&gt;[2]&lt;/a&gt; &lt;a href="https://developers.google.com/open-source/gsoc/faq" rel="noopener noreferrer"&gt;[3]&lt;/a&gt; &lt;a href="https://www.quora.com/Why-should-I-apply-for-Google-summer-of-code" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You enter GSoC as a contributor. First, select an organisation and one of the projects it has listed. Then write a project proposal for that project and submit it on the GSoC portal between the "GSoC contributor application period begins" and "GSoC contributor application deadline" dates; please check the &lt;a href="https://developers.google.com/open-source/gsoc/timeline" rel="noopener noreferrer"&gt;timeline&lt;/a&gt;. This window is normally about two weeks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I wanted to do GSoC
&lt;/h2&gt;

&lt;p&gt;During my university program, I learnt about GSoC from our seniors and lecturers, along with the advantages new programmers like us can gain from contributing to open source code. When I researched GSoC, I found that famous open source organisations such as Postgres, GitLab, Debian, and Python participate every year, which means contributors get to work on those codebases and build real experience. Isn't that great? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z4o84y3hlq8973vznry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z4o84y3hlq8973vznry.png" alt=" " width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Furthermore, contributors get to work with and be mentored by highly experienced open source programmers from all over the world. Beyond coding experience, new programmers like us can improve our engineering discipline and soft skills too.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Application History
&lt;/h2&gt;

&lt;p&gt;I applied to GSoC and was rejected in two previous years. When I reviewed what went wrong, I identified the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There is huge competition, both among new contributors and from returning applicants trying to get selected again.&lt;/li&gt;
&lt;li&gt;Because of that competition, a single project may attract multiple applicants, so a proposal needs to be detailed and follow the rules the organisation has published.&lt;/li&gt;
&lt;li&gt;Some mentors and organisations expect contributors to solve issues in their GitHub repository and submit PRs to the codebase to demonstrate their ability.&lt;/li&gt;
&lt;li&gt;It helps to introduce yourself to the organisation and communicate with its administrators and mentors about the projects.&lt;/li&gt;
&lt;li&gt;Participate in the online meetings GSoC runs during the contributor application period.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I understood that getting selected is not easy, and that applicants need to be competitive. You have to show the organisation why it should choose your proposal over those of the other applicants for the same project.&lt;/p&gt;

&lt;h2&gt;
  
  
  How competitive GSoC actually is
&lt;/h2&gt;

&lt;p&gt;Although the GSoC acceptance rate is often quoted as around 20%, it has been falling year by year due to high competition. Here are the statistics for the GSoC 2025 program: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPw8W_pCEbDZA2VTGKTiOyBOiSIZWAMmLBw1OP2D8Wc-hafFH8-HNfTG5RltghYCW-bxYcd4R6JTCMS9bp_UtP9b5-Zc2TN-E4l26wZEzdhQS2qwa-2hrs3hhHV6FYgLZ0u4uRJwLs5Z57a_PPL7Dejm67G8z21sDjdYI15vNEmoOQAi5dVW1UHbLrz3w/s1600/GSOC%202025%20Infographic.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy01j529nh6chewcse4qk.png" alt="Google Summer of Code 2025 Program Statistics" width="691" height="1079"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Based on the official &lt;a href="https://opensource.googleblog.com/2025/08/google-summer-of-code-2025-contributor-statistics.html#:~:text=Registrations,about%20GSoC%202025%2C%20stay%20tuned!" rel="noopener noreferrer"&gt;Google Open Source Blog announcements&lt;/a&gt;, here are the Google Summer of Code (GSoC) 2025 statistics: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Total Applicants (&lt;strong&gt;Submitting Proposals&lt;/strong&gt;): &lt;strong&gt;15,240&lt;/strong&gt; individuals from 130 countries submitted a total of 23,559 proposals.&lt;/li&gt;
&lt;li&gt;Total Registrations: A record 98,698 people from 172 countries registered for the 2025 program.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accepted Contributors&lt;/strong&gt;: &lt;strong&gt;1,280&lt;/strong&gt; contributors were accepted into the program for 2025.&lt;/li&gt;
&lt;li&gt;Program Completion: While the coding period began on June 2, 2025, and concludes later in the year, the preliminary data indicates that &lt;strong&gt;1,261&lt;/strong&gt; projects were completed by 185 mentoring organizations in 2025. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key 2025 Statistics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Acceptance Rate: The acceptance rate for applicants (1,280 accepted out of 15,240 applicants) was approximately 8.4%, making it one of the most competitive years in GSoC history.&lt;/li&gt;
&lt;li&gt;Demographics: 92.32% of contributors are participating in their first GSoC.&lt;/li&gt;
&lt;li&gt;Prior Experience: 66.3% of applicants had no prior open-source experience.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What organisations actually look for
&lt;/h2&gt;

&lt;p&gt;Organisations and mentors appreciate contributors who familiarise themselves with their chosen project and its codebase. Most projects have well-written documentation or wikis. Go through as much of the documentation as you can, because your project proposal reflects your understanding of the codebase. GSoC is not only about your coding ability; it is a blend of understanding the codebase, communication, following instructions, and initiative such as case studies, PRs, and discussions.&lt;/p&gt;

&lt;h2&gt;
  
  
  My organization: Checker Framework
&lt;/h2&gt;

&lt;p&gt;During GSoC 2025 I was selected by the &lt;a href="https://www.gsocorganizations.dev/organization/checker-framework" rel="noopener noreferrer"&gt;Checker Framework&lt;/a&gt;. It is a compile-time tool that enhances Java development through pluggable type-checking, preventing bugs such as null pointer dereferences and concurrency errors that Java's built-in type system does not catch. It serves as a robust bug-finding and verification tool, guaranteeing that specific classes of errors are absent from a program, and it is user-friendly and fits into existing development practices. I chose the Checker Framework for the 2025 summer because I am familiar with Java and was curious to work with &lt;a href="https://docs.oracle.com/javase/tutorial/java/annotations/" rel="noopener noreferrer"&gt;Java annotations&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgtfa5fxbynihfw26aez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgtfa5fxbynihfw26aez.png" alt=" " width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As with other open source organisations, applying to this one was competitive too; I was told I was selected from dozens of applications! Like many organisations, they provide a list of &lt;a href="https://checkerframework.org/manual/new-contributor-projects.html" rel="noopener noreferrer"&gt;projects&lt;/a&gt; for new contributors and &lt;a href="https://raw.githubusercontent.com/typetools/checker-framework/master/docs/developer/gsoc-ideas.html" rel="noopener noreferrer"&gt;guidelines for GSoC contributors&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Contribution before acceptance
&lt;/h2&gt;

&lt;p&gt;During the application period, they asked me to do a case study: apply their annotation tools to one of my previous Java projects or to a smaller openly available Java project. This case study was the critical reason I was selected; it let them gauge whether I understood their project. &lt;a href="https://github.com/Damika-Anupama/Email-Client" rel="noopener noreferrer"&gt;Here's the link&lt;/a&gt; to my case study; compare the main branch with the other branches. I also had to go through the &lt;a href="https://checkerframework.org/manual/" rel="noopener noreferrer"&gt;Checker Framework documentation&lt;/a&gt; and the &lt;a href="https://checkerframework.org/manual/developer-manual.html" rel="noopener noreferrer"&gt;developer manual&lt;/a&gt; to understand the functionality of their annotations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Communication with mentors and community
&lt;/h2&gt;

&lt;p&gt;The communication medium differs from organisation to organisation: Slack, Google Groups, Discord, GitHub discussions and issue threads, and other chat platforms. Mentors and organisation administrators always have organisation-related email addresses, but be careful: some of them don't like applicants emailing them privately and will ask you to join the organisation's main communication channel, introduce yourself, and discuss the projects there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia3.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExOTluejhjMXMxdXIyOHhoNThnMDFuZTE2M2E1aXRoZDIyMTNuZWtxaSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2Fhr4Ljjyj0L9RYlihLr%2Fgiphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia3.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExOTluejhjMXMxdXIyOHhoNThnMDFuZTE2M2E1aXRoZDIyMTNuZWtxaSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2Fhr4Ljjyj0L9RYlihLr%2Fgiphy.gif" alt="Welcome to the team" width="478" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Communication is one of the most important skills you need during GSoC. You always need to be concise and comprehensive about your work. &lt;a href="https://maddevs.io/customer-university/importance-of-documentation/" rel="noopener noreferrer"&gt;&lt;strong&gt;Documenting your work is another form of communication&lt;/strong&gt;&lt;/a&gt;. After getting selected to an organization, you will definitely have meetings with your mentors. In my case, I had two meetings per week, where I had to explain my work, blockers, suggestions, problems, and so on. Before each meeting I emailed my mentors the agenda, and afterwards I sent the meeting summary and the to-do list for the next meeting in the same email thread.&lt;/p&gt;

&lt;p&gt;You may face challenges such as time zone differences, unclear feedback cycles, and so on. It's better and more professional to clarify these with your mentors through the agreed communication medium, because most of the time they are very busy with their own schedules. This is why communication is so important. To improve contact with your mentor during GSoC, you can refer to these links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://google.github.io/gsocguides/student/working-with-your-mentor" rel="noopener noreferrer"&gt;Working With Your Mentor&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://google.github.io/gsocguides/student/communication-best-practices" rel="noopener noreferrer"&gt;Communication Best Practices&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia0.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExdDhwbzk0ZWRpY2t0Zmpoa2ZqcTl3cXV2bWswcm9ybnMwbHk4NHNyMCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2Fczsyg3h7B3MMWiX7qW%2Fgiphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia0.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExdDhwbzk0ZWRpY2t0Zmpoa2ZqcTl3cXV2bWswcm9ybnMwbHk4NHNyMCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2Fczsyg3h7B3MMWiX7qW%2Fgiphy.gif" alt="Description" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What changed after getting accepted
&lt;/h2&gt;

&lt;p&gt;I was selected for a 350-hour "hard" project (projects are &lt;a href="https://google.github.io/gsocguides/mentor/defining-a-project-ideas-list" rel="noopener noreferrer"&gt;categorised by their scope&lt;/a&gt;). My project's overall plan and milestones were not yet defined. Since I had two meetings per week with my mentors, considerable changes were made to the plan for how we were going to implement my project, so in reality I had to work on the implementation while also defining the project's milestones with my mentors. Furthermore, the responsibility grew week by week: I had to update the documentation, follow their &lt;a href="https://homes.cs.washington.edu/~mernst/advice/github-pull-request.html#logical-units-after-the-fact" rel="noopener noreferrer"&gt;guidelines&lt;/a&gt; before making pull requests against their main branch, manage my time, and so on.&lt;/p&gt;

&lt;p&gt;Every organisation has its own code quality standards. This matters for open source developers: even when you move to another organisation, you need to follow your current organisation's coding standards, otherwise it's hard for them to keep the code quality consistent for future development. I went through my organisation's documentation multiple times to become aware of their quality standards, and this was really helpful during coding and in meetings with my mentors.&lt;/p&gt;

&lt;p&gt;My code was reviewed at multiple levels. First, the Checker Framework has an integrated &lt;a href="https://azure.microsoft.com/en-us/products/devops/pipelines" rel="noopener noreferrer"&gt;Azure pipeline&lt;/a&gt; that automatically runs my implementation changes against 20 different open source codebases (in most cases, legacy OpenJDK versions); if any check fails, I must go to the Azure pipeline dashboard to find and resolve the issue. Once all automated tests pass, two or three mentors review my modifications, and I must address their code reviews. Only when all of these steps complete successfully are the changes merged into the main repository. Another significant consideration is that each pull request/branch should serve only one purpose: if a change involved both implementing a new Java annotation and updating the documentation of another related annotation, I had to submit two separate pull requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I learned (technical + non-technical)
&lt;/h2&gt;

&lt;p&gt;GSoC '25 was my first time working on a large open source codebase. &lt;a href="https://www.reddit.com/r/ExperiencedDevs/comments/16gxkft/how_to_quickly_understand_large_codebases/" rel="noopener noreferrer"&gt;Reading a large codebase&lt;/a&gt; and locating the required code segment in the correct file and directory is a skill every developer needs. I improved it by running and debugging code, reading terminal output and logs, checking documentation, and reworking the relevant specific cases. New Checker Framework contributors will need to continue my project after me, so I was careful to write maintainable code.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia4.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExMzRrY2FrZGVlMXNxZXVzMGN2N2FmazkxczRzOHA4NTZmN3lxeXA4eSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2FVcdbi5o470i9FACaZO%2Fgiphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia4.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExMzRrY2FrZGVlMXNxZXVzMGN2N2FmazkxczRzOHA4NTZmN3lxeXA4eSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2FVcdbi5o470i9FACaZO%2Fgiphy.gif" alt="Reading large codebases" width="500" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During GSoC, it's essential to interact with mentors, who are experienced professionals, with patience and professionalism. I lacked some of that professionalism initially, but over time I learned to pay attention to my mentors' advice. Effective communication is key, and when raising issues, specificity is crucial. Rather than expressing general confusion about a module, first go through the documentation, experiment with the module, and document the results and errors. If confusion remains, a detailed email to your mentor outlining the steps you took will make it much easier for them to understand and guide you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I would do differently
&lt;/h2&gt;

&lt;p&gt;In retrospect, I often overestimated my ability to manage the various aspects of my project while underestimating the time commitment each one needed. If I had another opportunity to apply for GSoC, I would allocate more time to implementing the crucial components of my work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advice to future applicants
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Be realistic about your project's scope, tech stack, and time period&lt;/li&gt;
&lt;li&gt;Manage the time you spend on the project so that it does not interfere with your academic work or other personal activities.&lt;/li&gt;
&lt;li&gt;Always try to learn from every mistake you make &lt;/li&gt;
&lt;li&gt;Try not to make your mentor angry :)&lt;/li&gt;
&lt;li&gt;Document well and read documents&lt;/li&gt;
&lt;li&gt;Take a break when you feel burnt out!
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia2.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExNW5odXlhYXdyM2d1ZXpvNDFjNmFlMG4wdHc2dTR2MTQzbmRhZWN3dCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2F3n65nUEEvQ7c7iOjt4%2Fgiphy.gif" alt="Keep calm and code" width="320" height="480"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;GSoC is a great experience and a really good challenge every developer should experience. This time it's your turn, so take it! &lt;/p&gt;

</description>
      <category>programming</category>
      <category>opensource</category>
      <category>coding</category>
      <category>learning</category>
    </item>
    <item>
      <title>Understanding Declaration Merging in TypeScript</title>
      <dc:creator>Damika-Anupama</dc:creator>
      <pubDate>Tue, 21 Jan 2025 15:03:22 +0000</pubDate>
      <link>https://dev.to/damikaanupama/understanding-declaration-merging-in-typescript-3c55</link>
      <guid>https://dev.to/damikaanupama/understanding-declaration-merging-in-typescript-3c55</guid>
      <description>&lt;p&gt;Typescript compiler merges two declarations with the same name into a &lt;strong&gt;single definition&lt;/strong&gt; while keeping their characteristics, and it may merge any number of declarations. This declaration generates entities from at least one of three categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Namespace-creating declaration&lt;/strong&gt;s create a namespace, which contains names that are accessed using a dotted notation. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Type-creating declaration&lt;/strong&gt;s do just that: they create a type that is visible with the declared shape and bound to the given name. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Value-creating declaration&lt;/strong&gt;s create values that are visible in the output JavaScript.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Declaration Type&lt;/th&gt;
&lt;th&gt;Namespace&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Namespace&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;  &lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Class&lt;/td&gt;
&lt;td&gt;  &lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enum&lt;/td&gt;
&lt;td&gt;  &lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Interface&lt;/td&gt;
&lt;td&gt;  &lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt; &lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Type Alias&lt;/td&gt;
&lt;td&gt;  &lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt; &lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Function&lt;/td&gt;
&lt;td&gt;  &lt;/td&gt;
&lt;td&gt; &lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Variable&lt;/td&gt;
&lt;td&gt;  &lt;/td&gt;
&lt;td&gt; &lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Why is Declaration Merging Important?
&lt;/h2&gt;

&lt;p&gt;Understanding declaration merging is crucial for several reasons. Firstly, it gives developers a significant advantage when working with existing JavaScript by allowing for more advanced abstraction concepts. Secondly, it enables the enhancement of library functionality, module augmentation, and global augmentation without altering the original source code. This not only streamlines the development process but also simplifies the maintenance of codebases by keeping modifications and extensions organized and consistent.&lt;/p&gt;

&lt;p&gt;Through declaration merging, TypeScript developers can effectively extend existing types in a type-safe manner, introduce new functionality to existing libraries, and more seamlessly integrate third-party libraries into their projects. Whether you are dealing with interfaces, namespaces, or modules, mastering declaration merging opens up a world of possibilities in software development.&lt;/p&gt;

&lt;p&gt;In the following sections, we will delve deeper into the basics of declaration merging, explore practical examples, and unveil best practices to harness this powerful feature to its full potential. Stay tuned as we unlock the advanced capabilities of TypeScript, making complex concepts more accessible and enhancing your coding proficiency.&lt;/p&gt;




&lt;h2&gt;
  
  
  Understanding the Basics of Declaration Merging
&lt;/h2&gt;

&lt;p&gt;At the core of TypeScript's functionality is the compiler's ability to merge declarations. This capability is pivotal for leveraging TypeScript's full potential, allowing developers to define entities across three main groups: namespaces, types, and values. Each type of declaration interacts uniquely within the TypeScript environment, playing a critical role in the structure and behavior of your code.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Three Pillars of TypeScript Declarations
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Namespace-Creating Declarations: These declarations introduce a namespace, which is essentially a named container for a set of identifiers or names. Namespaces are accessed using dotted notation and are fundamental for organizing code and preventing name collisions in larger applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type-Creating Declarations: As the name suggests, these declarations create types. TypeScript is known for its robust typing system, and type-creating declarations are at the heart of this system, defining the shape and behavior of the data structures used throughout your code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Value-Creating Declarations: These declarations are responsible for creating values that are visible in the output JavaScript. Functions and variables are typical examples of value-creating declarations, forming the executable part of your code.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Understanding the distinctions and interactions between these declarations is essential for mastering declaration merging. Let's illustrate these concepts with a table:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgta6ob2x7ejc7vdc978s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgta6ob2x7ejc7vdc978s.png" alt="Image description" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This table clarifies how different declarations contribute to the structure of TypeScript code, laying the foundation for understanding the intricacies of declaration merging.&lt;/p&gt;
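&lt;p&gt;To make the table concrete, here is a minimal sketch (the &lt;code&gt;Point&lt;/code&gt; class is a hypothetical name used only for illustration) of how a single class declaration creates both a type and a value:&lt;/p&gt;

```typescript
class Point {
  constructor(public x: number, public y: number) {}
}

// As a value: the constructor function exists in the emitted JavaScript.
const origin = new Point(0, 0);

// As a type: the name Point also describes the shape of its instances.
function describe(p: Point): string {
  return p.x + "," + p.y;
}
```

An interface, by contrast, creates only the type, so it leaves no trace in the output JavaScript.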

&lt;h3&gt;
  
  
  The Significance of Merging Types
&lt;/h3&gt;

&lt;p&gt;In TypeScript, the merging of declarations unfolds a new dimension of coding flexibility and abstraction. By merging interfaces or namespaces, developers can incrementally build up existing types or functionalities without overwriting or duplicating code. This not only promotes DRY (Don't Repeat Yourself) principles but also enhances code readability and maintainability.&lt;/p&gt;




&lt;h2&gt;
  
  
  Merging Interfaces: The Foundation of Declaration Merging
&lt;/h2&gt;

&lt;p&gt;In TypeScript, interfaces are used to define the shape of an object or function, specifying the expected properties, types, and methods that an entity should have. When two interfaces of the same name are defined, TypeScript doesn't throw an error or ignore one of them; instead, it merges their definitions into a single interface. This merged interface then contains all the members of the original interfaces, effectively combining their specifications.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Interface Merging Works
&lt;/h3&gt;

&lt;p&gt;Consider the following example to understand the mechanics of interface merging:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;interface Box {
  height: number;
  width: number;
}

interface Box {
  scale: number;
}

let box: Box = { height: 5, width: 6, scale: 10 };

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this scenario, TypeScript merges the two Box interfaces into one, allowing the box variable to include properties defined in both interface declarations (&lt;code&gt;height&lt;/code&gt;, &lt;code&gt;width&lt;/code&gt;, and &lt;code&gt;scale&lt;/code&gt;). This behavior showcases the seamless integration of separate type definitions, enhancing the modularity and scalability of your code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Managing Member Conflicts in Merging
&lt;/h3&gt;

&lt;p&gt;When merging interfaces, TypeScript enforces certain rules to ensure type safety:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Non-function members (properties) must be unique or have the same type if declared more than once. If a conflict arises (i.e., the same property is declared with different types), TypeScript will issue an error.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Function members are treated as overloads. This means that if multiple interfaces declare a function with the same name, TypeScript merges them into a single function with multiple overload signatures. The order of these signatures follows a specific precedence, with later declarations having higher priority.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
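&lt;p&gt;A short sketch of the non-function rule (the &lt;code&gt;Config&lt;/code&gt; interface is a made-up name for illustration): repeating a property across declarations is fine as long as the types agree, while a conflicting re-declaration is rejected at compile time:&lt;/p&gt;

```typescript
interface Config {
  verbose: boolean;
}

interface Config {
  retries: number;
  verbose: boolean; // ok: same name, same type as the first declaration
}

// A third declaration with `verbose: string` would not compile:
// "Subsequent property declarations must have the same type."

// The merged Config requires members from both declarations.
const cfg: Config = { verbose: true, retries: 3 };
```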

&lt;p&gt;Consider the &lt;code&gt;Cloner&lt;/code&gt; interface example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;interface Cloner {
  clone(animal: Animal): Animal;
}

interface Cloner {
  clone(animal: Sheep): Sheep;
}

interface Cloner {
  clone(animal: Dog): Dog;
  clone(animal: Cat): Cat;
}

// Merged interface Cloner
interface Cloner {
  clone(animal: Dog): Dog;
  clone(animal: Cat): Cat;
  clone(animal: Sheep): Sheep;
  clone(animal: Animal): Animal;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The merged &lt;code&gt;Cloner&lt;/code&gt; interface illustrates how TypeScript organizes overload signatures, ensuring the most specific types appear first in the merged definition.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advantages of Interface Merging
&lt;/h3&gt;

&lt;p&gt;Merging interfaces offers several advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: It allows for the incremental definition or extension of interfaces across different parts of a program or in different files, contributing to a more flexible codebase.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extensibility&lt;/strong&gt;: Libraries and frameworks can extend types defined by their users without requiring modifications to the original interface declarations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintainability&lt;/strong&gt;: By organizing related properties and methods under a single named entity, merged interfaces enhance code readability and maintainability.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Advanced Declaration Merging Scenarios: Merging Namespaces
&lt;/h2&gt;

&lt;p&gt;Namespaces in TypeScript are used for organizing code into named groups, thereby avoiding naming collisions in larger applications. Similar to interfaces, when two or more namespaces with the same name are declared, TypeScript merges their contents into a single namespace. This feature is particularly useful for modularizing code and extending existing namespaces with additional functionality.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Namespace Merging Works
&lt;/h3&gt;

&lt;p&gt;Namespace merging combines the members of each namespace declaration into a single namespace. This merged namespace contains all exported members from each of the original namespaces. Let’s consider an example to illustrate this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace Animals {
  export class Zebra { }
}

namespace Animals {
  export interface Legged { numberOfLegs: number; }
  export class Dog { }
}

// Resulting merged namespace Animals
namespace Animals {
  export class Zebra { }
  export interface Legged { numberOfLegs: number; }
  export class Dog { }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, both &lt;code&gt;Animals&lt;/code&gt; namespace declarations are merged into one, encompassing the &lt;code&gt;Zebra&lt;/code&gt; class, the &lt;code&gt;Legged&lt;/code&gt; interface, and the &lt;code&gt;Dog&lt;/code&gt; class. This merging process facilitates a cohesive and organized structure for grouping related entities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Merging Namespaces with Classes, Functions, and Enums
&lt;/h3&gt;

&lt;p&gt;TypeScript's declaration merging extends beyond interfaces and namespaces to include classes, functions, and enums. This versatile feature allows for a range of flexible design patterns:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Namespaces with Classes&lt;/strong&gt;: You can use namespaces to add static members to classes or to define inner classes. This pattern is useful for creating classes within classes, offering a neat organizational structure.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Album {
  label: Album.AlbumLabel;
}

namespace Album {
  export class AlbumLabel { }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Namespaces with Functions&lt;/strong&gt;: Functions can be extended with additional properties through namespaces, allowing for a functional programming style combined with the structured organization of object-oriented programming.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function buildLabel(name: string): string {
  return buildLabel.prefix + name + buildLabel.suffix;
}

namespace buildLabel {
  export let suffix = "";
  export let prefix = "Hello, ";
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Namespaces with Enums&lt;/strong&gt;: Enums can be extended with static members using namespaces, enhancing the functionality of enum types.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;enum Color {
  red = 1,
  green = 2,
  blue = 4,
}

namespace Color {
  export function mixColor(colorName: string): number {
    // Illustrative body: combine the primary flag values
    if (colorName === "yellow") {
      return Color.red + Color.green;
    }
    return Color.red + Color.green + Color.blue; // "white" and anything else
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Considerations and Best Practices
&lt;/h3&gt;

&lt;p&gt;When leveraging declaration merging, particularly with namespaces, it’s important to maintain clear and consistent documentation to ensure that the merged structure is understandable and maintainable. Additionally, be mindful of the visibility and accessibility of members, especially when dealing with private or non-exported members across merged declarations.&lt;/p&gt;
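&lt;p&gt;The point about non-exported members deserves a sketch (the &lt;code&gt;Animal&lt;/code&gt; namespace below is an illustrative example, not part of the earlier sections): a non-exported member is visible only inside the namespace declaration that defines it, so other merged declarations must go through an exported accessor:&lt;/p&gt;

```typescript
namespace Animal {
  let haveMuscles = true; // not exported: private to this declaration block

  export function animalsHaveMuscles(): boolean {
    return haveMuscles; // fine: same declaration block
  }
}

namespace Animal {
  export function doAnimalsHaveMuscles(): boolean {
    // return haveMuscles;       // compile error: not visible in this block
    return animalsHaveMuscles(); // exported members of the merged namespace are visible
  }
}
```

Calling `Animal.doAnimalsHaveMuscles()` works because it reaches the hidden variable through the exported function rather than directly.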

&lt;p&gt;Namespace merging in TypeScript offers a powerful mechanism for organizing and extending code, providing developers with the flexibility to structure applications in a modular and extensible manner.&lt;/p&gt;

&lt;p&gt;In the following sections, we will delve into practical examples of declaration merging, showcasing its application in real-world scenarios. Stay tuned for an in-depth exploration of how to leverage declaration merging to enhance your TypeScript projects.&lt;/p&gt;




&lt;h2&gt;
  
  
  Practical Examples of Declaration Merging
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Merging Interfaces for Extensible Models
&lt;/h3&gt;

&lt;p&gt;One of the most straightforward uses of declaration merging is to extend existing interfaces, allowing for incremental enhancements and compatibility with evolving codebases. Consider a scenario where you're building a library for UI components, and you need to extend an interface to include new properties without breaking existing implementations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Initial interface in the library
interface ButtonProps {
  label: string;
  onClick: () =&amp;gt; void;
}

// Extension in a consumer's code
interface ButtonProps {
  color?: string;
}

// The resulting merged interface includes both sets of properties
function createButton(props: ButtonProps) {
  // Implementation that uses label, onClick, and optionally color
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example demonstrates how declaration merging facilitates the evolution of API interfaces in a backward-compatible manner.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhancing Functionality with Merged Namespaces
&lt;/h3&gt;

&lt;p&gt;Namespaces can be merged with functions to augment the functions with additional properties or metadata, enabling a pattern often used in JavaScript libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function networkRequest(url: string): any {
  // Implementation omitted for brevity
}

// Merging a namespace with the function to add properties
namespace networkRequest {
  export let timeout = 3000; // Default timeout for requests
}

// Usage
networkRequest.timeout = 5000; // Adjusting the default timeout
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern provides a flexible way to associate configurations or metadata with functions, enhancing their functionality without altering their core implementation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Combining Enums and Namespaces for Richer Structures
&lt;/h3&gt;

&lt;p&gt;Enums in TypeScript are a powerful way to define a set of named constants. By merging enums with namespaces, you can add static methods or properties to enums, enriching their functionality:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;enum StatusCode {
  Success = 200,
  NotFound = 404,
  ServerError = 500,
}

namespace StatusCode {
  export function getMessage(code: StatusCode): string {
    switch (code) {
      case StatusCode.Success:
        return "Request succeeded";
      case StatusCode.NotFound:
        return "Resource not found";
      case StatusCode.ServerError:
        return "Internal server error";
      default:
        return "Unknown status code";
    }
  }
}

// Usage
console.log(StatusCode.getMessage(StatusCode.NotFound)); // "Resource not found"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach allows enums to serve not only as simple constants but also as namespaces for related functions or data, providing a more structured and intuitive way to manage related sets of values and behaviors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Best Practices in Using Declaration Merging
&lt;/h3&gt;

&lt;p&gt;When utilizing declaration merging, keep the following best practices in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Document Merged Declarations&lt;/strong&gt;: Ensure that merged interfaces, namespaces, and other entities are well-documented to maintain clarity and ease of use for other developers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Avoid Overuse&lt;/strong&gt;: While declaration merging offers great flexibility, overuse can lead to complicated code structures that are difficult to understand and maintain. Use it judiciously.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ensure Compatibility&lt;/strong&gt;: When extending libraries or third-party code, ensure that your extensions do not break existing functionality or expected behaviors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Declaration merging in TypeScript opens up a multitude of possibilities for enhancing and structuring your code. By understanding and applying this powerful feature wisely, you can create more flexible, extensible, and maintainable applications.&lt;/p&gt;




&lt;h2&gt;
  
  
  Module Augmentation: Extending Existing Modules
&lt;/h2&gt;

&lt;p&gt;Module augmentation is a powerful feature in TypeScript that leverages the concept of declaration merging to enhance or modify modules. This is particularly useful when working with third-party libraries or modules, as it allows you to tailor them to your specific needs without waiting for the library maintainers to make changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Module Augmentation Works
&lt;/h3&gt;

&lt;p&gt;To augment a module, you first import it, then declare additional properties, methods, or even interfaces within the same module scope. TypeScript automatically merges these declarations with the original module's declarations.&lt;/p&gt;

&lt;p&gt;Consider a scenario where you're using a library that defines an &lt;code&gt;Observable&lt;/code&gt; class, but you want to add a &lt;code&gt;map&lt;/code&gt; function to its prototype:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Original module in observable.ts
export class Observable&amp;lt;T&amp;gt; {
  // Original class implementation
}

// Augmentation in your own code
import { Observable } from './observable';

declare module './observable' {
  interface Observable&amp;lt;T&amp;gt; {
    map&amp;lt;U&amp;gt;(f: (x: T) =&amp;gt; U): Observable&amp;lt;U&amp;gt;;
  }
}

Observable.prototype.map = function &amp;lt;T, U&amp;gt;(this: Observable&amp;lt;T&amp;gt;, f: (x: T) =&amp;gt; U): Observable&amp;lt;U&amp;gt; {
  // Simplified stub; a real implementation would apply f to each emitted value
  return new Observable&amp;lt;U&amp;gt;();
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example illustrates how module augmentation allows for seamless extensions of existing modules, enriching their capabilities without modifying the original source.&lt;/p&gt;
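&lt;p&gt;The same merging mechanism can be tried out in a single file, which makes it easy to experiment with before augmenting a real module. Here is a minimal runnable sketch (the &lt;code&gt;Box&lt;/code&gt; class and its &lt;code&gt;map&lt;/code&gt; method are hypothetical names used only for illustration):&lt;/p&gt;

```typescript
// Declaration merging in one file: the interface below merges with the
// class of the same name, adding `map` to Box's instance type.
class Box<T> {
  constructor(public value: T) {}
}

interface Box<T> {
  map<U>(f: (x: T) => U): Box<U>;
}

// Provide the runtime implementation that the merged declaration promises.
Box.prototype.map = function <T, U>(this: Box<T>, f: (x: T) => U): Box<U> {
  return new Box(f(this.value));
};

const doubled = new Box(21).map((x) => x * 2);
console.log(doubled.value); // 42
```

&lt;p&gt;Module augmentation applies this exact pattern across file boundaries via &lt;code&gt;declare module&lt;/code&gt;.&lt;/p&gt;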

&lt;h3&gt;
  
  
  Practical Applications of Module Augmentation
&lt;/h3&gt;

&lt;p&gt;Module augmentation can be employed in various scenarios, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Adding new functionalities to third-party libraries&lt;/strong&gt;: Enhance libraries by introducing new methods or properties that fit your specific requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plugin or theme development&lt;/strong&gt;: Develop plugins or themes that extend core functionalities of frameworks or libraries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Typing external libraries&lt;/strong&gt;: Improve or correct the types of external libraries for better type checking and developer experience.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Best Practices and Considerations
&lt;/h3&gt;

&lt;p&gt;While module augmentation offers significant flexibility, it's essential to use it judiciously:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Maintainability&lt;/strong&gt;: Ensure that augmented modules remain maintainable and that the extensions are well-documented.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compatibility&lt;/strong&gt;: Regularly check for updates to the original module to ensure that your augmentations do not conflict with new versions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scope&lt;/strong&gt;: Use module augmentation primarily for extending functionalities or fixing types. Avoid overusing it to the point where the original module's purpose or behavior becomes obscured.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Module augmentation is a testament to TypeScript's versatility, enabling developers to tailor modules to their needs dynamically. As we progress, we'll explore how global augmentation can be utilized to extend the global scope with additional declarations.&lt;/p&gt;

&lt;p&gt;The exploration of declaration merging in TypeScript demonstrates its profound impact on improving code organization, extensibility, and maintenance. Through practical examples and advanced techniques like module augmentation, developers can leverage these features to build more robust, flexible, and maintainable applications.&lt;/p&gt;

&lt;p&gt;This concludes our deep dive into the intricacies of Declaration Merging in TypeScript, from the basics of merging interfaces to the advanced concepts of module and global augmentation. Armed with this knowledge, you're well-equipped to harness the full potential of TypeScript in your projects.&lt;/p&gt;




</description>
      <category>typescript</category>
    </item>
    <item>
      <title>Online Machine Learning</title>
      <dc:creator>Damika-Anupama</dc:creator>
      <pubDate>Sat, 07 Sep 2024 19:37:24 +0000</pubDate>
      <link>https://dev.to/damikaanupama/online-machine-learning-5g6p</link>
      <guid>https://dev.to/damikaanupama/online-machine-learning-5g6p</guid>
      <description>&lt;p&gt;The concept of online learning emerged in the early 1990s, influenced by the increasing availability of real-time data and the need for models that adapt without retraining on full datasets. The Perceptron algorithm, introduced in the 1950s, laid the groundwork for modern online learning methods. Online machine learning is a learning paradigm in which the model learns progressively in real time by processing input points sequentially. As new data comes in, the model is updated continuously, which enables it to dynamically adjust to changing data distributions without requiring a full dataset retraining. This method works especially well in situations when data is streamed or varies over time.&lt;/p&gt;

&lt;p&gt;There are a few key characteristics:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Incremental learning involves updating the model with each new data point without the need to re-train on the entire dataset.&lt;/li&gt;
&lt;li&gt;Real-time Adaptation is a crucial feature that allows a model to quickly adapt to continuous data arrivals.&lt;/li&gt;
&lt;li&gt;The system efficiently reduces memory and computation costs by processing one data point at a time.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljd2m166oaft423ls8j9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljd2m166oaft423ls8j9.png" alt="Online Learning vs Batch / Offline Learning" width="800" height="587"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;image description: Online Learning vs Batch / Offline Learning, source: &lt;a href="https://www.linkedin.com/pulse/types-machine-learning-techniques-training-method-based-sharma/" rel="noopener noreferrer"&gt;https://www.linkedin.com/pulse/types-machine-learning-techniques-training-method-based-sharma/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A few use cases of online learning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time stock market predictions to analyze the most recent market data&lt;/li&gt;
&lt;li&gt;Online advertising and click-through rate prediction&lt;/li&gt;
&lt;li&gt;A reinforcement learning (RL) agent that predicts the appropriate insulin amount for a short time horizon, based on the real-time glucose level of a type 1 diabetes patient&lt;/li&gt;
&lt;li&gt;Spam filtering and network intrusion detection systems. As new patterns of spam or cyber-attacks emerge, the models are continuously updated to improve accuracy, ensuring timely detection of malicious activities with minimal manual intervention.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Meanwhile, here are a few advantages and disadvantages of this learning paradigm.&lt;br&gt;
Advantages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Since only small amounts of data are processed at a time, resource requirements are reduced and memory efficiency and computation improve.&lt;/li&gt;
&lt;li&gt;Suitable for cases in which data is delivered continuously, such as streaming data.&lt;/li&gt;
&lt;li&gt;Capable of responding to changes in the non-stationary data distribution over time.&lt;/li&gt;
&lt;li&gt;The model's fast updates, assisted by the availability of new data, allow for quick responses to changes in data trends, particularly in dynamic situations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Disadvantages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Small batches might introduce fluctuations, making them susceptible to noise.&lt;/li&gt;
&lt;li&gt;In comparison to batch learning (opposite of online learning), it may take longer to converge to an optimal solution.&lt;/li&gt;
&lt;li&gt;The learning rate and regularization parameters must be carefully tuned for performance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's talk about the mathematical foundation of online learning. Gradient descent-based optimization methods are frequently employed, specifically stochastic gradient descent (SGD), which updates model parameters iteratively based on small batches or individual data points to minimize the objective function over time. The Perceptron algorithm and the Hoeffding tree (a streaming decision tree) are other algorithms that support online learning. Here's a &lt;a href="https://colab.research.google.com/drive/1IeUHtYAAC3hbJNV5TtCkXnBgLcxuisOf?usp=sharing" rel="noopener noreferrer"&gt;sample code demonstrating online learning with SGD&lt;/a&gt;.&lt;/p&gt;
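&lt;p&gt;To make the incremental update concrete, here is a minimal online perceptron sketch (written in TypeScript rather than the linked notebook's Python; the data stream is a made-up toy example). Each sample updates the weights immediately, and the model is never retrained on the full dataset:&lt;/p&gt;

```typescript
// Online perceptron: weights are updated per sample, never on a full batch.
type Sample = { x: number[]; y: number }; // y is +1 or -1

function predict(w: number[], x: number[]): number {
  const score = x.reduce((s, xi, i) => s + w[i] * xi, 0);
  return score >= 0 ? 1 : -1;
}

// One incremental update: adjust the weights only on a misclassified sample.
function update(w: number[], sample: Sample, lr = 0.1): number[] {
  if (predict(w, sample.x) === sample.y) return w;
  return w.map((wi, i) => wi + lr * sample.y * sample.x[i]);
}

// Simulate a stream: samples arrive one by one (x[0] is a bias input of 1).
let w = [0, 0, 0];
const stream: Sample[] = [
  { x: [1, 2, 1], y: 1 },
  { x: [1, -1, -2], y: -1 },
  { x: [1, 3, 2], y: 1 },
  { x: [1, -2, -1], y: -1 },
];
for (const s of stream) w = update(w, s);
console.log(w);
```

&lt;p&gt;Note how nothing in this loop ever revisits earlier samples; that is what keeps memory and computation costs low, at the price of noisier updates.&lt;/p&gt;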

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegl5pg1gmk22qd42xqg0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegl5pg1gmk22qd42xqg0.png" alt="SGD with online learning code output" width="800" height="757"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The output of the online-learning-with-SGD code above shows how the model's accuracy may change over time as new data samples keep arriving. Notice how the advantages and disadvantages mentioned above apply to the code and its output. &lt;/p&gt;

&lt;p&gt;During online learning, the cumulative error rate is calculated by tracking performance over time across all data points; it measures how quickly the model adjusts to new patterns in the data. Memory efficiency comes from processing small chunks of data at a time. A few related learning paradigms intersect with online learning:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Batch Learning / Offline Learning&lt;/strong&gt;: Trains on the whole dataset at once, updating the model only after the entire dataset has been examined. Compared to online learning, this method usually produces updates that are more accurate and reliable, but it uses more memory and processing power. Batch learning is less effective in dynamic or streaming situations where data is always changing, as it is less flexible than online learning when it comes to real-time adaptation to new data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incremental Learning&lt;/strong&gt;: Updates models incrementally as new data becomes available, retaining previous knowledge and incorporating new information. It's suitable for large datasets or computationally expensive scenarios, combining online learning and batch-based updates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Active Learning&lt;/strong&gt;: Involves selecting the most informative data points for learning, useful in scenarios with limited data.&lt;/li&gt;
&lt;/ol&gt;
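&lt;p&gt;The cumulative error rate described earlier is often computed prequentially: each incoming sample is first used to test the current model and only then used to train it. Here is a minimal sketch (the majority-vote "model" is a hypothetical stand-in for a real learner):&lt;/p&gt;

```typescript
// Prequential (test-then-train) evaluation: each incoming sample is first
// used to score the current model, then used to update it.
function prequentialErrorRate(
  labels: number[],                      // stream of true labels (+1 / -1)
  predictor: (seen: number[]) => number, // model: predicts from history so far
): number {
  let errors = 0;
  const seen: number[] = [];
  for (const y of labels) {
    if (predictor(seen) !== y) errors++; // test first...
    seen.push(y);                        // ...then train (here: just remember)
  }
  return errors / labels.length;
}

// Toy "model": predict the majority label seen so far (ties go to +1).
const majority = (seen: number[]) =>
  seen.filter((y) => y === 1).length * 2 >= seen.length ? 1 : -1;

console.log(prequentialErrorRate([1, 1, -1, 1, 1, 1], majority)); // ≈ 0.167
```

&lt;p&gt;Because every sample is scored before the model has seen it, the running rate reflects how quickly the model adapts to the stream.&lt;/p&gt;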

&lt;p&gt;Finally, let's discuss some modern applications of online learning. Real-time recommendation systems such as those at Netflix and Amazon use online learning to recommend content as user preferences evolve. Banks and financial institutions use online models to detect fraudulent transactions in real time. Self-driving cars continuously update their models to handle new sensor data from changing environments. &lt;br&gt;
Future directions of this learning paradigm include Adversarial Robustness, Federated Online Learning, and Online Deep Learning.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>learning</category>
    </item>
    <item>
      <title>Let's Play Snyk 🐶</title>
      <dc:creator>Damika-Anupama</dc:creator>
      <pubDate>Wed, 06 Mar 2024 16:56:56 +0000</pubDate>
      <link>https://dev.to/damikaanupama/lets-play-snyk-4h87</link>
      <guid>https://dev.to/damikaanupama/lets-play-snyk-4h87</guid>
      <description>&lt;p&gt;Hi folks, I'm diving into &lt;a href="https://snyk.io/" rel="noopener noreferrer"&gt;Snyk&lt;/a&gt; this time. This is a platform for developer security that helps protect infrastructure as code, dependencies, containers, and code. Snyk includes the following products and mostly focuses on security and dependency monitoring:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflrfxctew10xy6jhsg0c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflrfxctew10xy6jhsg0c.png" alt="Snyk Plans" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://snyk.io/product/snyk-code/" rel="noopener noreferrer"&gt;&lt;strong&gt;Snyk Code&lt;/strong&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Static application security testing (SAST):&lt;/strong&gt; helps developers find and fix vulnerabilities in their code as they write it. This offers real-time scanning, fix advice, broad language and platform support, machine learning engine, risk prioritization, and workflow integration. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw9eri0qa38z3gldd2n0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw9eri0qa38z3gldd2n0.png" alt="How Snyk shows vulnerabilities in code" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Features:&lt;/strong&gt; Secure code without disrupting development workflow, save time and money by preventing code delays and security issues, and become quasi-security professionals with comprehensive security tooling and knowledge.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://snyk.io/product/open-source-security-management/" rel="noopener noreferrer"&gt;&lt;strong&gt;Snyk Open Source&lt;/strong&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsf7e970ppi7mhwzui24.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsf7e970ppi7mhwzui24.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt; &lt;a href="https://www.google.com/url?sa=i&amp;amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D4ng5usM6fd8&amp;amp;psig=AOvVaw1x14K5isWMgtEHwQrdWPMQ&amp;amp;ust=1709829513164000&amp;amp;source=images&amp;amp;cd=vfe&amp;amp;opi=89978449&amp;amp;ved=0CBMQjRxqFwoTCOi30buJ4IQDFQAAAAAdAAAAABAD" rel="noopener noreferrer"&gt;&lt;em&gt;source&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What is Snyk Open Source?&lt;/strong&gt; Software composition analysis (SCA) solution that helps developers find and fix security vulnerabilities and license issues in open source dependencies. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How does it work 🤔&lt;/strong&gt; Integrates with various developer tools, scans open source packages and dependencies for vulnerabilities and license issues, providing actionable advice and automated workflows for fixing them. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why use it?&lt;/strong&gt; Enables developers to secure open source code using industry-leading security and application intelligence, reducing risk and ensuring compliance with regulatory and internal security policies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://snyk.io/product/container-vulnerability-management/" rel="noopener noreferrer"&gt;&lt;strong&gt;Snyk Container&lt;/strong&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What is Snyk Container?&lt;/strong&gt; Developer-first solution that helps find, prioritize, and fix vulnerabilities in container images and Kubernetes workloads throughout the software development lifecycle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwk4148vsh38wnmm7wmhn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwk4148vsh38wnmm7wmhn.png" alt="Snyk Container Preview" width="800" height="466"&gt;&lt;/a&gt; &lt;a href="https://www.google.com/url?sa=i&amp;amp;url=https%3A%2F%2Fsnyk.io%2Fblog%2Ftips-best-practices-building-secure-container-images%2F&amp;amp;psig=AOvVaw1_CdvdqcB6f0kA9Qxwitvx&amp;amp;ust=1709829852629000&amp;amp;source=images&amp;amp;cd=vfe&amp;amp;opi=89978449&amp;amp;ved=0CBMQjRxqFwoTCKjdmtuK4IQDFQAAAAAdAAAAABAD" rel="noopener noreferrer"&gt;&lt;em&gt;Source&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How does Snyk Container work?&lt;/strong&gt; Snyk Container integrates with developers' daily tools, scans for vulnerabilities in base images, dependencies, Dockerfile commands, and Kubernetes manifests, and provides remediation advice, recommendations, and priority scoring.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why use Snyk Container?&lt;/strong&gt; Snyk Container allows developers to secure containers and Kubernetes workloads without disrupting daily workflows, thereby saving development time, reducing security risks, and achieving compliance objectives.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://snyk.io/product/infrastructure-as-code-security/" rel="noopener noreferrer"&gt;&lt;strong&gt;Snyk Infrastructure as Code&lt;/strong&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What is Snyk IaC?&lt;/strong&gt; Snyk IaC is a tool that helps developers secure their infrastructure as code (IaC) configurations from code to cloud. It scans IaC files for vulnerabilities and misconfigurations, provides remediation advice and fixes, and detects drift in running cloud environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb44szwblz0lw046mdtzr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb44szwblz0lw046mdtzr.png" alt="Snyk IaC preview" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Snyk IaC integrates with developer workflows, providing security feedback and suggested fixes. It enforces consistent security and compliance rules across SDLC and cloud using OPA's Rego query language. It enables proactive security issue fixation and unifies visibility and governance across multiple IaC frameworks and cloud providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://snyk.io/product/snyk-apprisk/" rel="noopener noreferrer"&gt;&lt;strong&gt;Snyk AppRisk&lt;/strong&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What's Snyk AppRisk&lt;/strong&gt;: A solution that helps teams &lt;strong&gt;build, deploy, and operate securely&lt;/strong&gt; in the cloud by embedding security in developer workflows from code to cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftej1jp4c9ytrs44x0d25.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftej1jp4c9ytrs44x0d25.png" alt="Snyk AppRisk preview" width="800" height="500"&gt;&lt;/a&gt; &lt;a href="https://www.google.com/url?sa=i&amp;amp;url=https%3A%2F%2Fsnyk.io%2Fblog%2Fcritical-webp-0-day-cve-2023-4863%2F&amp;amp;psig=AOvVaw3F51I_RHVisXXiWZJ3xdwv&amp;amp;ust=1709830381461000&amp;amp;source=images&amp;amp;cd=vfe&amp;amp;opi=89978449&amp;amp;ved=0CBMQjRxqFwoTCJj-4NuM4IQDFQAAAAAdAAAAABAD" rel="noopener noreferrer"&gt;&lt;em&gt;source&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Key Features&lt;/strong&gt;: Snyk AppRisk provides security feedback and fixes for &lt;strong&gt;code, dependencies, container images, and cloud infrastructure as code (IaC)&lt;/strong&gt; across the software development life cycle (SDLC) and running cloud environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Benefits&lt;/strong&gt;: Snyk AppRisk enables developers to &lt;strong&gt;proactively fix security issues&lt;/strong&gt; in their IDE, CLI, and Git workflows, reducing backlogs and time to fix. It also &lt;strong&gt;unifies visibility and governance&lt;/strong&gt; from code to cloud with a single policy engine and ruleset, and &lt;strong&gt;speeds up and scales&lt;/strong&gt; developer-led fixes for cloud misconfigurations with direct links to the source IaC file in Git workflows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supported Technologies&lt;/strong&gt;: &lt;strong&gt;Terraform, CloudFormation, ARM, Kubernetes, Docker, AWS, Azure, Google Cloud&lt;/strong&gt;, and more. It also integrates with &lt;strong&gt;Sysdig&lt;/strong&gt; for runtime security and &lt;strong&gt;OPA&lt;/strong&gt; for policy enforcement.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before we go any further, let me tackle two key questions that you might be having 😉&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What's the meaning of Snyk?&lt;/li&gt;
&lt;li&gt;How do you pronounce this word 👀 
This &lt;a href="https://support.snyk.io/hc/en-us/articles/360000890358-How-do-you-pronounce-Snyk#:~:text=Snyk%20is%20short%20for%20'So%20Now%20You%20Know'." rel="noopener noreferrer"&gt;Snyk support&lt;/a&gt; article provides the answers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrsoj9mpumsrfpj6cq2m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrsoj9mpumsrfpj6cq2m.png" alt="Image description" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Let's use Snyk&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There are a couple of ways we can use Snyk:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can install the &lt;a href="https://docs.snyk.io/snyk-cli/getting-started-with-the-snyk-cli" rel="noopener noreferrer"&gt;Snyk CLI&lt;/a&gt; from your terminal and scan your project with it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez86iuybp9p15ytj793z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez86iuybp9p15ytj793z.png" alt="Image description" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.snyk.io/integrate-with-snyk/ide-tools" rel="noopener noreferrer"&gt;IDE Plugins&lt;/a&gt; - my main IDE is VSCode, but you can also use Snyk in Jetbrains IDEs, Eclipse and Visual Studio &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9i9o9slnze6otis6inf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9i9o9slnze6otis6inf.png" alt="Snyk VSCode plugin preview" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99w5ghjyftuimwaz6rfv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99w5ghjyftuimwaz6rfv.png" alt="after the installation how it previews" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.snyk.io/integrate-with-snyk/git-repositories-scms-integrations-with-snyk" rel="noopener noreferrer"&gt;Git Repositories&lt;/a&gt; - GitHub, Bitbucket, Gitlab and Azure (TFS). From these GitHub and Bitbucket integrations are popular. For this you have to login with your relevant account&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1vxv2h74ti7260ya2rt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1vxv2h74ti7260ya2rt.png" alt="login with git repository integration" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you add GitHub repositories to Snyk, you can see each repository's vulnerabilities in the Projects section of the Snyk dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruyqb6b7nm60k6phenxt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruyqb6b7nm60k6phenxt.png" alt="Snyk dashboard projects" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can examine each security vulnerability by opening a project, where it will be shown like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03r4gt6f0v6dtmwohqco.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03r4gt6f0v6dtmwohqco.png" alt="Security vulnerabilities in a project" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Settings allows you to adjust Snyk's configuration for each Git repository. It offers an impressive collection of capabilities: you can set up automated Snyk pull requests for repositories, enable Snyk scans for manual pull requests, and activate Snyk for code and for IaC. You can also check your Snyk usage (if you're on the Snyk free plan like me, you can see how much of your allowance you've used), and Snyk can be integrated with your existing notification system, such as Slack. &lt;/p&gt;

&lt;p&gt;Here's a Snyk-bot's automatic pull request on GitHub repository&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1pv6yo4pbynslbxo1goa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1pv6yo4pbynslbxo1goa.png" alt="Snyk-bot's automatic pull request on GitHub repository" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the resources in your plan are limited, you can change the Snyk configuration to scan your repository's code and IaC weekly instead.&lt;/p&gt;

&lt;p&gt;Furthermore, you can add the Snyk app to your GitHub account from the &lt;a href="https://github.com/marketplace/snyk" rel="noopener noreferrer"&gt;marketplace&lt;/a&gt;: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l7ce53kpo8sgq67bhbt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l7ce53kpo8sgq67bhbt.png" alt="Snyk app" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.snyk.io/integrate-with-snyk/snyk-ci-cd-integrations" rel="noopener noreferrer"&gt;Snyk for CI/CD&lt;/a&gt; - Pipelines and integrations in AWS, Azure, Bitbucket and more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi050ze91uiga99c6b3j4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi050ze91uiga99c6b3j4.png" alt="Snyk for CI/CD" width="800" height="412"&gt;&lt;/a&gt; &lt;a href="https://www.google.com/url?sa=i&amp;amp;url=https%3A%2F%2Fsnyk.io%2Fblog%2Ffind-fix-vulnerabilities-ci-cd-pipeline-snyk-harness%2F&amp;amp;psig=AOvVaw2rfI3gcDfVsFzu0LxUyms9&amp;amp;ust=1709827966046000&amp;amp;source=images&amp;amp;cd=vfe&amp;amp;opi=89978449&amp;amp;ved=0CBUQjhxqFwoTCJDYk9iD4IQDFQAAAAAdAAAAABAD" rel="noopener noreferrer"&gt;&lt;em&gt;Image source&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>devsecops</category>
      <category>security</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Can COLD STARTS be prevented by Webpack ❄️</title>
      <dc:creator>Damika-Anupama</dc:creator>
      <pubDate>Sat, 03 Feb 2024 06:39:39 +0000</pubDate>
      <link>https://dev.to/damikaanupama/can-cold-starts-be-stopped-by-webpack-d</link>
      <guid>https://dev.to/damikaanupama/can-cold-starts-be-stopped-by-webpack-d</guid>
      <description>&lt;p&gt;Hi folks, let's talk about Cold Starts in Cloud Services. I'm Focusing on the AWS cold starts, according to&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The background details of cold starts&lt;/li&gt;
&lt;li&gt;How cold starts begin&lt;/li&gt;
&lt;li&gt;What are the effects of cold starts&lt;/li&gt;
&lt;li&gt;How I tried to prevent cold starts&lt;/li&gt;
&lt;li&gt;Further improvements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'll try my best to provide you updated details via this article. So, let's begin 😎&lt;/p&gt;




&lt;h1&gt;
  
  
  The background details of cold starts
&lt;/h1&gt;

&lt;p&gt;If you work in a software company that uses cloud services in production, you've probably heard developers talk about cold starts. If not, now is the time to learn everything about them. Managing cold starts in the cloud reduces service costs and the time it takes to execute HTTP requests. &lt;/p&gt;

&lt;p&gt;The term &lt;strong&gt;cold start&lt;/strong&gt; relates to Function-as-a-Service offerings in the cloud, and &lt;a href="https://mikhail.io/serverless/coldstarts/big3/" rel="noopener noreferrer"&gt;this article&lt;/a&gt; provides a pretty good comparison and analysis of cold starts in serverless functions across AWS (Lambda), Azure (Functions), and GCP (Cloud Functions).&lt;/p&gt;




&lt;h1&gt;
  
  
  How cold starts begin
&lt;/h1&gt;

&lt;p&gt;A cold start stems from how the Lambda service architecture is implemented in AWS. Normally, when we send an API request to Lambda, the service works in the following order:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Is there a free execution environment (a container with a runtime)?&lt;/li&gt;
&lt;li&gt;If not, create an execution environment&lt;/li&gt;
&lt;li&gt;Download the Lambda code&lt;/li&gt;
&lt;li&gt;Initialize (run the code outside the handler function)&lt;/li&gt;
&lt;li&gt;Run the &lt;code&gt;handler&lt;/code&gt; function code&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Steps 2, 3, and 4 cause cold starts&lt;/strong&gt;&lt;/p&gt;
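&lt;p&gt;The ordering above can be sketched as a toy simulation (the environment pool and reuse logic are invented for illustration; real Lambda internals are far more involved):&lt;/p&gt;

```javascript
// Toy sketch of the Lambda service decision flow described above.
// The environment pool is a made-up model, not real Lambda internals.
const pool = []; // execution environments kept warm by the service

function handler(request) {
  return `ok:${request}`;
}

function invoke(request) {
  // Step 1: is there a free execution environment?
  let env = pool.find((e) => !e.busy);
  let coldStart = false;
  if (!env) {
    // Steps 2-4: create the environment, download the code, initialize
    coldStart = true;
    env = { busy: false };
    pool.push(env);
  }
  env.busy = true;
  const response = handler(request); // Step 5: run the handler code
  env.busy = false;
  return { response, coldStart, concurrency: pool.length };
}
```

&lt;p&gt;The first invocation pays the cold start; a second sequential invocation reuses the free environment, so the concurrency stays at 1.&lt;/p&gt;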

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nlu0rwa2u56km5rl6if.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nlu0rwa2u56km5rl6if.png" alt="Cold start procedure" width="800" height="303"&gt;&lt;/a&gt;&lt;em&gt;Extracted from &lt;a href="https://youtu.be/2EDNcPvR45w?si=KOINXXTUPAeRah2r&amp;amp;t=239" rel="noopener noreferrer"&gt;AWS re:Invent 2023 - Demystifying and mitigating AWS Lambda cold starts (COM305)&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Then what's a Lambda warm start?
&lt;/h3&gt;

&lt;p&gt;These &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtime-environment.html" rel="noopener noreferrer"&gt;execution environments&lt;/a&gt; (Lambda instances) remain alive for only &lt;a href="https://repost.aws/questions/QUKdeptBaRT5OKa-5ZCY3Bpg/how-long-does-a-lambda-instance-can-keep-warm" rel="noopener noreferrer"&gt;10 or 15 minutes&lt;/a&gt; after the first run (cold start) of the handler function. If another request arrives during that window, it reuses the already built execution environment; this is a warm start. &lt;strong&gt;BUT&lt;/strong&gt; if a request arrives while the existing environment is busy with another request, the Lambda service has to create another execution environment (look at the steps of the Lambda service working order above and you can see this process), and the function's concurrency increases by 1 (total concurrency = 1 + 1 = 2).&lt;/p&gt;

&lt;p&gt;PS: When you’re working with Lambda functions, you’ll probably also work with AWS CloudWatch, where log groups and log streams can be confusing at first. One log group is created per Lambda function, and one log stream is created per Lambda instance (&lt;a href="https://docs.aws.amazon.com/lambda/latest/operatorguide/log-structure.html" rel="noopener noreferrer"&gt;source&lt;/a&gt;), in other words, whenever a cold start occurs. So one CloudWatch stream includes all the details of that Lambda instance: the cold start, warm starts, cached requests, logs, errors, and so on. AWS X-Ray works on the logs output by the CloudWatch service.&lt;/p&gt;




&lt;h1&gt;
  
  
  What are the effects of cold starts
&lt;/h1&gt;

&lt;p&gt;You might wonder why we are focusing on Lambda cold starts. Because a cold start means the relevant Lambda instance is still starting up, the HTTP request must wait until the instance is warm, which can add a few seconds of latency to the frontend website or mobile app. This leads to a poor user experience. Worse, if your application is a response-critical system, such as a banking app, an e-commerce website, or a stock market application, this could cause significant problems.&lt;/p&gt;




&lt;h1&gt;
  
  
  How I tried to prevent cold starts
&lt;/h1&gt;

&lt;p&gt;I used &lt;strong&gt;&lt;a href="https://webpack.js.org/concepts/why-webpack/" rel="noopener noreferrer"&gt;webpack&lt;/a&gt;&lt;/strong&gt; to try to reduce the cold start time of AWS Lambda functions. Normally we use Webpack for bundling single-page applications, typically with frontend frameworks such as Angular CLI and React. Here we use Webpack's bundling, minifying, and tree-shaking features to &lt;a href="https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/webpack.html" rel="noopener noreferrer"&gt;bundle AWS Node.js Lambda functions&lt;/a&gt;.&lt;br&gt;
My approach to applying Webpack in microservices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Install Webpack and other relevant dependencies&lt;br&gt;
&lt;code&gt;npm install --save-dev webpack webpack-cli ts-loader webpack-node-externals glob&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure &lt;code&gt;webpack.config.ts&lt;/code&gt; with &lt;code&gt;index.ts&lt;/code&gt; as the entry point for Webpack.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import * as path from "path";
import { Configuration } from "webpack";
import nodeExternals from "webpack-node-externals";
const config: Configuration = {
    entry: "./index.ts",
    target: "node",
    mode: "production",
    module: {
        rules: [
            {
                test: /\.ts$/,
                use: "ts-loader",
                exclude: /node_modules/,
            },
        ],
    },
    resolve: {
        extensions: [ ".ts", ".js" ],
    },
    output: {
        filename: "[name].js",
        path: path.resolve(__dirname, "lib"),
        libraryTarget: "commonjs2",
    },
    externalsPresets: { node: true }, // Use externalsPresets to specify Node.js environment
    externals: [nodeExternals()], // Use the function to exclude node_modules
};
export default config;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Add a build script to &lt;code&gt;package.json&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"scripts": {
  "build": "webpack --config webpack.config.ts"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Update &lt;code&gt;tsconfig.json&lt;/code&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Add &lt;code&gt;webpack.config.ts&lt;/code&gt; to the &lt;code&gt;include&lt;/code&gt; array so that TypeScript compiles the Webpack config file as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "compilerOptions": {
        "module": "commonjs",
        "moduleResolution": "node",
        "esModuleInterop": true,
        "pretty": true,
        "sourceMap": true,
        "allowJs": true,
        "target": "es6",
        "outDir": "./lib",
        "baseUrl": "./",
        "types": ["chai", "node"],
    },
    "include": [
        "./**/*",
        "test/.mocharc",
        "webpack.config.ts"
    ],
    "exclude": [
        "node_modules", "lib"
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After making these changes, you can build your code and see exactly how your &lt;code&gt;js&lt;/code&gt; files are minified and their dependencies tree-shaken. In my case, applying Webpack to my Lambda functions reduced the output folder's package size by nearly 90%. Keep in mind that this may vary according to &lt;a href="https://docs.aws.amazon.com/pdfs/whitepapers/latest/microservices-on-aws/microservices-on-aws.pdf" rel="noopener noreferrer"&gt;your architecture and other factors&lt;/a&gt;, but bundling the code with Webpack should reduce the final output package size. Next, I went on to check the impact of Webpack on Lambda execution time.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I conducted my tests
&lt;/h2&gt;

&lt;p&gt;I used two API calls to check the impact of Webpack.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GET ALL - Get All Users&lt;/li&gt;
&lt;li&gt;GET - Get User by ID&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After sending a couple of API requests to the AWS Lambda functions, I went to CloudWatch to see the results. Later I used AWS X-Ray, because it clearly shows only the relevant results for a set of API calls (for this, you first need to enable X-Ray on your Lambda functions).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hi4sdf3u9374xsash6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hi4sdf3u9374xsash6y.png" alt="API Request details" width="760" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This image shows a set of API requests I sent to an instance of my Lambda function. As I mentioned earlier in this article, every instance starts with a cold start, and subsequent requests to it are served as warm starts; you can tell them apart by the response time. Meanwhile, requests might also be cached, for a couple of reasons.&lt;/p&gt;
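&lt;p&gt;With response times in hand, a crude way to separate cold from warm invocations is a duration threshold (the threshold below is arbitrary for illustration; derive a real one from your own X-Ray data):&lt;/p&gt;

```javascript
// Sketch: classify invocations as cold or warm starts by response time,
// assuming cold starts are markedly slower than warm ones. The 1000 ms
// threshold is arbitrary; pick one from your own measurements.
function classify(durationsMs, coldThresholdMs = 1000) {
  return durationsMs.map((d) => (d >= coldThresholdMs ? "cold" : "warm"));
}
```

&lt;p&gt;For example, &lt;code&gt;classify([1800, 120, 95])&lt;/code&gt; labels the first request a cold start and the rest warm starts.&lt;/p&gt;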

&lt;h2&gt;
  
  
  Caching
&lt;/h2&gt;

&lt;p&gt;When we send the same request a couple of times, the response may become cached. Caching can occur for a variety of reasons, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lambda Container Reuse&lt;/strong&gt;: AWS Lambda may reuse the same container for multiple invocations, which can lead to data persistence across invocations if our code uses global variables or similar constructs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Gateway Caching&lt;/strong&gt;: If we’re invoking our Lambda through API Gateway, it might be caching responses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Client-side Caching&lt;/strong&gt;: Tools like Postman might cache responses based on headers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So we can obtain responses without even a warm start: caching saves Lambda resources by returning the previous result without invoking the Lambda instance at all. You can clearly see how the response time drops once responses are cached.&lt;/p&gt;

&lt;p&gt;Since our goal is to observe Lambda warm starts, we can avoid caching by leaving a 2-3 minute delay between two API requests. Alternatively, to test Lambda functions without interference from caches, we can try the following approaches:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Disable API Gateway Caching&lt;/strong&gt;: Ensure that caching is disabled in API Gateway settings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unique Query Parameters&lt;/strong&gt;: When testing with Postman or similar tools, you can add a unique query parameter to each request. This approach can prevent client-side and intermediate caching. For example, append a timestamp or a random number as a query parameter.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Avoiding Caching in Postman&lt;/strong&gt;: If you suspect Postman might be caching responses, you can disable caching in Postman settings or use a different tool for testing, like curl in the command line.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scheduled Invocations&lt;/strong&gt;:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;To invoke Lambda functions automatically without API calls, you can use AWS CloudWatch Events (or EventBridge).&lt;/li&gt;
&lt;li&gt;Set up a rule to trigger your Lambda function at regular intervals.&lt;/li&gt;
&lt;li&gt;This approach can be useful for simulating traffic and understanding Lambda behavior over time.&lt;/li&gt;
&lt;/ul&gt;
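&lt;p&gt;For point 2 above, a unique query parameter can be generated like this (the URL and the parameter name are placeholders):&lt;/p&gt;

```javascript
// Sketch: append a unique query parameter so every request looks
// different to client-side and intermediate caches. The parameter
// name "cacheBuster" is an arbitrary convention.
function withCacheBuster(baseUrl) {
  const url = new URL(baseUrl);
  url.searchParams.set(
    "cacheBuster",
    `${Date.now()}-${Math.random().toString(36).slice(2)}`
  );
  return url.toString();
}
```

&lt;p&gt;Two calls with the same base URL produce two distinct URLs, so neither Postman nor API Gateway should serve a cached response for the second one.&lt;/p&gt;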

&lt;p&gt;When checking &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/services-xray.html" rel="noopener noreferrer"&gt;X-Ray traces for Lambda functions&lt;/a&gt;, I should specifically mention that there are two nodes, called function and context.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda Context and Function - X-RAY
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdp2jorzhk85hfxnjhsq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdp2jorzhk85hfxnjhsq.png" alt="GetUsers and GetUserId using XRAY" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Function node&lt;/strong&gt;: the actual execution of the Lambda function's code, in other words the execution time of the function logic itself, including the time taken by any libraries it uses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Context (or initialization) node&lt;/strong&gt;: the "initialization" or "bootstrap" phase of the Lambda function, i.e. the time AWS Lambda spends initializing the execution environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;A few cold start timings that I obtained with X-Ray are included in the table below. Ultimately, although my output folder's package size was reduced, neither the Lambda execution time nor the cold start time improved. 😑 Since I make quite a few mistakes, please leave a comment if you find anything wrong with my process. 🫡&lt;br&gt;
CS - Cold Start&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fintfofbxdf70dvkouxvf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fintfofbxdf70dvkouxvf.png" alt="Cold starts test data" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optional&lt;/strong&gt;: For analyzing AWS X-Ray logs and monitoring the performance impact of applying Webpack to AWS microservices, especially in terms of Lambda cold starts and warm starts, there are several third-party tools and services that might be useful. These tools offer more advanced analytics and visualization capabilities than what's available directly in AWS X-Ray or CloudWatch.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://newrelic.com/welcome-back" rel="noopener noreferrer"&gt;New Relic&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.datadoghq.com/" rel="noopener noreferrer"&gt;Datadog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sumologic.com/" rel="noopener noreferrer"&gt;Sumo Logic&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.dynatrace.com/" rel="noopener noreferrer"&gt;Dynatrace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/apn/tag/thundra/" rel="noopener noreferrer"&gt;Thundra&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/retail/partner-solutions/epsagon/" rel="noopener noreferrer"&gt;Epsagon&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you have the &lt;a href="https://education.github.com/pack" rel="noopener noreferrer"&gt;GitHub Student Developer Pack&lt;/a&gt;, you can get free credits for some of these services.&lt;/p&gt;




&lt;h1&gt;
  
  
  Further improvements.
&lt;/h1&gt;

&lt;p&gt;Later on, I discovered that Lambda memory can also affect cold starts, as well as the AWS monthly bill. We can find the optimal memory size for each of our Lambdas using &lt;a href="https://github.com/alexcasalboni/aws-lambda-power-tuning?tab=readme-ov-file" rel="noopener noreferrer"&gt;AWS Lambda Power Tuning&lt;/a&gt;, and &lt;a href="https://towardsdatascience.com/optimize-aws-lambda-memory-more-memory-doesnt-mean-more-costs-51ba566fecc7" rel="noopener noreferrer"&gt;this article&lt;/a&gt; explains well how to test Lambda functions with the power tuning tool. Although Webpack may not have much impact at your current Lambda memory size, it might show a significant impact at a different memory size.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9x4m7vyz6822lje0qkn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9x4m7vyz6822lje0qkn.png" alt="Powertuning output" width="800" height="351"&gt;&lt;/a&gt; &lt;em&gt;&lt;a href="https://github.com/alexcasalboni/aws-lambda-power-tuning/blob/master/imgs/visualization.png?raw=true" rel="noopener noreferrer"&gt;Image source&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Furthermore, we can apply a couple of solutions. The images for solutions 1 and 2 are from this &lt;a href="https://www.youtube.com/watch?v=Pvkq5g80MPg&amp;amp;pp=ygUuVXNpbmcgQ2xvdWRXYXRjaCBFdmVudCBSdWxlIHJlZHVjZSBjb2xkIHN0YXJ0cw%3D%3D" rel="noopener noreferrer"&gt;video&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution 01 - Using CloudWatch Event Rule
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf0mnmhmqljqtw9et1dv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf0mnmhmqljqtw9et1dv.png" alt="Image description" width="621" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we ping a Lambda on a schedule (e.g., every 10 or 15 minutes) to keep an execution environment alive. Lambda might still remove this environment after 1 to 1.5 hours, but until then requests can use the execution environment kept warm by the scheduled event. In this way we can reduce the number of cold starts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsb6eduin8zosbhk6qe8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsb6eduin8zosbhk6qe8w.png" alt="Image description" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can use the &lt;code&gt;serverless-plugin-warmup&lt;/code&gt; package for this purpose.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution 02 - Lambda Provisioned Concurrency
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F833mu43ky6dmmeyk8vgc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F833mu43ky6dmmeyk8vgc.png" alt="Image description" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each AWS account gets a default concurrency quota of 1,000, and any provisioned concurrency we configure is deducted from that account-level quota.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ffrkyhn855qtl79o3b0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ffrkyhn855qtl79o3b0.png" alt="Image description" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost comparison between solution 1 and solution 2
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyxvc35lzcjfyxuxe3fi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyxvc35lzcjfyxuxe3fi.png" alt="Image description" width="800" height="662"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This illustrates the cost comparison for a Lambda function with 100 instances. With Lambda pinging, AWS charges only the normal Lambda rates. With provisioned concurrency, AWS charges an additional cost for the explicitly kept-warm environments, and it keeps charging until we disable provisioned concurrency.&lt;/p&gt;

&lt;h4&gt;
  
  
  Best Practices
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Do NOT apply provisioned concurrency to all your Lambdas; apply it only to frequently invoked ones.&lt;/li&gt;
&lt;li&gt;Use provisioned concurrency based on a schedule (&lt;strong&gt;Scheduled scaling for Application Auto Scaling&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;Apply Lambda pinging for less frequently used Lambdas. Pinging can be configured with a &lt;strong&gt;CRON expression to minimize cost&lt;/strong&gt; even further, for example pinging only Monday to Friday, 8 AM to 5 PM.&lt;/li&gt;
&lt;li&gt;Identify the best memory allocation for your Lambdas using tools like &lt;a href="https://github.com/alexcasalboni/aws-lambda-power-tuning" rel="noopener noreferrer"&gt;&lt;strong&gt;Lambda Power Tuning&lt;/strong&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>webpack</category>
      <category>microservices</category>
    </item>
  </channel>
</rss>
