<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Paul Mowat</title>
    <description>The latest articles on DEV Community by Paul Mowat (@paulmowat).</description>
    <link>https://dev.to/paulmowat</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F234124%2F0ede808c-093a-4be7-b0d0-e25b9b8dbe9e.jpeg</url>
      <title>DEV Community: Paul Mowat</title>
      <link>https://dev.to/paulmowat</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/paulmowat"/>
    <language>en</language>
    <item>
      <title>How we moved from Artifactory and saved $200k p.a. Part 5 of 5 - Reaching our goal</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Wed, 28 Sep 2022 15:39:54 +0000</pubDate>
      <link>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-5-of-5-reaching-our-goal-23dh</link>
      <guid>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-5-of-5-reaching-our-goal-23dh</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Welcome back to the final part of our 5-part series on 'How we moved from Artifactory and saved $200k p.a'.&lt;/p&gt;

&lt;p&gt;If you are just joining we recommend jumping back to the beginning and starting from there.&lt;/p&gt;

&lt;h2&gt;
  
  
  The deadline
&lt;/h2&gt;

&lt;p&gt;On 19th August 2022 at 21:56 the final migration request was completed: a proud moment. The migration of approximately 1.5 million artefacts had been planned, transferred and verified. We had met our tight deadline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PjoMmuvg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-5/completion.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PjoMmuvg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-5/completion.png" alt="completing the final migration request" width="637" height="63"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;completing the final migration request&lt;/p&gt;

&lt;p&gt;A short while after, on 31st August 2022, our subscription with JFrog expired and, in turn, all access was revoked. This was the point of no return and the milestone about which we were most apprehensive. The team was ready for the inevitable influx of tickets. For the most part, though, and perhaps surprisingly, things were relatively quiet. Yes, we had some issues; we had expected these and readied ourselves, but nothing much arrived. A positive sign!&lt;/p&gt;

&lt;h2&gt;
  
  
  A pleasant surprise
&lt;/h2&gt;

&lt;p&gt;This seems a good opportunity to recall the goal we set for ourselves:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To migrate all requested artefacts from Artifactory without losing any, writing custom tooling as necessary&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Approaching 4 weeks since our Artifactory subscription ended and the Advanced Artefacts service went live, post-migration support has consistently been quiet, with zero packages lost to date.&lt;/p&gt;

&lt;h2&gt;
  
  
  In summary: some interesting facts and stats…
&lt;/h2&gt;

&lt;p&gt;To conclude we thought it would be nice to summarise some of the key points from the process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1.5 million artefacts analysed&lt;/li&gt;
&lt;li&gt;24.5 TB of data migrated without losing a file.&lt;/li&gt;
&lt;li&gt;222 tickets processed. 215 completed, 7 for phase 2.&lt;/li&gt;
&lt;li&gt;18 clinics

&lt;ul&gt;
&lt;li&gt;The first clinic had over 130 participants&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;772 users engaged through our internal MS Teams channel:

&lt;ul&gt;
&lt;li&gt;109 posts&lt;/li&gt;
&lt;li&gt;590 replies&lt;/li&gt;
&lt;li&gt;67 mentions&lt;/li&gt;
&lt;li&gt;115 reactions&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Provided dedicated support channels with fast response (typically less than an hour during UK hours)&lt;/li&gt;
&lt;li&gt;Days sleeping, thinking and dreaming of artefacts - 90+&lt;/li&gt;
&lt;li&gt;14+ hour days - many many many&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The numbers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;1,200,624 objects in S3&lt;/li&gt;
&lt;li&gt;25 TB storage consumed&lt;/li&gt;
&lt;li&gt;21.5 MB average object size&lt;/li&gt;
&lt;li&gt;15,307.69 EC2 usage hours&lt;/li&gt;
&lt;li&gt;2,540 migration jobs run with SSM&lt;/li&gt;
&lt;li&gt;888 EC2 spot instances used&lt;/li&gt;
&lt;li&gt;56 CloudFormation stacks created&lt;/li&gt;
&lt;li&gt;10,539 ECR Authorization tokens requested&lt;/li&gt;
&lt;li&gt;5,027 ECR Image pushes&lt;/li&gt;
&lt;li&gt;7,968 CloudWatch log streams created&lt;/li&gt;
&lt;li&gt;680,263 CloudWatch PutLogEvents called&lt;/li&gt;
&lt;li&gt;100% Spot instance utilisation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The costs and savings
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The 3-Month spend over the project, including the migration workers = $8400

&lt;ul&gt;
&lt;li&gt;Our original budget estimate = $6000&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;The 3-Month S3 storage costs = $407.05&lt;/li&gt;
&lt;li&gt;The 3-Month CodeArtifact costs = $161.19&lt;/li&gt;
&lt;li&gt;The 3-Month ECR costs = $115.94&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Using these figures as a basis for calculating our costs for the upcoming year, we estimate savings of $200,000 per annum versus our Artifactory subscription.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U4iXv_tA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-5/mathieu-stern-1zO4O3Z0UJA-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U4iXv_tA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-5/mathieu-stern-1zO4O3Z0UJA-unsplash.jpg" alt="savings image" width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;/li&gt;
&lt;/ul&gt;
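&lt;p&gt;As a back-of-the-envelope illustration, annualising the three-month running costs above is simple arithmetic (this sketch covers only the S3, CodeArtifact and ECR line items quoted, not the whole AWS bill):&lt;/p&gt;

```go
package main

import "fmt"

// annualise extrapolates a 3-month cost to a full year.
func annualise(threeMonthCost float64) float64 {
	return threeMonthCost * 4
}

func main() {
	// The three-month running costs quoted above (USD).
	s3 := 407.05
	codeArtifact := 161.19
	ecr := 115.94

	quarter := s3 + codeArtifact + ecr
	fmt.Printf("3-month running cost: $%.2f\n", quarter)            // $684.18
	fmt.Printf("Estimated annual running cost: $%.2f\n", annualise(quarter)) // $2736.72
}
```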

&lt;h2&gt;
  
  
  That’s all folks
&lt;/h2&gt;

&lt;p&gt;We are hugely proud of our efforts and what we have delivered. We would like to thank our engineering teams for their support and engagement. This was a tremendously challenging project, involving complexities, long hours and hard work, but overall it was incredibly fun and rewarding in equal measures.&lt;/p&gt;

&lt;p&gt;Thanks for reading! Please return your chairs to an upright position and thanks for flying with Air Advanced Artefacts ;)&lt;/p&gt;

</description>
      <category>aws</category>
      <category>artifactory</category>
      <category>codeartifact</category>
      <category>ecr</category>
    </item>
    <item>
      <title>How we moved from Artifactory and saved $200k p.a. Part 4 of 5 - Migration</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Wed, 28 Sep 2022 15:38:55 +0000</pubDate>
      <link>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-4-of-5-migration-5h8j</link>
      <guid>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-4-of-5-migration-5h8j</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Welcome back to Part 4 of our 5-part series on 'How we moved from Artifactory and saved $200k p.a'.&lt;/p&gt;

&lt;p&gt;If you are just joining we recommend jumping back to the beginning and starting from there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migration preparation
&lt;/h2&gt;

&lt;p&gt;The following steps were performed in readiness:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Publish a &lt;strong&gt;checklist&lt;/strong&gt; of steps that teams could follow which would help identify product assets that need migrating&lt;/li&gt;
&lt;li&gt;Research how we could append custom metadata to existing artefacts

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;tagging&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Study the Artifactory REST API and AQL documentation&lt;/li&gt;
&lt;li&gt;Research metrics that could be queried for comparison and verification purposes:

&lt;ul&gt;
&lt;li&gt;storage summary&lt;/li&gt;
&lt;li&gt;size&lt;/li&gt;
&lt;li&gt;number of artefacts&lt;/li&gt;
&lt;li&gt;number of files&lt;/li&gt;
&lt;li&gt;number of folders&lt;/li&gt;
&lt;li&gt;creation timestamps&lt;/li&gt;
&lt;li&gt;deployment ordering&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Analyse the repository path structure and flag outliers&lt;/li&gt;
&lt;li&gt;Research publishing options for native CLIs through a lens of using containers to perform the workload

&lt;ul&gt;
&lt;li&gt;discovered requirements which ruled out AWS Batch and ECS&lt;/li&gt;
&lt;li&gt;e.g. Windows and Windows containers&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Build a categorised data source to map existing repositories, including:

&lt;ul&gt;
&lt;li&gt;dev or release repository&lt;/li&gt;
&lt;li&gt;Advanced Artefacts repository name using new dev/release convention&lt;/li&gt;
&lt;li&gt;flag large repository&lt;/li&gt;
&lt;li&gt;flag empty repository&lt;/li&gt;
&lt;li&gt;flag large numbers of artefacts in the repository&lt;/li&gt;
&lt;li&gt;flag unsupported/problematic repository package types&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Consider failure scenarios and recovery option features:

&lt;ul&gt;
&lt;li&gt;dry-run&lt;/li&gt;
&lt;li&gt;ability to replay&lt;/li&gt;
&lt;li&gt;debug levels&lt;/li&gt;
&lt;li&gt;resume (using offsets)&lt;/li&gt;
&lt;li&gt;clear progress counters&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Log decisions using Architecture Design Records&lt;/li&gt;
&lt;li&gt;Review Architecture Design Records&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We spent a considerable amount of our time budget on the planning and preparation stages. This served us well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Checklist
&lt;/h3&gt;

&lt;p&gt;To facilitate the process, we created a preparation checklist intended to give all impacted products and teams a clear, concise set of preparatory steps that would (hopefully) heighten awareness in the appropriate areas. The main points it covered were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Begin ASAP!&lt;/li&gt;
&lt;li&gt;Determine a complete list of impacted builds/artefacts which needed to be supported in production&lt;/li&gt;
&lt;li&gt;Review all CI/CD pipelines&lt;/li&gt;
&lt;li&gt;Review all runbooks (automated and manual)&lt;/li&gt;
&lt;li&gt;Review all production deployments&lt;/li&gt;
&lt;li&gt;Review all disaster recovery processes&lt;/li&gt;
&lt;li&gt;Identify all artefacts that will need to be migrated through property sets tagging in Artifactory&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tagging
&lt;/h3&gt;

&lt;p&gt;Before we look at those steps, however, we need to cover tagging. Artifactory has a useful feature called Property Sets. We decided early in the process (decision log: check) to make wide use of custom properties through Property Sets. Our requirements were principally:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Artefact hygiene - remove unused or unsupported packages, waste&lt;/li&gt;
&lt;li&gt;Docker images underlying OS type - differentiate between Linux and Windows containers&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; whilst it is possible to read the manifest of each Docker image and determine the existence of foreign layers, we were focused on stability and repeatability as well as speed. We wanted to utilise native tooling as much as possible and felt that building custom tooling around the supported schemas/configurations would add unwelcome complexity.&lt;/p&gt;
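&lt;p&gt;As a sketch of how tagging can be driven programmatically, Artifactory's documented Set Item Properties REST endpoint takes semicolon-separated key/value pairs on the query string. The repository, path and property names below are illustrative, not our real ones:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// propertiesURL builds Artifactory's "Set Item Properties" call:
// PUT /api/storage/{repo}/{path}?properties=k1=v1;k2=v2
func propertiesURL(base, repo, path string, props map[string]string) string {
	keys := make([]string, 0, len(props))
	for k := range props {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic ordering for logging/replay
	pairs := make([]string, 0, len(keys))
	for _, k := range keys {
		pairs = append(pairs, k+"="+props[k])
	}
	return fmt.Sprintf("%s/api/storage/%s/%s?properties=%s",
		base, repo, path, strings.Join(pairs, ";"))
}

func main() {
	// Repository, path and property names are illustrative.
	fmt.Println(propertiesURL(
		"https://artifactory.example.com/artifactory",
		"docker-dev", "myimage/1.0.0",
		map[string]string{"migrate": "true", "os": "windows"}))
}
```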

&lt;h3&gt;
  
  
  Failure recovery
&lt;/h3&gt;

&lt;p&gt;The focus for migrations was specific to our end goals and where appropriate we tried to reuse our tools across different package types. Ultimately, the project as a whole needed to be successful once, with many smaller eventual successes along the way. Building effective tooling was critical to the success of the project. We needed to use software to automate as much as we could whilst also allowing us to clearly aggregate the activities we had performed which in turn could be used as means of verification.&lt;/p&gt;

&lt;p&gt;Key features such as dry-run mode, debug levels, and the ability to replay/resume from an offset were essential. The sheer volume of different files, paths and conventions/structures meant we could only analyse so far before we needed to begin. We fully expected to need to debug activities during the migration phase, and that these tools would really support us then.&lt;/p&gt;
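&lt;p&gt;A minimal sketch of what those recovery features look like in practice, assuming a simple paged worker loop (not our actual tooling):&lt;/p&gt;

```go
package main

import "fmt"

// migrateBatch sketches the resume/dry-run features described above:
// it processes one page of items from a given offset and returns the
// next offset, which a failed run can persist and resume from.
func migrateBatch(items []string, offset, limit int, dryRun bool) (next int) {
	if offset > len(items) {
		offset = len(items)
	}
	end := offset + limit
	if end > len(items) {
		end = len(items) // clamp the final, partial page
	}
	for _, item := range items[offset:end] {
		if dryRun {
			fmt.Println("would migrate:", item)
			continue
		}
		fmt.Println("migrated:", item)
	}
	return end
}

func main() {
	items := []string{"pkg-a", "pkg-b", "pkg-c", "pkg-d", "pkg-e"}
	next := migrateBatch(items, 0, 2, true) // dry-run the first page
	fmt.Println("resume from offset:", next)
}
```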

&lt;h2&gt;
  
  
  Migration tooling
&lt;/h2&gt;

&lt;p&gt;Large network bandwidth and scalable compute resources were at the top of our platform requirements. The choices taken here would trickle down into the tooling we created.&lt;/p&gt;

&lt;p&gt;As an organisation, we place a strong emphasis on Infrastructure as Code (IaC), so creating processes and workflows around &lt;a href="https://aws.amazon.com/systems-manager/"&gt;AWS Systems Manager&lt;/a&gt; using the AWS CDK was a natural fit. We created stacks that enabled us to scale our migration efforts on demand, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Migration stack&lt;/li&gt;
&lt;li&gt;EC2 Spot Fleet worker node stacks

&lt;ul&gt;
&lt;li&gt;Windows&lt;/li&gt;
&lt;li&gt;Linux&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;SSM Documents library stacks&lt;/li&gt;
&lt;li&gt;Custom command line interface tools

&lt;ul&gt;
&lt;li&gt;archive runner&lt;/li&gt;
&lt;li&gt;migrator&lt;/li&gt;
&lt;li&gt;migraterunner&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u4IAH2AV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-4/migrator.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u4IAH2AV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-4/migrator.png" alt="migrator cli" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Concurrency was an interesting consideration. The default assumption was to plan for it: surely running tasks concurrently across our cloud resources would accelerate the process? Frustratingly, this was not the case. In the details of AWS CodeArtifact were areas where the order in which packages are uploaded is important, particularly for npm and Maven. Furthermore, package managers have different ways of determining the latest version, but support for this in AWS CodeArtifact was not universal during our project. We also discovered places where Artifactory supported deprecated features, which we happily used, yet AWS CodeArtifact did not. This resulted in us migrating packages essentially serially, utilising cursor-like offset and limit features to optimise our efforts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Migration workers
&lt;/h3&gt;

&lt;p&gt;The migration workers performed the brunt of the migration tasks and were orchestrated through AWS Systems Manager. The (awesome) network throughput enabled by AWS meant we were able to migrate hundreds of GiB of data per hour. This was crucial, given that the order in which most packages were migrated was important.&lt;/p&gt;

&lt;p&gt;Using property sets that had been laboriously set by our engineering teams, the project was able to leverage the excellent Artifactory API as a catalogue/pseudo-state-machine through which workers could page whilst advancing the migration.&lt;/p&gt;

&lt;p&gt;In (often long-running) batches, workers would &lt;em&gt;pull&lt;/em&gt; packages to the worker’s local storage using the native package managers, then &lt;em&gt;push&lt;/em&gt; them to their new location using the documented AWS CodeArtifact/ECR/S3 tooling. This is where container images became tricky, because pushing and pulling containers must be performed on a host running the relevant operating system. Whilst the AWS SDK documents the ability to pull and push layers as mere blobs, the guidance was that this feature was intended for internal use only, which was enough to ward us off.&lt;/p&gt;
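&lt;p&gt;As an illustration of the paging described above, an Artifactory Query Language (AQL) query POSTed to the &lt;code&gt;/api/search/aql&lt;/code&gt; endpoint can walk a repository one page at a time in creation order. The query builder below is a sketch with an illustrative repository name, not our actual tooling:&lt;/p&gt;

```go
package main

import "fmt"

// aqlQuery builds one page of an Artifactory Query Language (AQL) search,
// sorted by creation time so that deployment ordering is preserved.
func aqlQuery(repo string, offset, limit int) string {
	return fmt.Sprintf(
		`items.find({"repo":%q}).sort({"$asc":["created"]}).offset(%d).limit(%d)`,
		repo, offset, limit)
}

func main() {
	// Each worker POSTs the query to Artifactory's /api/search/aql
	// endpoint, then advances the offset by the page size.
	fmt.Println(aqlQuery("npm-release", 0, 500))
}
```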

&lt;h3&gt;
  
  
  Command line interfaces (CLIs)
&lt;/h3&gt;

&lt;p&gt;A handful of CLI tools were authored in &lt;a href="https://go.dev/"&gt;Go&lt;/a&gt;. These tools performed the brunt of the work, applying validation and custom logic to transfer the different artefact types under the conditions of the project. AWS SSM documents were used to orchestrate the tools, transitioning the artefacts to the correct destination repositories via the appropriate worker-node fleet. Much credit goes to the excellent &lt;a href="https://github.com/spf13/cobra"&gt;spf13/cobra&lt;/a&gt; commander Go module used in these CLIs.&lt;/p&gt;
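&lt;p&gt;Our CLIs used spf13/cobra; as a dependency-free sketch of the same command-dispatch shape (the command names are illustrative, not our real ones):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"os"
)

// dispatch mimics, in miniature, the command tree a cobra-based CLI
// exposes: one subcommand per migration activity.
func dispatch(args []string) string {
	if len(args) == 0 {
		return "usage: migrator [plan|run|verify]"
	}
	switch args[0] {
	case "plan":
		return "dry-run: listing artefacts that would be migrated"
	case "run":
		return "migrating artefacts"
	case "verify":
		return "comparing source and destination counts"
	}
	return "unknown command: " + args[0]
}

func main() {
	fmt.Println(dispatch(os.Args[1:]))
}
```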

&lt;h2&gt;
  
  
  Migrating 1.5 million artefacts
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b99uwJaN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-4/xavi-cabrera-kn-UmDZQDjM-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b99uwJaN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-4/xavi-cabrera-kn-UmDZQDjM-unsplash.jpg" alt="lego image" width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Migration process
&lt;/h3&gt;

&lt;p&gt;We wrote our tooling to support a ticket-driven approach where teams could raise tickets to work with the implementation team and migrate their product packages in batches, working together to verify successful completion.&lt;/p&gt;

&lt;p&gt;This was important in allowing us to work through the migration progressively rather than staging a big-bang event on an agreed date. Such an approach would simply have been unworkable, requiring far too much alignment (thousands of pipelines updated in anticipation). Finding a date that would perfectly fit hundreds of teams would be virtually impossible and, crucially, what rollback options would we have? We decided the progressive approach would give us greater flexibility and the reassurance to engage team by team, product by product, while checking off milestones on the route to completion.&lt;/p&gt;

&lt;p&gt;As things panned out, we ended up with a large backlog of migration activities, ranging from a few packages to entire release repositories containing hundreds of thousands of artefacts and terabytes of data. The backlog was continually refined and tickets were resolved step by step with the owning teams, who had overall responsibility for verifying the migration objective had been met before marking the work as complete.&lt;/p&gt;

&lt;p&gt;Resolving all migration tickets in the backlog would culminate in disabling write access for all standard users on a key, predetermined date, then, after a reasonable holding period, removing access entirely for all users except system administrators. Any issues experienced along the way would be addressed in isolation, verified with the team, with updates made to our tooling as required.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;

&lt;p&gt;This was tricky. Reporting on the AWS CodeArtifact side is currently poor (as of September 2022): CloudWatch metrics are entirely missing, and it was not at all straightforward to aggregate key metrics and statistics in the way you can with Artifactory.&lt;/p&gt;

&lt;p&gt;We had to get creative, combining metrics queried from the Artifactory REST API with internal counters residing within our tooling to cross-reference our progress. By deploying webhooks and using DynamoDB to record when batches of work were undertaken, we could replay activities from a DateTime offset to determine any deltas where pipelines had not been fully updated and artefacts were still being deployed to Artifactory after migration of those artefacts had begun. This worked out well and, coupled with a reduction of write-access permissions, enabled us to verify the process to a point in time whilst giving us another window to replay the same batches for a much smaller delta of packages. Significantly, as we advanced through the project, confidence grew as we replayed actions over and over with consistent results. Testing our tooling was tricky, so the ability to utilise dry-run mode was invaluable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next up
&lt;/h2&gt;

&lt;p&gt;The migration has now been completed.&lt;/p&gt;

&lt;p&gt;Next up, we’ll take a look at whether we’ve reached our goal.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>artifactory</category>
      <category>codeartifact</category>
      <category>ecr</category>
    </item>
    <item>
      <title>How we moved from Artifactory and saved $200k p.a. Part 3 of 5 - The future is Advanced Artefacts</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Wed, 28 Sep 2022 15:36:42 +0000</pubDate>
      <link>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-3-of-5-the-future-is-advanced-artefacts-4d7j</link>
      <guid>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-3-of-5-the-future-is-advanced-artefacts-4d7j</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Welcome back to Part 3 of our 5-part series on 'How we moved from Artifactory and saved $200k p.a'.&lt;/p&gt;

&lt;p&gt;If you are just joining we recommend jumping back to the beginning and starting from there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Approach
&lt;/h2&gt;

&lt;p&gt;Having identified that we wanted to create a structured service we had to determine the best way to approach it.&lt;/p&gt;

&lt;p&gt;Our earlier analysis helped us identify the artefact types that we needed to support. Yet a remaining challenge was to identify how to support these and empower our development teams across the technologies and tools we use on a daily basis.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;The following architecture gives a high-level overview of the service components.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mnS8I1J_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-3/aa-architecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mnS8I1J_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-3/aa-architecture.png" alt="aa-architecture.png" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Repository convention
&lt;/h3&gt;

&lt;p&gt;Something that became apparent was that our Artifactory configuration was a disordered muddle; it had never been implemented in a controlled or consistent way. This has since provided us with a harsh lesson in paying particular attention to the rollout of such platform tooling.&lt;/p&gt;

&lt;p&gt;Determined to avoid this at all costs, we decided to build naming conventions for each of our artefact types into the service itself. The convention would be implicit, removing ambiguity and personal preference from any decisions.&lt;/p&gt;

&lt;p&gt;Our products commonly have both &lt;strong&gt;development&lt;/strong&gt; and &lt;strong&gt;production&lt;/strong&gt; environments so it was decided that the service should mirror this and have just two conforming types of repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development&lt;/strong&gt; repositories would be where teams push build artefacts continually via Continuous Integration (CI). Then, when the appropriate levels of testing had been passed, the artefacts could be promoted into the corresponding &lt;strong&gt;release&lt;/strong&gt; repository.&lt;/p&gt;

&lt;p&gt;A benefit of this approach was a clear separation between development and release dependencies, allowing us to look at implementing automated housekeeping rules in the future. We do not need hundreds of development packages, so why bother keeping them?&lt;/p&gt;

&lt;p&gt;This helps with our goals of enforcing convention and consistency, which in turn makes it easier to automate and roll out changes in the future.&lt;/p&gt;
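&lt;p&gt;A sketch of what an implicit naming convention can look like; the exact scheme below is illustrative, not our real one:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// repoName derives a conforming repository name from a product, package
// type and stage. Only "dev" and "release" stages exist, mirroring the
// two repository types described above.
func repoName(product, pkgType, stage string) (string, error) {
	if stage != "dev" {
		if stage != "release" {
			return "", fmt.Errorf("stage must be dev or release, got %q", stage)
		}
	}
	return strings.ToLower(product + "-" + pkgType + "-" + stage), nil
}

func main() {
	// Product and package type are illustrative.
	name, err := repoName("Payroll", "npm", "dev")
	if err != nil {
		panic(err)
	}
	fmt.Println(name)
}
```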

&lt;h3&gt;
  
  
  Infrastructure
&lt;/h3&gt;

&lt;p&gt;We needed to start building out our infrastructure to support the service.&lt;/p&gt;

&lt;p&gt;As we were utilising several AWS services, using &lt;a href="https://aws.amazon.com/cdk/"&gt;AWS CDK&lt;/a&gt; was the obvious choice. It allowed us to build the service quickly and was also easy to change when required.&lt;/p&gt;

&lt;p&gt;Going back to enforcing convention and consistency we leveraged &lt;a href="https://aws.amazon.com/servicecatalog/"&gt;AWS Service Catalog&lt;/a&gt; with several custom templates to help us create new repositories.&lt;/p&gt;

&lt;h2&gt;
  
  
  Delivery
&lt;/h2&gt;

&lt;p&gt;Providing a service that worked successfully meant that we had to look at how we delivered our software and consider what an exemplary software lifecycle looks like, as well as the platforms we needed to support.&lt;/p&gt;

&lt;p&gt;The following key areas were identified:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local Development&lt;/li&gt;
&lt;li&gt;Application Configuration&lt;/li&gt;
&lt;li&gt;Authorisation&lt;/li&gt;
&lt;li&gt;Continuous Integration (CI)

&lt;ul&gt;
&lt;li&gt;GitHub Actions&lt;/li&gt;
&lt;li&gt;Jenkins&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Continuous Delivery (CD)

&lt;ul&gt;
&lt;li&gt;Harness&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We were also aware of some products using other technologies, such as &lt;a href="https://azure.microsoft.com/en-us/products/devops/"&gt;Azure DevOps&lt;/a&gt; and &lt;a href="https://www.jetbrains.com/teamcity/"&gt;TeamCity&lt;/a&gt;, that we would not directly support but still had to consider in terms of how they could access and use the service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tooling
&lt;/h2&gt;

&lt;p&gt;As developers, we are used to tools that make our day-to-day work easier. Look at any good service and you will see that it typically offers a range of tools that make interacting with it easy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Command Line Interface (CLI)
&lt;/h3&gt;

&lt;p&gt;We determined that creating a CLI would provide a centralised entry point for all of our delivery mechanisms, flexible enough to work even for those we didn’t directly support.&lt;/p&gt;

&lt;p&gt;The CLI had to support multiple operating systems (Windows, Linux &amp;amp; macOS) and be easy to update and use as required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UJhsSiR5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-3/aa-cli.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UJhsSiR5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-3/aa-cli.png" alt="advanced artefacts cli" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We started analysing what functionality the CLI would require and quickly identified potential overlaps with native package manager and Docker commands. There is little point in writing, maintaining and supporting tooling that mirrors these; everyone knows how they work, and they are industry-standard tools.&lt;/p&gt;

&lt;p&gt;It was decided that our CLI would complement these: it would bridge the gaps and provide the functionality needed for our service to work.&lt;/p&gt;

&lt;p&gt;We determined our key functional requirements were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authorisation&lt;/li&gt;
&lt;li&gt;Packages - get and promote&lt;/li&gt;
&lt;li&gt;Generic Artefacts - get, list, publish and promote&lt;/li&gt;
&lt;li&gt;Container Images - promote&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The most important feature of the CLI was authorisation into the service: every developer and delivery mechanism must authorise before they can use it.&lt;/p&gt;

&lt;p&gt;The CLI had to make authorisation easy and limit the impact on our engineering teams on a day-to-day basis.&lt;/p&gt;

&lt;p&gt;We looked at other open-source CLIs for inspiration and took the time to understand how this could be done effectively for multiple operating systems, shells and from a user or service perspective.&lt;/p&gt;

&lt;p&gt;In the end, we went with a multi-pronged approach and created mechanisms that allowed authorisation in several ways i.e. user, role and service level.&lt;/p&gt;

&lt;p&gt;Security is of the utmost importance, and having authorisation tokens written to files was an absolute no-go. By default, everything would be applied to the running shell process, used there, and thrown away when finished. This was implemented across several different shells, such as bash, PowerShell and the Windows command prompt.&lt;/p&gt;
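&lt;p&gt;A sketch of that per-shell approach: the CLI emits an assignment for the calling shell to evaluate, so the token only ever lives in that process. The shell list and variable name are illustrative:&lt;/p&gt;

```go
package main

import "fmt"

// exportStatement renders an environment-variable assignment for the
// calling shell, so a token is applied to the running process and never
// written to disk.
func exportStatement(shell, name, value string) string {
	switch shell {
	case "bash":
		return fmt.Sprintf("export %s=%q", name, value)
	case "powershell":
		return fmt.Sprintf("$Env:%s = %q", name, value)
	case "cmd":
		return fmt.Sprintf("set %s=%s", name, value)
	}
	return ""
}

func main() {
	// The variable name and token value are illustrative.
	fmt.Println(exportStatement("bash", "ARTEFACTS_TOKEN", "abc123"))
}
```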

&lt;p&gt;With our authorisation mechanism now in place, working and flexible enough to handle different operating systems and shells, the other features were much more straightforward.&lt;/p&gt;

&lt;p&gt;Generic Artefacts proved the most labour-intensive, purely because we had to implement an entire set of commands to allow complete artefact management.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other
&lt;/h3&gt;

&lt;p&gt;With the CLI now in place we used it to power any other tooling that would help accelerate our development teams.&lt;/p&gt;

&lt;p&gt;Our core Continuous Integration (CI) platform is &lt;a href="https://github.com/features/actions"&gt;GitHub Actions&lt;/a&gt;. We decided it was worth the effort to create a custom action that automatically downloaded the latest CLI, installed it and performed the required authorisation. This meant that teams could drop that action straight into their workflows and it would just work. Minimal change, maximum satisfaction.&lt;/p&gt;
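&lt;p&gt;In a team's workflow this looks something like the following (the action name and step layout are illustrative, not our real internal action):&lt;/p&gt;

```yaml
# Hypothetical workflow fragment; the auth action shown here is a
# placeholder for the custom action described above.
steps:
  - uses: actions/checkout@v3
  - name: Authorise with Advanced Artefacts
    uses: our-org/advanced-artefacts-auth@v1
  - name: Publish package
    run: npm publish
```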

&lt;p&gt;Next, we looked at &lt;a href="https://www.jenkins.io/"&gt;Jenkins&lt;/a&gt;. Although we are moving away from it, some products still use it, so we spent a bit of time putting together example pipelines showing how the CLI could be used and included them in our documentation for teams to follow.&lt;/p&gt;

&lt;p&gt;With our Continuous Integration (CI) tools covered, we turned to our Continuous Delivery (CD) ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://harness.io/"&gt;Harness&lt;/a&gt; is our Continuous Delivery (CD) tool of choice. It provides a flexible template engine, that we were able to utilise to create templates that could be reused across our teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next up
&lt;/h2&gt;

&lt;p&gt;With our new Advanced Artefacts service in place, we were ready to get on with the actual migration from Artifactory.&lt;/p&gt;

&lt;p&gt;Next up, we’ll walk through how we built our migration tooling, defined our process and performed the migration.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>artifactory</category>
      <category>codeartifact</category>
      <category>ecr</category>
    </item>
    <item>
      <title>How we moved from Artifactory and saved $200k p.a. Part 2 of 5 - Design</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Wed, 28 Sep 2022 15:33:26 +0000</pubDate>
      <link>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-2-of-5-design-3852</link>
      <guid>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-2-of-5-design-3852</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Welcome back to Part 2 of our 5-part series on 'How we moved from Artifactory and saved $200k p.a'.&lt;/p&gt;

&lt;p&gt;If you are just joining we recommend jumping back to the beginning and starting from there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decision making
&lt;/h2&gt;

&lt;p&gt;The nature of larger projects such as these requires plenty of discussion and decision-making around temporary and permanent processes. We had lots of data to migrate and we needed to be efficient in our decision-making process. We decided upon using &lt;a href="https://adr.github.io/"&gt;Architecture Decision Records&lt;/a&gt; to log the key implementation decisions which significantly helped us deliver consistency throughout our support and guidance.&lt;/p&gt;

&lt;p&gt;As it turned out, this method of logging was not onerous, and we ended up with records for around a dozen key strategic choices. One example was the decision to use a spot fleet of EC2 workers to perform the migration rather than something like AWS Batch or ECS. At first glance we expected to go with AWS Batch or ECS, but we had requirements to move resources such as Windows container images, and being able to easily recover the decision steps proved invaluable when we later built tooling to support this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Workshopping
&lt;/h2&gt;

&lt;p&gt;Workshopping commenced on the 10th of June 2022 and we had until the 4th of July 2022 to perform the required analysis, design and implement our solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FsSVII3C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-2/kvalifik-5Q07sS54D0Q-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FsSVII3C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-2/kvalifik-5Q07sS54D0Q-unsplash.jpg" alt="workshop image" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Analysis of requirements
&lt;/h3&gt;

&lt;p&gt;One of the first items of business was to determine which artefact types were essential to support, which would be unsupported, and any transitions from unsupported types to corresponding supported ones. Then we needed to determine the options to migrate to, whilst fulfilling the necessary obligations to supported packages and platforms.&lt;/p&gt;

&lt;p&gt;Over the past few years, engineering at Advanced has been consolidating the toolchain and programming languages adopted by default. This was in no way intended to dissuade reviews of new or emerging options, but rather to add consistency to those in use and bring a larger collective intelligence to engineering as a whole.&lt;/p&gt;

&lt;p&gt;From analysing our usage within Artifactory we settled upon support for &lt;a href="https://www.npmjs.com/"&gt;npm&lt;/a&gt;, &lt;a href="https://www.nuget.org/"&gt;NuGet&lt;/a&gt;, generic artefacts (zip, exe, dll etc), &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; images and &lt;a href="https://maven.apache.org/"&gt;Maven&lt;/a&gt;. We quickly determined that our biggest challenge would be Docker images, accounting for greater than 50% of our consumed storage, with several repositories holding more than 1 TB of image data. Latterly, Maven would also prove challenging.&lt;/p&gt;

&lt;p&gt;From this analysis, we were acutely (and financially) aware that we were also wastefully holding onto obsolete build artefacts. We decided to use this as an opportunity to leverage our engineering teams to review and select the versions of artefacts that our products needed to retain. This would help reduce the scale of the migration ahead somewhat and perform some well-overdue housekeeping. After all, there is no point in migrating and paying for artefacts that are no longer required.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution analysis
&lt;/h3&gt;

&lt;p&gt;Having gathered an understanding of what needed support and delivery, we had to identify where we were going to migrate to.&lt;/p&gt;

&lt;p&gt;AWS is our preferred Cloud Provider and platform, as well as a key technical partner, so it was a natural choice to look at their services for our solution. From investigation, we found that &lt;a href="https://aws.amazon.com/codeartifact/"&gt;AWS CodeArtifact&lt;/a&gt; was a decent fit for supporting npm, NuGet, Maven and Python (if required in the future); however, it was not a complete match for all our requirements. Favourably, &lt;a href="https://aws.amazon.com/s3/"&gt;S3&lt;/a&gt; is an excellent fit for generic artefacts, and &lt;a href="https://aws.amazon.com/ecr/"&gt;Elastic Container Registry (ECR)&lt;/a&gt; is perfectly appropriate for Docker images (even leading us to correct misunderstandings between images and repositories internally!).&lt;/p&gt;

&lt;p&gt;We now had the artefact types we needed to support at a high level and where they were going to migrate to.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution design
&lt;/h3&gt;

&lt;p&gt;Now that we firmly knew our direction, we needed to decide how we would get there.&lt;/p&gt;

&lt;p&gt;Initially, we considered publishing guidance around best practices for various AWS services to satisfy our artefact requirements but ultimately that was deemed unmaintainable.&lt;/p&gt;

&lt;p&gt;We wanted to finish the project with our artefact management strategy in a much better position than it started. Significant to us was ensuring we had the ability to define convention, consistency, clear guidance and expectations. We aimed to provide a maintainable solution that continues to build upon the best practices as it matures.&lt;/p&gt;

&lt;p&gt;This led us to agree that it was important for the culmination of the migration to result in a new, custom service that any engineering team within Advanced could consume. &lt;strong&gt;&lt;em&gt;Advanced Artefacts&lt;/em&gt;&lt;/strong&gt; was born.&lt;/p&gt;

&lt;p&gt;We now had two streams we needed to complete within the project:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Advanced Artefacts service&lt;/li&gt;
&lt;li&gt;The Migration&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We will get into the detail around these in future posts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Support channels
&lt;/h2&gt;

&lt;p&gt;As mentioned previously, Advanced has over seven hundred engineers across the globe working on many projects, and we needed to identify a strategy to support them in the best way possible.&lt;/p&gt;

&lt;p&gt;We came up with the following three-pronged approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Documentation
&lt;/h3&gt;

&lt;p&gt;We decided early that we needed to document all parts of the project to allow our engineering teams to self-serve where possible. Without good documentation, there is no way a team of four can support over seven hundred developers.&lt;/p&gt;

&lt;p&gt;We focused on providing getting-started documentation that walked teams through the process end to end, then providing the appropriate reference documentation for each step.&lt;/p&gt;

&lt;p&gt;This covered items such as the support channels available, each team's responsibilities, the migration preparation and also information on how to use our new Advanced Artefacts service both locally and from our CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;A great deal of time was spent poring over this; it was, however, crucial to the success of the project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clinics
&lt;/h3&gt;

&lt;p&gt;A technique that has worked fairly well for our organisation is the idea of online clinics. We held clinics twice a week for the duration of the project.&lt;/p&gt;

&lt;p&gt;We used the first two clinics to kick off the project with our engineering teams. This helped us set timelines around key milestones and clear expectations on what was being delivered.&lt;/p&gt;

&lt;p&gt;After that, they were reserved for anyone to drop into, receive updates and ask for assistance directly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Microsoft Teams channel
&lt;/h3&gt;

&lt;p&gt;Microsoft Teams is our internal communication tool, therefore, we created a dedicated channel that we would use for communicating any important updates to the engineering teams.&lt;/p&gt;

&lt;p&gt;They could also ask us questions or get further clarification as required outside clinic sessions. The artefacts team committed to replying as soon as possible, ensuring teams were unblocked and able to progress quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next up
&lt;/h2&gt;

&lt;p&gt;Now we have our design in place we need to start implementing it.&lt;/p&gt;

&lt;p&gt;Next up, we will cover the creation of the Advanced Artefacts service.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>artifactory</category>
      <category>codeartifact</category>
      <category>ecr</category>
    </item>
    <item>
      <title>How we moved from Artifactory and saved $200k p.a. Part 1 of 5 - Planning</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Wed, 28 Sep 2022 15:31:35 +0000</pubDate>
      <link>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-1-of-5-planning-i4c</link>
      <guid>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-1-of-5-planning-i4c</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A 5-part blog post by Alex Harrington and Paul Mowat covering the migration of 25 TB of artefacts from JFrog Artifactory to a custom solution we created for &lt;a href="https://www.oneadvanced.com"&gt;Advanced&lt;/a&gt;, achieving significant cost efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  A journey
&lt;/h2&gt;

&lt;p&gt;Early in 2022, we decided that Artifactory had become an expensive option for us. Whilst a good product, it wasn't without difficulties surrounding our subscription: you are either all in or all out with the JFrog platform, as you can only subscribe to every component, which is not desirable at the enterprise level.&lt;/p&gt;

&lt;p&gt;In retrospect, we came to realise there were significant portions of the JFrog platform (&lt;a href="https://jfrog.com/xray/"&gt;Xray&lt;/a&gt; for example) from which we were not getting any real value, and this made the overall service costly. Moreover, we were serious about doubling down on the security of our software supply chain and researching a wider, custom array of best-in-class solutions.&lt;/p&gt;

&lt;p&gt;Still, this was no easy decision as we were a large user of Artifactory, with over 1.5 million artefacts published and 25 TB of data storage consumed. Many of our CI/CD pipelines and developer settings were configured to use Artifactory, so the scale of the task was sizeable. Nevertheless, we proceeded to assess our options, and planning the task(s) at hand was critical.&lt;/p&gt;

&lt;h2&gt;
  
  
  And so it begins
&lt;/h2&gt;

&lt;p&gt;Let’s start by declaring our initial goal:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To migrate all requested artefacts from Artifactory without losing any, writing custom tooling as necessary&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The possibility of migrating away from Artifactory was first aired at the beginning of January.&lt;/p&gt;

&lt;p&gt;Artifactory comes under a category of services that could be described as quite "sticky". A SaaS solution where the impact of migrating away would reach far and wide within many an organisation.&lt;/p&gt;

&lt;p&gt;Advanced has over 150 active product suites covering different market areas. Some examples are delivering care to 40 million people throughout the UK, sending 10 million sporting fans through turnstiles and supporting 1.2 billion passengers to arrive at their destinations on time. Our solutions are engineered by hundreds of colleagues from across the globe, built using multiple technologies, living in more than 2600 GitHub repositories and powered by thousands of CI/CD pipelines, all deploying to numerous cloud/hybrid-cloud platforms. That is before considering backup, disaster recovery, &lt;a href="https://www.ses-escrow.co.uk/case-studies/nhs-case-study"&gt;escrow&lt;/a&gt; and many other internal and market-driven requirements.&lt;/p&gt;

&lt;p&gt;We needed to plan, but plan in a way that would allow us to scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Team
&lt;/h2&gt;

&lt;p&gt;We formed a small dedicated artefacts team with four members that had to support more than seven hundred engineers through the process. The artefacts team initially needed to design and implement the migration &lt;em&gt;machine&lt;/em&gt;, followed by educating and guiding our engineering teams through the project. This had to be as efficient as possible, in order for it to scale.&lt;/p&gt;

&lt;p&gt;The artefacts team structure was as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 x Principal DevOps Architect (Paul Mowat)&lt;/li&gt;
&lt;li&gt;1 x Principal DevOps Engineer (Alex Harrington)&lt;/li&gt;
&lt;li&gt;1 x Senior DevOps Engineer (Karthik Holikatti)&lt;/li&gt;
&lt;li&gt;1 x DevOps Engineer (Likhith Kotian)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Milestones
&lt;/h2&gt;

&lt;p&gt;Our next task was looking at the wider project and breaking it down into key milestones so we could share these. This was a critically important area. Accuracy and clarity in our communication were paramount.&lt;/p&gt;

&lt;p&gt;We started by setting a hard, immovable deadline. Taking heed of hard-learned lessons from previous sliding deadlines that run and run, we set this date in stone and declared it at the outset of our engagement with the wider engineering community.&lt;/p&gt;

&lt;p&gt;We felt this offered a powerful message, which clearly illustrated that urgent engagement was necessary from all sides.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F7KXgv-F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-1/luuk-wouters-F_zec7P_OwA-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F7KXgv-F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-1/luuk-wouters-F_zec7P_OwA-unsplash.jpg" alt="lighthouse image" width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key milestones were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;10-06-2022 - Project Kick-off - First implementation team workshop held to begin formulating the plan&lt;/li&gt;
&lt;li&gt;04-07-2022 - Deadline for our design and implementation complete ready for a wider rollout&lt;/li&gt;
&lt;li&gt;05-07-2022 - First Advanced Artefacts support clinic held with over 100 participants&lt;/li&gt;
&lt;li&gt;06-07-2022 - Migration Period Start&lt;/li&gt;
&lt;li&gt;19-08-2022 - Migration Period End&lt;/li&gt;
&lt;li&gt;22-08-2022 - Engineering teams would lose access to Artifactory&lt;/li&gt;
&lt;li&gt;31-08-2022 - Project End - Our Artifactory subscription would end&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next up
&lt;/h2&gt;

&lt;p&gt;We have our team and plan.&lt;/p&gt;

&lt;p&gt;Next up, we get to work designing our solution.&lt;/p&gt;

</description>
      <category>artifactory</category>
      <category>aws</category>
      <category>codeartifact</category>
      <category>ecr</category>
    </item>
    <item>
      <title>CloudWatch RUM with Cognito Identity Pool for SAM/Cloudformation</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Thu, 03 Feb 2022 17:53:52 +0000</pubDate>
      <link>https://dev.to/oneadvanced/cloudwatch-rum-with-cognito-identity-pool-for-samcloudformation-42pc</link>
      <guid>https://dev.to/oneadvanced/cloudwatch-rum-with-cognito-identity-pool-for-samcloudformation-42pc</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;I was asked to look into Amazon CloudWatch RUM and how to implement it for some of our applications at &lt;a href="https://www.oneadvanced.com" rel="noopener noreferrer"&gt;Advanced&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;CloudWatch RUM is a service that was released at AWS re:Invent 2021. It allows you to configure your web application to perform real-user monitoring, and it can be configured to collect additional data such as performance metrics, errors and HTTP requests. It also works with AWS X-Ray, which allows it to track client-to-server traces. Overall it looks like a helpful tool that will aid analysis and debugging.&lt;/p&gt;

&lt;p&gt;The cost is $1 for every 100k events collected. Every data item collected by the RUM web client is considered an event, e.g. a page view, JavaScript error or HTTP error. See &lt;a href="https://aws.amazon.com/cloudwatch/pricing/" rel="noopener noreferrer"&gt;AWS CloudWatch Pricing&lt;/a&gt; for detailed information.&lt;/p&gt;
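&lt;p&gt;As a rough worked example (the traffic numbers below are made up for illustration), the event pricing works out like this:&lt;/p&gt;

```javascript
// Estimate a monthly CloudWatch RUM bill at $1 per 100,000 collected events.
// The session count, events per session and sample rate are illustrative
// assumptions, not real figures.
const PRICE_PER_100K_EVENTS = 1.0;

function estimateMonthlyCost(sessionsPerMonth, eventsPerSession, sampleRate) {
  const billedEvents = sessionsPerMonth * eventsPerSession * sampleRate;
  return (billedEvents / 100000) * PRICE_PER_100K_EVENTS;
}

// 500k sessions a month, ~20 events each, sampling 10% of sessions
// (matching the template's SessionSampleRate of 0.1):
console.log(estimateMonthlyCost(500000, 20, 0.1)); // 10 (i.e. $10/month)
```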

&lt;p&gt;This guide will take you through deploying CloudWatch RUM and hooking it into your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;To follow along, you will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Account&lt;/li&gt;
&lt;li&gt;AWS SAM CLI&lt;/li&gt;
&lt;li&gt;Web application you want to do real-user monitoring on&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  CloudWatch RUM &amp;amp; Cognito Deployment
&lt;/h2&gt;

&lt;p&gt;The SAM/Cloudformation template will deploy the CloudWatch RUM application monitor along with a Cognito Identity Pool. The Identity Pool is configured to allow unauthenticated access, which is required for the CloudWatch RUM web client to send events back to the CloudWatch RUM service.&lt;/p&gt;

&lt;p&gt;To deploy, follow the steps below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Copy the below template into a file called &lt;code&gt;template.yml&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;AWSTemplateFormatVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2010-09-09"&lt;/span&gt;
&lt;span class="na"&gt;Transform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Serverless-2016-10-31&lt;/span&gt;
&lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;Setup Cloudwatch RUM using Cognito IdentityPool for specified application and domain&lt;/span&gt;
&lt;span class="na"&gt;Parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ApplicationName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;The name of the service&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;String&lt;/span&gt;
  &lt;span class="na"&gt;ApplicationDomain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;The top-level internet domain name&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;String&lt;/span&gt;

&lt;span class="na"&gt;Resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;CWRumIdentityPool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Cognito::IdentityPool&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="na"&gt;IdentityPoolName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ApplicationName&lt;/span&gt;
      &lt;span class="na"&gt;AllowUnauthenticatedIdentities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="na"&gt;CWRumIdentityPoolRoles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Cognito::IdentityPoolRoleAttachment&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;IdentityPoolId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;CWRumIdentityPool&lt;/span&gt;
      &lt;span class="na"&gt;Roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;unauthenticated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;CWRumClientRole.Arn&lt;/span&gt;

  &lt;span class="na"&gt;CWRumClientRole&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::IAM::Role&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;AssumeRolePolicyDocument&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;Version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2012-10-17&lt;/span&gt;
        &lt;span class="na"&gt;Statement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow&lt;/span&gt;
            &lt;span class="na"&gt;Principal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;Federated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cognito-identity.amazonaws.com&lt;/span&gt;
            &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sts:AssumeRoleWithWebIdentity&lt;/span&gt;
            &lt;span class="na"&gt;Condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;StringEquals&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;cognito-identity.amazonaws.com:aud&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;CWRumIdentityPool&lt;/span&gt;
              &lt;span class="na"&gt;ForAnyValue:StringLike&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
                &lt;span class="na"&gt;cognito-identity.amazonaws.com:amr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unauthenticated&lt;/span&gt;
      &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Unauthenticated Role for AWS RUM Clients&lt;/span&gt;
      &lt;span class="na"&gt;Path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
      &lt;span class="na"&gt;Policies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;PolicyName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWSRumClientPut&lt;/span&gt;
          &lt;span class="na"&gt;PolicyDocument&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;Version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2012-10-17"&lt;/span&gt;
            &lt;span class="na"&gt;Statement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow&lt;/span&gt;
                &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rum:PutRumEvents"&lt;/span&gt;
                &lt;span class="na"&gt;Resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s"&gt;arn:aws:rum:${AWS::Region}:${AWS::AccountId}:appmonitor/${ApplicationName}&lt;/span&gt;

  &lt;span class="na"&gt;CWRumAppMonitor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::RUM::AppMonitor&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;AppMonitorConfiguration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;AllowCookies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;EnableXRay&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;IdentityPoolId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;CWRumIdentityPool&lt;/span&gt;
        &lt;span class="na"&gt;GuestRoleArn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;CWRumClientRole.Arn&lt;/span&gt;
        &lt;span class="na"&gt;SessionSampleRate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.1&lt;/span&gt;
        &lt;span class="na"&gt;Telemetries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;errors&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;performance&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
      &lt;span class="na"&gt;Domain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ApplicationDomain&lt;/span&gt;
      &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ApplicationName&lt;/span&gt;

&lt;span class="na"&gt;Outputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;CWRumAppMonitor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;The Cloud Watch RUM App Monitor Name&lt;/span&gt;
    &lt;span class="na"&gt;Value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;CWRumAppMonitor&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;Open up the command line and navigate to the folder you saved the above &lt;code&gt;template.yml&lt;/code&gt; into&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From the command line, use AWS SAM to deploy the &lt;code&gt;template.yml&lt;/code&gt; file&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam deploy --guided
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;During the prompts&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Enter a stack name&lt;/li&gt;
&lt;li&gt;Enter the desired AWS Region&lt;/li&gt;
&lt;li&gt;Enter the Application Name&lt;/li&gt;
&lt;li&gt;Enter the Application Domain&lt;/li&gt;
&lt;li&gt;Allow SAM CLI to create IAM roles with the required permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the deployment has finished, you will have a CloudWatch RUM application monitor.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Configuration
&lt;/h2&gt;

&lt;p&gt;The next step is to configure your application to connect to the CloudWatch RUM application monitor.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get JavaScript Snippet from AWS Console
&lt;/h3&gt;

&lt;p&gt;The JavaScript snippet is specific to the CloudWatch RUM application monitor and is used by your application to inject the CloudWatch RUM web client, which captures and sends events back to the service.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to the AWS Console&lt;/li&gt;
&lt;li&gt;Navigate to CloudWatch RUM&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fcloudwatch-rum-cognito-sam-cloudformation%2Faws_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fcloudwatch-rum-cognito-sam-cloudformation%2Faws_1.png" alt="cloudwatch rum navigation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Identify your application monitor and click the &lt;code&gt;View JavaScript snippet&lt;/code&gt; link to show the snippet&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fcloudwatch-rum-cognito-sam-cloudformation%2Faws_2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fcloudwatch-rum-cognito-sam-cloudformation%2Faws_2.png" alt="cloudwatch rum javascript snippet"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Click &lt;code&gt;Copy to clipboard&lt;/code&gt; to copy the Javascript snippet&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Modify Snippet Configuration
&lt;/h3&gt;

&lt;p&gt;If required, you can modify the code snippet to configure the CloudWatch RUM web client with additional options. &lt;/p&gt;

&lt;p&gt;See the &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-RUM-modify-snippet.html" rel="noopener noreferrer"&gt;CloudWatch RUM Modify Snippet&lt;/a&gt; documentation for further information on the additional options.&lt;/p&gt;

&lt;p&gt;From those options, I've decided to configure CloudWatch RUM as follows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable cookies to track user and session details&lt;/li&gt;
&lt;li&gt;Enable X-Ray traces&lt;/li&gt;
&lt;li&gt;Sample all sessions&lt;/li&gt;
&lt;li&gt;Collect telemetry events for errors, performance and HTTP requests

&lt;ul&gt;
&lt;li&gt;Add the X-Amzn-Trace-Id header to HTTP requests to allow client-to-server tracing&lt;/li&gt;
&lt;li&gt;Record all requests, not just errors&lt;/li&gt;
&lt;li&gt;Only track HTTP requests to my domain, using absolute and relative paths&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
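The `X-Amzn-Trace-Id` option above attaches the standard X-Ray trace header to outgoing requests, which is what ties client-side events to server-side traces. As a rough sketch (the header values here are made up for illustration), the header is a simple semicolon-separated `key=value` list:

```javascript
// Minimal parser for the X-Ray trace header format used by
// X-Amzn-Trace-Id (illustrative values only).
function parseTraceHeader(header) {
  const fields = {};
  for (const part of header.split(';')) {
    const [key, value] = part.split('=');
    fields[key] = value;
  }
  return fields;
}

const example = 'Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1';
const parsed = parseTraceHeader(example);
console.log(parsed.Root);    // trace id shared between client and server segments
console.log(parsed.Sampled); // '1' means the request was sampled
```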

&lt;p&gt;The snippet below shows this configuration in action.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The body of the snippet's function has been omitted for readability.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;script&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;u&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;){...})(&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cwr&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;00000000-0000-0000-0000-000000000000&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;1.0.0&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-west-2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://client.rum.us-east-1.amazonaws.com/1.0.2/cwr.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;allowCookies&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;enableXRay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;sessionSampleRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;guestRoleArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;arn:aws:iam::000000000000:role/RUM-Monitor-us-west-2-000000000000-00xx-Unauth&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;identityPoolId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-west-2:00000000-0000-0000-0000-000000000000&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://dataplane.rum.us-east-1.amazonaws.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;telemetries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;errors&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;performance&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;addXRayTraceIdHeader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;recordAllRequests&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;urlsToInclude&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
              &lt;span class="sr"&gt;/^https:&lt;/span&gt;&lt;span class="se"&gt;\/\/&lt;/span&gt;&lt;span class="sr"&gt;www&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="sr"&gt;paulmowat&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="sr"&gt;co&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="sr"&gt;uk&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;.*/&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
              &lt;span class="sr"&gt;/^&lt;/span&gt;&lt;span class="se"&gt;(?!&lt;/span&gt;&lt;span class="sr"&gt;www&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="sr"&gt;|&lt;/span&gt;&lt;span class="se"&gt;(?:&lt;/span&gt;&lt;span class="sr"&gt;http|ftp&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="sr"&gt;s&lt;/span&gt;&lt;span class="se"&gt;?&lt;/span&gt;&lt;span class="sr"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\/\/&lt;/span&gt;&lt;span class="sr"&gt;|&lt;/span&gt;&lt;span class="se"&gt;[&lt;/span&gt;&lt;span class="sr"&gt;A-Za-z&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="sr"&gt;|&lt;/span&gt;&lt;span class="se"&gt;\/\/)&lt;/span&gt;&lt;span class="sr"&gt;.*/&lt;/span&gt;
            &lt;span class="p"&gt;]&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/script&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
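One way to sanity-check the `urlsToInclude` patterns is to run the two regular expressions from the snippet directly against sample URLs (the URLs below are illustrative):

```javascript
// The two urlsToInclude patterns from my configuration:
// absolute URLs on my domain, plus relative paths (anything that is not
// an absolute URL or a drive/UNC-style path).
const absolute = /^https:\/\/www\.paulmowat\.co\.uk\/.*/;
const relative = /^(?!www\.|(?:http|ftp)s?:\/\/|[A-Za-z]:\\|\/\/).*/;

const samples = [
  'https://www.paulmowat.co.uk/api/posts', // absolute, my domain  -> tracked
  '/api/posts',                            // relative             -> tracked
  'https://third-party.example.com/x',     // absolute, elsewhere  -> ignored
];

for (const url of samples) {
  const tracked = absolute.test(url) || relative.test(url);
  console.log(`${url} -> ${tracked ? 'tracked' : 'ignored'}`);
}
```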



&lt;h3&gt;
  
  
  Insert &amp;amp; Deploy Snippet
&lt;/h3&gt;

&lt;p&gt;Now that we have the snippet configured, we can insert it into the web application code. It needs to be inserted within the &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt; element, above any other &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; tags.&lt;/p&gt;

&lt;p&gt;The below example shows where to add it to an HTML page.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;!doctype html&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;html&lt;/span&gt; &lt;span class="na"&gt;lang=&lt;/span&gt;&lt;span class="s"&gt;"en"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;script&amp;gt;&lt;/span&gt;
    &lt;span class="c1"&gt;// snippet goes here&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
  ...
&lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
  ...
&lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the CloudWatch RUM web client documentation below for framework-specific setup instructions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/aws-observability/aws-rum-web/blob/main/docs/cdn_angular.md" rel="noopener noreferrer"&gt;Angular&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/aws-observability/aws-rum-web/blob/main/docs/cdn_react.md" rel="noopener noreferrer"&gt;React&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that the CloudWatch RUM web client is inserted, let's redeploy the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verifying it works
&lt;/h2&gt;

&lt;p&gt;Everything should now be configured and deployed. Let's verify it's all working as expected.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to your application&lt;/li&gt;
&lt;li&gt;Interact with your application to generate events e.g. change pages&lt;/li&gt;
&lt;li&gt;Open the AWS CloudWatch RUM console for your application monitor. You should see the last updated time change and data start to appear, as below&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fcloudwatch-rum-cognito-sam-cloudformation%2Faws_3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fcloudwatch-rum-cognito-sam-cloudformation%2Faws_3.png" alt="cloudwatch rum javascript snippet"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-up
&lt;/h2&gt;

&lt;p&gt;You should have CloudWatch RUM deployed, your web application configured and now be able to see the data on the AWS console.&lt;/p&gt;

&lt;p&gt;CloudWatch RUM looks like a good addition to the CloudWatch ecosystem and is easy to get up and running, requiring only minimal changes to your application.&lt;/p&gt;

&lt;p&gt;You can find the code from this blog at the following locations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/paulmowat/aws-cloudwatch-rum-cognito-sam-cloudformation" rel="noopener noreferrer"&gt;https://github.com/paulmowat/aws-cloudwatch-rum-cognito-sam-cloudformation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serverlessland.com/patterns/cognito-cloudwatch" rel="noopener noreferrer"&gt;https://serverlessland.com/patterns/cognito-cloudwatch&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-RUM.html" rel="noopener noreferrer"&gt;Using CloudWatch RUM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/aws/cloudwatch-rum/" rel="noopener noreferrer"&gt;Real-User Monitoring for Amazon CloudWatch&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>cloudformation</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>How to configure SonarLint to connect to SonarQube for VS Code</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Sun, 23 Jan 2022 15:55:12 +0000</pubDate>
      <link>https://dev.to/paulmowat/how-to-configure-sonarlint-to-connect-to-sonarqube-for-vs-code-2687</link>
      <guid>https://dev.to/paulmowat/how-to-configure-sonarlint-to-connect-to-sonarqube-for-vs-code-2687</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;At &lt;a href="https://www.oneadvanced.com" rel="noopener noreferrer"&gt;Advanced&lt;/a&gt; we use SonarQube as our static code analysis tool. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.sonarqube.org/" rel="noopener noreferrer"&gt;SonarQube&lt;/a&gt; is an open-source tool that can scan and identify problems with code quality across several different technology stacks.&lt;/p&gt;

&lt;p&gt;SonarQube analysis is typically built directly into our CI/CD processes and provides feedback to the teams as the build is happening.&lt;/p&gt;

&lt;p&gt;This provides good information; however, the feedback loop can be too slow for rapid development.&lt;/p&gt;

&lt;p&gt;One of our current focuses is to improve the feedback loop for developers.&lt;/p&gt;

&lt;p&gt;Fortunately, when it comes to &lt;a href="https://www.sonarqube.org/" rel="noopener noreferrer"&gt;SonarQube&lt;/a&gt;, they also provide &lt;a href="https://www.sonarlint.org/" rel="noopener noreferrer"&gt;SonarLint&lt;/a&gt;, which can be configured directly into your IDE to give that true shift-left mentality. &lt;/p&gt;

&lt;p&gt;We use VS Code as our editor of choice across several projects now, so we will cover how to get up and running below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;To get started, you will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;a href="https://www.sonarqube.org/" rel="noopener noreferrer"&gt;SonarQube&lt;/a&gt; server&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://code.visualstudio.com/" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt; installed&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation &amp;amp; Global Configuration
&lt;/h2&gt;

&lt;p&gt;Open VS Code and, under Extensions, install &lt;code&gt;SonarLint&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Finstall_sonarlint1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Finstall_sonarlint1.png" alt="install sonarlint"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once installed, restart or reload VS Code to ensure it's taken effect.&lt;/p&gt;

&lt;p&gt;If SonarLint can't detect a Java JRE on your system, it will prompt you to download one. Allow the download if required.&lt;/p&gt;

&lt;p&gt;Once VS Code is up and running again, hit &lt;code&gt;Ctrl + Shift + P&lt;/code&gt; to open the command palette. Then enter &lt;code&gt;Preferences: Open Settings (JSON)&lt;/code&gt; and select it to open your settings.&lt;/p&gt;

&lt;p&gt;To get SonarLint working, you need to specify the following settings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"sonarlint.connectedMode.connections.sonarqube"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; 
        &lt;/span&gt;&lt;span class="nl"&gt;"connectionId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sonar"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;your&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;sonarqube&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;server&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"serverUrl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://sonarqube-server.com/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;url&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;your&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;sonarqube&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;server&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"token"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"XXXX"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;token&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;authenticate&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;with&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;sonarqube&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"sonarlint.pathToNodeExecutable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"C:&lt;/span&gt;&lt;span class="se"&gt;\\\\&lt;/span&gt;&lt;span class="s2"&gt;Program Files&lt;/span&gt;&lt;span class="se"&gt;\\\\&lt;/span&gt;&lt;span class="s2"&gt;nodejs&lt;/span&gt;&lt;span class="se"&gt;\\\\&lt;/span&gt;&lt;span class="s2"&gt;node.exe"&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;your&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;node.js&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;installation&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;analyzing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Javascript/Typescript&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
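One gotcha with `sonarlint.pathToNodeExecutable` on Windows: in a JSON settings file, each backslash in the path must itself be escaped as `\\`. A quick sketch of what the escaped value parses to:

```javascript
// The settings file stores the path with escaped backslashes;
// JSON.parse shows the actual path VS Code will use.
// (The quadruple backslashes below are a JavaScript string-literal
// artifact; the settings file itself contains \\ per backslash.)
const raw = '{"path": "C:\\\\Program Files\\\\nodejs\\\\node.exe"}';
const setting = JSON.parse(raw);
console.log(setting.path); // C:\Program Files\nodejs\node.exe
```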



&lt;p&gt;To generate the token, you will need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Login to your SonarQube server&lt;/li&gt;
&lt;li&gt;Click on your profile picture on the top right-hand side of the page and select &lt;code&gt;My Account&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Fsonarqube_gettoken1-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Fsonarqube_gettoken1-1.png" alt="sonarqube my account"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next, select &lt;code&gt;Security&lt;/code&gt;, specify a token name and hit &lt;code&gt;Generate&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Fsonarqube_gettoken2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Fsonarqube_gettoken2.png" alt="sonarqube security"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your token will be displayed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Fsonarqube_gettoken3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Fsonarqube_gettoken3.png" alt="sonarqube token"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy your token and paste this into the above &lt;code&gt;token&lt;/code&gt; setting in VS Code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SonarLint is now configured globally within VS Code to access SonarQube via the specified &lt;code&gt;connectionId&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Workspace Configuration
&lt;/h2&gt;

&lt;p&gt;Next, we need to configure your project workspace to allow it to scan the appropriate SonarQube project.&lt;/p&gt;

&lt;p&gt;Go back onto your SonarQube server and grab the project key.&lt;/p&gt;

&lt;p&gt;This can be found on the project page on the bottom right of the page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Fsonarqube_projectkey.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Fsonarqube_projectkey.png" alt="sonarqube token"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In VS Code, hit &lt;code&gt;Ctrl + Shift + P&lt;/code&gt; to open the command palette. Then enter &lt;code&gt;Preferences: Open Workspace Settings (JSON)&lt;/code&gt; and select it to open your workspace settings.&lt;/p&gt;

&lt;p&gt;To get SonarLint working, you need to specify the following settings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"sonarlint.connectedMode.project"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"connectionId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sonar"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;should&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;be&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;same&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;connectionId&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;defined&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;above&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"projectKey"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"XXXX"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Replace&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;with&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;project&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;key&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;grabbed&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;SonarQube&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;server&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We now need to update the SonarLint bindings for the workspace to ensure the rules are in sync locally and on the server.&lt;/p&gt;

&lt;p&gt;Again, hit &lt;code&gt;Ctrl + Shift + P&lt;/code&gt; to open the command palette. Then enter &lt;code&gt;SonarLint: Update all bindings to SonarQube/SonarCloud&lt;/code&gt; and select it. You should see the following message on the bottom right of VS Code once complete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Finstall_sonarlint2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Finstall_sonarlint2.png" alt="sonarlint update bindings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The project should now be connected and configured.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verifying
&lt;/h2&gt;

&lt;p&gt;You can verify this by opening up a file that has some problems.&lt;/p&gt;

&lt;p&gt;These will now be highlighted within your code: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With an underline that shows a popup of the issue when hovered over&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Finstall_sonarlint3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Finstall_sonarlint3.png" alt="sonarlint problems"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Within the VS Code problems panel&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Finstall_sonarlint4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.paulmowat.co.uk%2Fstatic%2Fimages%2Fsonarlint-connected-sonarqube-vscode%2Finstall_sonarlint4.png" alt="sonarlint problems"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have achieved true shift-left and now have a fast feedback loop, allowing you to resolve any SonarQube issues as you code.&lt;/p&gt;

</description>
      <category>sonarlint</category>
      <category>sonarqube</category>
      <category>vscode</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How to resolve GH006 Protected Branch Update Failed</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Wed, 03 Nov 2021 19:03:46 +0000</pubDate>
      <link>https://dev.to/paulmowat/how-to-resolve-gh006-protected-branch-update-failed-3mgm</link>
      <guid>https://dev.to/paulmowat/how-to-resolve-gh006-protected-branch-update-failed-3mgm</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;At work, we've used Github protected branches for quite a while now. They allow us to enforce rules that developers must follow before their changes can be merged.&lt;/p&gt;

&lt;p&gt;We typically set up the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Require a pull request before merging.&lt;/li&gt;
&lt;li&gt;Require status checks to pass before merging.&lt;/li&gt;
&lt;li&gt;Require the branch to be up to date before merging.&lt;/li&gt;
&lt;li&gt;Require linear history.&lt;/li&gt;
&lt;li&gt;Restrict who can push to matching branches.&lt;/li&gt;
&lt;/ol&gt;
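For reference, those five rules map onto the request body of GitHub's update-branch-protection REST endpoint (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`). This is only a sketch: the `Build` check name and `build-svc-user` are placeholders, not values from our actual setup.

```javascript
// Sketch: the five rules above expressed as the JSON body for GitHub's
// "update branch protection" REST endpoint. Names are placeholders.
const branchProtection = {
  required_pull_request_reviews: { required_approving_review_count: 1 }, // rule 1
  required_status_checks: {
    strict: true,        // rule 3: require the branch to be up to date
    contexts: ['Build'], // rule 2: status checks that must pass
  },
  required_linear_history: true,                          // rule 4
  restrictions: { users: ['build-svc-user'], teams: [] }, // rule 5
};

console.log(JSON.stringify(branchProtection, null, 2));
```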

&lt;p&gt;These rules help us maintain quality by ensuring all branches meet the required criteria.&lt;/p&gt;

&lt;p&gt;We started to use &lt;a href="https://rushjs.io/"&gt;Rush&lt;/a&gt; as our monorepo management tool a while back. It's a fantastic tool and helps us in many ways to make our development process simpler. &lt;/p&gt;

&lt;p&gt;As part of its usage, we use the &lt;a href="https://rushjs.io/pages/commands/rush_change/"&gt;rush change&lt;/a&gt; functionality, which allows our developers to create &lt;a href="https://rushjs.io/pages/best_practices/change_logs/"&gt;change logs&lt;/a&gt; for their PRs. These are then picked up by the build pipeline, which automatically increments the package versions.&lt;/p&gt;

&lt;p&gt;To do this, &lt;a href="https://rushjs.io/"&gt;Rush&lt;/a&gt; creates a version bump branch and merges it back in as part of the build pipeline. This process works fine when branch protection is disabled.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;However, when branch protection is enabled, you get a lovely error like the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;error: GH006: Protected branch update failed &lt;span class="k"&gt;for &lt;/span&gt;refs/heads/main
error: Required status check &lt;span class="s2"&gt;"Build"&lt;/span&gt; is expected. At least 1 approving review is required by reviewers with write access
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a problem! We want to use protected branches and want to be able to use &lt;a href="https://rushjs.io/pages/commands/rush_change/"&gt;rush change&lt;/a&gt; to control versions. So how can we fix it?&lt;/p&gt;

&lt;p&gt;I assumed this would be an easy fix and just a case of giving the Github action extra permissions to push to the protected branch. Wrong!&lt;/p&gt;

&lt;p&gt;After spending a bit of time Googling, I came across a Github community post @ &lt;a href="https://github.community/t/allowing-github-actions-bot-to-push-to-protected-branch/16536"&gt;https://github.community/t/allowing-github-actions-bot-to-push-to-protected-branch/16536&lt;/a&gt;. The TLDR is that Github can't make the change to fix it for security reasons.&lt;/p&gt;

&lt;p&gt;At that point in time, we were busy with deadlines and had to decide quickly how to progress. That decision was to temporarily disable branch protection, even though it felt wrong, and the issue was put in the backlog to revisit later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;Fast forward to last week, and I finally got some time to investigate further. There is, unfortunately, no movement from GitHub's side, but I managed to track down a workaround.&lt;/p&gt;

&lt;p&gt;The workaround is as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a new Github user specifically for building.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a new personal access token for that user with access to &lt;code&gt;repo&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KjU3nOxp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.paulmowat.co.uk/_next/image%3Furl%3D%252Fstatic%252Fimages%252Fresolve-github-action-gh006-protected-branch-update-failed%252Fgithub_pat.png%26w%3D828%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KjU3nOxp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.paulmowat.co.uk/_next/image%3Furl%3D%252Fstatic%252Fimages%252Fresolve-github-action-gh006-protected-branch-update-failed%252Fgithub_pat.png%26w%3D828%26q%3D75" alt="github personal access token" width="762" height="647"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the personal access token as a GitHub secret, e.g. &lt;code&gt;BUILD_SVC_PAT&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aNntn8W8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.paulmowat.co.uk/_next/image%3Furl%3D%252Fstatic%252Fimages%252Fresolve-github-action-gh006-protected-branch-update-failed%252Fgithub_secret.png%26w%3D1920%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aNntn8W8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.paulmowat.co.uk/_next/image%3Furl%3D%252Fstatic%252Fimages%252Fresolve-github-action-gh006-protected-branch-update-failed%252Fgithub_secret.png%26w%3D1920%26q%3D75" alt="github secret" width="880" height="404"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update your branch protection and add your new build user to 'Restrict who can push to matching branches'.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update your GitHub Action to check out the code using the GitHub secret.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checking out...&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.BUILD_SVC_PAT }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This lets you keep branch protection configured while still allowing the &lt;a href="https://rushjs.io/pages/commands/rush_change/"&gt;rush change&lt;/a&gt; version-bump commits to be pushed.&lt;/p&gt;

&lt;p&gt;Now, this workaround isn't perfect! You are creating an admin user who can do the build and allowing that user to push to the protected branch.&lt;/p&gt;

&lt;p&gt;This means you can't enable the &lt;code&gt;Include Administrators&lt;/code&gt; branch protection rule, so your other admins can still push directly and bypass branch protection. It does, however, stop non-admin developers from pushing directly to the branch. For our needs, it was a suitable solution.&lt;/p&gt;
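&lt;p&gt;For completeness, once the checkout step uses the PAT, later steps in the same job can push back to the protected branch, because &lt;code&gt;actions/checkout&lt;/code&gt; persists those credentials by default. A minimal sketch of such a step (the user details and commit message here are illustrative, not from our actual pipeline):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      - name: Commit version bump
        run: |
          # Identify the commits as coming from the build user (hypothetical details)
          git config user.name "build-svc"
          git config user.email "build-svc@users.noreply.github.com"
          git commit -am "chore: version bump"
          # Succeeds because the build user is in 'Restrict who can push to matching branches'
          git push
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;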

</description>
      <category>github</category>
      <category>rushjs</category>
    </item>
    <item>
      <title>Move your AWS Lambdas to Graviton2 easily with Cloudformation/SAM</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Mon, 04 Oct 2021 18:53:03 +0000</pubDate>
      <link>https://dev.to/paulmowat/move-your-aws-lambdas-to-graviton2-easily-with-cloudformation-sam-2o3h</link>
      <guid>https://dev.to/paulmowat/move-your-aws-lambdas-to-graviton2-easily-with-cloudformation-sam-2o3h</guid>
      <description>&lt;p&gt;Yesterday, Amazon Web Services (AWS) shared &lt;a href="https://aws.amazon.com/blogs/aws/aws-lambda-functions-powered-by-aws-graviton2-processor-run-your-functions-on-arm-and-get-up-to-34-better-price-performance/"&gt;AWS Lambda Functions Powered By Graviton2 Processor&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;At a high level, AWS claims:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Up to 19% better performance&lt;/li&gt;
&lt;li&gt;At 20% lower cost&lt;/li&gt;
&lt;li&gt;Also applies to functions using provisioned concurrency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All Lambda runtimes, including custom runtimes, are supported. You just need to be careful with any binaries or functions packaged as containers, to ensure they are built for the arm64 architecture.&lt;/p&gt;

&lt;p&gt;To help with performance testing, you can create two versions of a function: one for x86 and one for Arm. You can then use an alias with appropriate weights to distribute traffic between them. Once your test is complete, you can compare the performance difference in CloudWatch.&lt;/p&gt;

&lt;p&gt;When migrating from x86 to Arm in production, you can use the same function version and weighted alias approach. This allows you to ramp up slowly, e.g. from 1% gradually up to 100%. If something looks wrong, or you are experiencing errors, adjust the weights back down to zero to force traffic back to the x86 function.&lt;/p&gt;
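&lt;p&gt;As a sketch, a weighted alias for this might look like the following in CloudFormation (the resource names, version numbers and weights below are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;exampleAlias:
  Type: AWS::Lambda::Alias
  Properties:
    FunctionName: !Ref exampleLambdaFunction
    FunctionVersion: '1'        # existing x86_64 version, receives the remaining 99%
    Name: live
    RoutingConfig:
      AdditionalVersionWeights:
        - FunctionVersion: '2'  # new arm64 version
          FunctionWeight: 0.01  # start at 1% and ramp up
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;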

&lt;p&gt;Everything I deploy to AWS is via CloudFormation or AWS SAM. Below we will look at how to configure those to use the new architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Update CLIs
&lt;/h2&gt;

&lt;p&gt;If you use a CLI tool to deploy, it is a good idea to update it to the latest version for your setup, to ensure it's compatible with the new changes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html"&gt;AWS CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/aws/aws-sam-cli"&gt;AWS SAM CLI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Template Changes
&lt;/h2&gt;

&lt;p&gt;The example below shows the changes required for an &lt;code&gt;AWS::Lambda::Function&lt;/code&gt; and an &lt;code&gt;AWS::Lambda::LayerVersion&lt;/code&gt;. The changes are the same if you are using an &lt;code&gt;AWS::Serverless::Function&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;exampleLambdaFunction&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Lambda::Function&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# OTHER ITEMS REMOVED FOR EXAMPLE&lt;/span&gt;
    &lt;span class="na"&gt;Layers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;exampleLibrary&lt;/span&gt;
    &lt;span class="na"&gt;Architectures&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;arm64&lt;/span&gt;

&lt;span class="na"&gt;exampleLibrary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Lambda::LayerVersion&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# OTHER ITEMS REMOVED FOR EXAMPLE&lt;/span&gt;
    &lt;span class="na"&gt;CompatibleArchitectures&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;arm64&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the change is made, you are ready to deploy using your preferred mechanism, whether that's the CLI or the console.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;There is really no reason not to try out the changes. They perform better, cost less, and the changes are minimal. Using function versions and a weighted alias also gives you a mechanism for a gradual rollout until you have the confidence to switch over fully.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html"&gt;https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-versions.html"&gt;https://docs.aws.amazon.com/lambda/latest/dg/configuration-versions.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html"&gt;https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>awssam</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>My thoughts on working from home</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Mon, 04 Oct 2021 17:59:18 +0000</pubDate>
      <link>https://dev.to/paulmowat/my-thoughts-on-working-from-home-4c79</link>
      <guid>https://dev.to/paulmowat/my-thoughts-on-working-from-home-4c79</guid>
<description>&lt;p&gt;Four years ago, I got the opportunity to work from home full time. I had never worked from home for more than the odd day here or there, and wasn't sure what to expect. This was before the world had to do it during the Coronavirus pandemic, when information on how to be effective was scarcer. I decided to go for it and jump in. Below are my key takeaways.&lt;/p&gt;

&lt;h2&gt;
  
  
  Work environment
&lt;/h2&gt;

&lt;p&gt;I had just recently moved house and was lucky to have a spare room that could be turned into an office.&lt;/p&gt;

&lt;p&gt;I created a comfortable setup with the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Large Desk&lt;/li&gt;
&lt;li&gt;Comfortable chair&lt;/li&gt;
&lt;li&gt;Whiteboard&lt;/li&gt;
&lt;li&gt;Three monitor setup&lt;/li&gt;
&lt;li&gt;Webcam&lt;/li&gt;
&lt;li&gt;Headset&lt;/li&gt;
&lt;li&gt;All in one Printer/Scanner&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This was everything I needed to get going. I know that not everyone can set up a dedicated space. However, I felt it was essential for me if this was going to work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway&lt;/strong&gt; - I'm glad I spent some time and money setting myself up with a comfortable and effective work environment. My office lets me focus fully on my work. I've recently upgraded a few items to make it even better: more storage, a new webcam, a microphone and better speakers. I'm now able to have video calls without the need of a headset, which is great.&lt;/p&gt;

&lt;h2&gt;
  
  
  Effective Communication
&lt;/h2&gt;

&lt;p&gt;At work, we used several different communication channels. Email was always the primary one and was used heavily. We also had Skype for Business for speaking to each other and screen sharing when required.&lt;/p&gt;

&lt;p&gt;To begin with, there were challenges in being the remote member of a team. During calls where everyone else was in an office, it was hard to hear, follow along and interact as well as I wanted to.&lt;/p&gt;

&lt;p&gt;At the beginning of the pandemic, the entire company moved to Microsoft Teams. I noticed a significant improvement instantly. The sound and video quality were better, and since everyone was working from home, there was more awareness of the need for good communication.&lt;/p&gt;

&lt;p&gt;We've evolved along the way and now have an accepted meeting etiquette. Video calls are also used much more than before to try and add that personal element. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway&lt;/strong&gt; - This has been a difficult one. We all need to stay in touch, and the pandemic took us from a few meetings to lots of meetings. It's now starting to calm down, which is great. A good suggestion is to look at communicating asynchronously more. Always ask: do you need that extra meeting? Could it just be a group chat instead?&lt;/p&gt;

&lt;h2&gt;
  
  
  Focus
&lt;/h2&gt;

&lt;p&gt;Being able to focus on work while at home is something that concerns a lot of people. For me, though, this was never a concern. I love what I do and enjoy coding and building new applications.&lt;/p&gt;

&lt;p&gt;As someone who happily spends hours coding, I found my productivity levels increased drastically due to fewer interruptions and background office noise. I could turn on Spotify and get on with it.&lt;/p&gt;

&lt;p&gt;During my first two years working from home, I was the lead developer for two brand new applications, built using technology stacks that were entirely new to me. I loved the challenge!&lt;/p&gt;

&lt;p&gt;Once the pandemic started, things began to change as everyone was trying to understand how to work effectively. There was a large increase in scheduled meetings, which started to impact our developers' ability to get into the &lt;a href="https://lifehacker.com/what-is-the-zone-anyway-5920484"&gt;Zone&lt;/a&gt; and be as productive as possible. We've since introduced blocked focus time, which has made a significant difference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway&lt;/strong&gt; - It is ok to indicate to others that you need time to do your work. Block that time in your calendar as Focus Time. Outlook has a great feature to help automate this now. Constant distractions have a large impact on productivity but also on morale/stress levels.&lt;/p&gt;

&lt;h2&gt;
  
  
  Work/Life Balance
&lt;/h2&gt;

&lt;p&gt;If you work where you live, how do you separate them? It's a difficult question. &lt;/p&gt;

&lt;p&gt;During the first few years of working at home and learning new technologies, I struggled with the balance. There were deadlines to meet and a lot of learning to do, which meant quite a few midnight sessions learning how to unblock myself when I got stuck. Not ideal, but needs must.&lt;/p&gt;

&lt;p&gt;Over the last two years, I've been much better. I tend to work from 8 am to 5 pm and stop. My son, coming in every day at 5 pm and shouting "It's dinner time, Daddy", signals the end of the day for me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway&lt;/strong&gt; - If you focus on one more than the other, you are going to have a problem in the long run. Stay in control of your work by planning your day. Create a to-do list, and book out your lunchtime in your calendar. Don't answer calls or emails after work hours. You need that separation to maintain your health.&lt;/p&gt;

&lt;h2&gt;
  
  
  Family
&lt;/h2&gt;

&lt;p&gt;The single best benefit of working at home is getting to spend extra time with your family. I'm married and have two sons, one five years old and the other eight months old.&lt;/p&gt;

&lt;p&gt;I started to work from home when my firstborn was six months old. I'm lucky in that I've got to see all the key milestones. Today, for example, I got to see my eight-month-old clap his hands for the first time.&lt;/p&gt;

&lt;p&gt;I also get to have breakfast, lunch and dinner with them every day. I don't have to waste hours in a commute back and forth to the office.&lt;/p&gt;

&lt;p&gt;Working at home also gives you that bit of flexibility that's handy when you have kids. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway&lt;/strong&gt; - Spend as much time with your family as possible. They appreciate it. It's the little things like being able to go for lunch together at the local cafe or go for a walk together at lunchtime. Make the best of it!&lt;/p&gt;

</description>
      <category>workfromhome</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
