<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: OneAdvanced</title>
    <description>The latest articles on DEV Community by OneAdvanced (@oneadvanced).</description>
    <link>https://dev.to/oneadvanced</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F4831%2Fb9192272-82a0-4185-9cbd-f568496eaa84.png</url>
      <title>DEV Community: OneAdvanced</title>
      <link>https://dev.to/oneadvanced</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oneadvanced"/>
    <language>en</language>
    <item>
      <title>API provider contract testing for all with Portman, OpenAPI and Postman</title>
      <dc:creator>Alex Savage</dc:creator>
      <pubDate>Wed, 10 May 2023 15:08:00 +0000</pubDate>
      <link>https://dev.to/oneadvanced/api-provider-contract-testing-for-all-with-portman-openapi-and-postman-4ll1</link>
      <guid>https://dev.to/oneadvanced/api-provider-contract-testing-for-all-with-portman-openapi-and-postman-4ll1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;APIs are everywhere and power almost everything we use today.&lt;br&gt;
OpenAPI has revolutionized the way APIs are designed, documented and deployed... It's kind of a big thing.&lt;/p&gt;

&lt;p&gt;Creating a great OpenAPI definition is made easier by amazing tools such as &lt;a href="https://stoplight.io/open-source/spectral"&gt;Stoplight Spectral&lt;/a&gt;, which can ensure that your definition is not only valid against the OpenAPI specification but also conforms to your own API standards through custom linting rules.&lt;/p&gt;

&lt;p&gt;How do you ensure that what you create and deploy matches your definition? Do you write all your tests by hand? Do you probe with Postman and hope for the best?&lt;/p&gt;

&lt;p&gt;Maybe you do today but this blog will help you try something new... Welcome to the world of &lt;em&gt;contract testing&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;By the end of this blog you will have completed your first contract test (against my mocked API), and you will be free to use this knowledge to find gaps in your own API, or any API out there (subject to permission)!&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;Rather than spending too long explaining what contract testers are, where they came from, etc... let's use one! (It's free, so what is stopping you?)&lt;/p&gt;

&lt;p&gt;We will be using &lt;a href="https://github.com/apideck-libraries/portman"&gt;apideck Portman&lt;/a&gt;. You are welcome to follow their README.md and come back later, or follow the instructions here.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Portman
&lt;/h3&gt;

&lt;p&gt;Install Portman&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm install -g @apideck/portman&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Initialize your CLI&lt;/p&gt;

&lt;p&gt;&lt;code&gt;portman --init&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a contract test to run in Postman
&lt;/h3&gt;

&lt;p&gt;Make a new directory, then download this example Portman configuration file and store it there:&lt;br&gt;
&lt;a href="https://api.oneadvanced.com/portman/configurations/portman-config.adv.json"&gt;https://api.oneadvanced.com/portman/configurations/portman-config.adv.json&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Download this OpenAPI definition from SwaggerHub (the unresolved YAML works fine):&lt;br&gt;
Link to SwaggerHub: &lt;a href="https://app.swaggerhub.com/apis/AdvancedComputerSoft/Contract-Breaker/1.0.0"&gt;https://app.swaggerhub.com/apis/AdvancedComputerSoft/Contract-Breaker/1.0.0&lt;/a&gt;&lt;br&gt;
Direct link to YAML: &lt;a href="https://api.swaggerhub.com/apis/AdvancedComputerSoft/Contract-Breaker/1.0.0"&gt;https://api.swaggerhub.com/apis/AdvancedComputerSoft/Contract-Breaker/1.0.0&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open the directory that has both files and run the following:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;portman -c portman-config.adv.json -l AdvancedComputerSoft-Contract-Breaker-1.0.0-swagger.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;p&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxvmbwzlpm0a8s1dhnof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxvmbwzlpm0a8s1dhnof.png" alt="Terminal output from a successful creation of tests using Portman ready to import manually into Postman" width="800" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Portman uses the configuration file to convert the OpenAPI definition into a Postman collection, which you can now import into Postman to start running tests!&lt;/p&gt;
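
&lt;p&gt;To give a feel for what the configuration file drives, the fragment below is an illustrative sketch of a Portman contract-test configuration. The key names follow the Portman documentation, but the file you downloaded may differ, so treat this as a sketch rather than the exact contents:&lt;/p&gt;

```json
{
  "version": 1.0,
  "tests": {
    "contractTests": [
      {
        "openApiOperation": "*::/*",
        "statusSuccess": { "enabled": true },
        "contentType": { "enabled": true },
        "jsonBody": { "enabled": true },
        "schemaValidation": { "enabled": true },
        "responseTime": { "enabled": true, "maxMs": 2000 }
      }
    ]
  }
}
```

&lt;p&gt;Each entry targets a set of operations (here &lt;code&gt;*::/*&lt;/code&gt;, meaning every method on every path) and switches on one family of generated checks.&lt;/p&gt;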

&lt;h3&gt;
  
  
  Run the test
&lt;/h3&gt;

&lt;p&gt;Import the Postman collection file from the tmp/converted folder that Portman created as part of the process.&lt;br&gt;
&lt;/p&gt;
&lt;p&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevq4lni1vzgj64otiqwn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevq4lni1vzgj64otiqwn.png" alt="Postman application with the manually imported collection that was created by Portman" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open the List Countries request and press Send!&lt;br&gt;
&lt;/p&gt;
&lt;p&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zqrjssrcskdl69pergb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zqrjssrcskdl69pergb.png" alt="Postman test result showing failed tests" width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations! You ran a test and (hopefully) got some failures.&lt;br&gt;
Don't worry, that was supposed to happen.&lt;/p&gt;

&lt;h3&gt;
  
  
  What happened?
&lt;/h3&gt;

&lt;p&gt;Let's go back and look at the collection.&lt;br&gt;
At first glance, it looks like any other collection. Check the Tests tab and you should see something new.&lt;/p&gt;

&lt;p&gt;This is where Portman has done its magic, generating contract tests from the OpenAPI definition for you to run against the API.&lt;/p&gt;


&lt;p&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lrwnllwrd5mfbu916k2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lrwnllwrd5mfbu916k2.png" alt="Postman tests that were generated by Portman and included in the collection. Details below" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see Postman will now test to ensure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The status code matches 200&lt;/li&gt;
&lt;li&gt;The response is application/json&lt;/li&gt;
&lt;li&gt;The response has a body which is an object&lt;/li&gt;
&lt;li&gt;The response body validates against the schema from the OpenAPI&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Bonus - The response is received within 2000ms (this is not taken from the OpenAPI definition but from the Portman configuration)&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of this and you did not need to write anything. No human error, no missed tests. All of it was auto-generated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Interpreting the responses
&lt;/h3&gt;


&lt;p&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2j9z68ecb3ptikr1he0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2j9z68ecb3ptikr1he0n.png" alt="Postman tests results showing errors" width="800" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Postman has reported that 3 out of 5 tests have failed:&lt;/p&gt;

&lt;p&gt;1) &lt;code&gt;[GET]::/countries - Response status code is 200 | AssertionError: expected 201 to equal 200&lt;/code&gt; &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;This error was raised by the status code check: the API incorrectly returned a 201 response code instead of the 200 that was expected
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;2) &lt;code&gt;[GET]::/countries - Schema is valid | AssertionError: expected data to satisfy schema but found following errors: data.data[0] should have required property 'countryId', data.data[0].population should be integer, data.data[1] should NOT have additional properties, data.data[1].countryId should be string, data.data[1].population should be integer, data.pagination should NOT have additional properties, data.pagination.offset should be integer&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;This error contains all the JSON schema validation issues: missing required properties, wrong types, and additionalProperties (caused by misspelled properties or ones that were not documented)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
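
&lt;p&gt;To see how a schema check produces messages like these, here is a minimal, self-contained sketch in Python. It is illustrative only: Portman actually generates JavaScript tests that run inside Postman's sandbox, and the schema here is a hand-rolled simplification of the /countries response.&lt;/p&gt;

```python
# Minimal sketch of the kind of schema checks a contract test performs
# (illustrative, not Portman's actual implementation). Each country needs
# a string countryId and an integer population, with no extra properties.

def check_country(obj, schema):
    """Collect schema violations for one country object."""
    errors = []
    # Required properties must be present.
    for prop in schema["required"]:
        if prop not in obj:
            errors.append(f"should have required property '{prop}'")
    # Every property present must be documented and of the right type.
    for prop, value in obj.items():
        if prop not in schema["properties"]:
            errors.append(f"should NOT have additional property '{prop}'")
        elif not isinstance(value, schema["properties"][prop]):
            errors.append(f"{prop} has the wrong type")
    return errors

schema = {
    "required": ["countryId", "population"],
    "properties": {"countryId": str, "population": int},
}

# A response body that breaks the contract in the same ways as the errors above:
# a missing required property, a wrong type, and a misspelled (extra) property.
bad_country = {"population": "67m", "Countryid": "GB"}
print(check_country(bad_country, schema))
```

&lt;p&gt;Running this reports all three kinds of violation at once, just as the Postman assertion does, which is what makes the single schema error message so dense.&lt;/p&gt;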

&lt;p&gt;What you should find is that this isn't an API you would want to release! Contract testing lets you find where your deployed API (or the one you are developing) doesn't match the OpenAPI definition. Catch issues now, before you release!&lt;/p&gt;

&lt;h2&gt;
  
  
  More tests
&lt;/h2&gt;

&lt;p&gt;Portman has lots of other functionality to explore.&lt;/p&gt;

&lt;p&gt;I would recommend first looking at overrides to try manipulating the collection. I started by making tests that removed the security and called the API, expecting a 401 to be returned. You can also stipulate which response from the OpenAPI definition you wish to receive. The Portman documentation has details of all the functionality on offer.&lt;/p&gt;
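
&lt;p&gt;As a sketch of what that experiment might look like in configuration, the fragment below is based on my reading of the Portman docs: an &lt;code&gt;overwrites&lt;/code&gt; entry strips the security from one operation, and a contract test asserts the status code. The exact key names may differ between Portman versions, so check the documentation before relying on them:&lt;/p&gt;

```json
{
  "tests": {
    "contractTests": [
      {
        "openApiOperation": "GET::/countries",
        "statusCode": { "enabled": true, "code": 401 }
      }
    ]
  },
  "overwrites": [
    {
      "openApiOperation": "GET::/countries",
      "overwriteRequestSecurity": { "remove": true }
    }
  ]
}
```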

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;APIs need to match their documentation. Whether you start with a design in OpenAPI or start with code, contract testing can ensure the two are in sync, so your consumers don't get surprises and raise issues.&lt;/p&gt;

</description>
      <category>api</category>
      <category>openapi</category>
      <category>testing</category>
      <category>postman</category>
    </item>
    <item>
      <title>Standard visual modelling languages</title>
      <dc:creator>Joe Wallace</dc:creator>
      <pubDate>Fri, 18 Nov 2022 17:35:08 +0000</pubDate>
      <link>https://dev.to/oneadvanced/standard-visual-modelling-languages-4el0</link>
      <guid>https://dev.to/oneadvanced/standard-visual-modelling-languages-4el0</guid>
      <description>&lt;p&gt;Within the profession of Business Analysis, a key function is for the Analyst to be able to communicate ideas effectively, in a way that an intended audience is easily able to understand. &lt;/p&gt;

&lt;p&gt;One way of doing this is visually modelling a process or system; a large benefit of this is to be able to distil complex concepts down into one easy-to-digest visual. Additionally, it’s easy to iterate on a model based on feedback, and new requirements coming to light.&lt;/p&gt;

&lt;p&gt;In order to help Business Analysts, various standardised modelling languages have been created. In this article, we’ll focus on two of the most common ones: Unified Modelling Language (UML) and Business Process Modelling Notation (BPMN). When should you use them, and what do they consist of?&lt;/p&gt;

&lt;h2&gt;
  
  
  Unified Modelling Language 🌐
&lt;/h2&gt;

&lt;p&gt;UML is an object-oriented modelling language, intended for use in a broad range of different domains, architectures and coding languages. It is focussed on the design of IT systems specifically, and as such can be used for software or architectural designs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;What is UML good for?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Interactions within an IT system or between IT systems.&lt;/li&gt;
&lt;li&gt;Determining objects used by a system, and the data items within them.&lt;/li&gt;
&lt;li&gt;Communicating architectural designs to stakeholders with technical expertise.&lt;/li&gt;
&lt;li&gt;Automated generation of code and/or test cases.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are two different types of UML diagrams:&lt;/p&gt;

&lt;h2&gt;
  
  
  Structure diagrams
&lt;/h2&gt;

&lt;p&gt;These demonstrate the objects and components within an IT system and how they relate to each other. Various types of structure diagrams are available, including:&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Class diagrams&lt;/u&gt; &lt;br&gt;
Denote object classes within the system, and the relationships between them.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Object diagrams&lt;/u&gt; &lt;br&gt;
Similar to class diagrams, but show how the user of a system might see the objects within it.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Composite structure diagrams&lt;/u&gt; &lt;br&gt;
Also similar to class diagrams, but also show the composite parts within classes.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Component diagrams&lt;/u&gt; &lt;br&gt;
Depict how system components hang together and depend on each other.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Deployment diagrams&lt;/u&gt; &lt;br&gt;
Denote the distribution order and configuration required when deploying an IT system.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Package diagrams&lt;/u&gt; &lt;br&gt;
Model how packages in the system fit into the overall application model and their dependencies.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Profile diagrams&lt;/u&gt; &lt;br&gt;
Allow you to document prototype data structures using “stereotypes” - domain-specific data models which can be used for more formal class or object diagrams later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://viewer.diagrams.net/?tags=%7B%7D&amp;amp;highlight=0000ff&amp;amp;edit=_blank&amp;amp;layers=1&amp;amp;nav=1&amp;amp;title=Example%20Class%20Diagram.drawio#R7V1tc%2BK2Fv41zLSdgbEsIORjINvtnaY7mbCd2%2B2XO8IWRre2RW2xCf31PZLfLRsMwWtKndnJWsfym55HRzqPjp0BXnhvHwOy3fzCbeoOTMN%2BG%2BDHgWmiqXEP%2F0nLPrVMIosTMDu2ZYYl%2B4vGRiO27phNw0JFwbkr2LZotLjvU0sUbCQI%2BGux2pq7xatuiUM1w9Iirm79L7PFJrLOzLvM%2FhNlzia5MprGT%2ByRpHL8JOGG2Pw1Z8IfBngRcC6iLe9tQV3Zekm7RMf9WLM3vbGA%2BqLJAfdovvkZo4%2F%2F%2B%2F2z8%2BQM37bs95dhfJavxN3FD%2FxC1zQIiBvftNgnLRG%2BMs8lPpTma%2B6LZbzHgDJxmePDtgW3QgMwfKWBYNCID%2FEOwbdgtTbMtZ%2FInu%2FkDYeCWH8kpfmGB%2BwvOC1cGD8iMMDuQMR8MKeFGkt5ZHzpgIZQ5zlpBZSankgo4joWd12yDdlK3bCs4pHAYf6cC8G95ER859vUjkspWKogAv5HCr88Pm41eEr6VgsHSkGG7kG5R0WwhyrxASkvkp6RlF8zmkFviWybHMXAGtM7praTnju93At0BeI78LzZ9XDpeuOm1ytdjriAsU8EncsWC%2FOcg43co2YmxcQTWGlWs5L6lnygTztvBSwr8xOgEDkuunQtapkYbonFfOdJ1XkcZ5aX%2BLmlicOxa1exYMNsm%2FqKJYIIskqZv%2BXMF6phJnP4B823MEaTwQRuaAFllJXhn6weiAX3gVCEKZpQYOkrlUxtxqn6TqwTbV%2FE71Sc87QqAHwqmviAjzFeoAW4f3NgHvAmG%2BG58WZbkE%2FMjiEfa5D%2FGjjQffc90BcF%2Bm7WMdCTQ337iQpxg466W8iRMe4Y86mGuQaxy9Q8MW4OVDmHOoK%2FB0jK0yWAf5Z8eBwijRRYJwWuIIBLVtR95iETjMvzB1HdEjG6GqQRbjgbm7UE6p0GKrEsuhXffX9z%2Fbc9DLv2xjMNxID%2BH0KBHsTmIE67dq%2F3Goi7rQ3hVjKw%2Fsdf88Aj0o31uDbH9f7McPdiuCaKVg7YB%2FsrUyHtC%2F1zJ5%2B%2F11xa1VxQSXPBs4akSLSS0yQXhM%2B8XAeSC9KVQI2N1LcfpKQKpZXLJR3mYIr5goyo%2BCNz3WaAFtGntkMTflN3xV8%2FZIa5MsCOhOMpHeRRh8kAT8B3gUUb9Exgv0MPuZj7anZVwRlQF1z01%2BLNHaDPs3ShObXOKFIHyFM8RfRQ8VF51bd0Ilw6EZ6UuBU9tHaii%2FFK1%2FIetmq88Ki6z971fRPXh8%2F1Ree5PjxucLm7K3F9ukD5DH2X%2Bj0pWyblxJgVWNL2Gshk8s9ZA0G6hvqJeFSj5L94wo%2BryXUt6x5IF0c%2F%2FbQc3OwCVuu66GG8O1%2F0QLow%2BgguAix8Db%2FmLID76kG%2FKOidL4AgXTh9sG0YfsMe6stC3f3CB9L11Y8Q8fau%2FOJQN10NaQ9qXYV95qGwuH17M7CuwZ50PVFLoo5%2BRfOiuM4aTsjaWtI0dUkTHpOt94%2FU447MV2VW2C%2BhnIBo52soScfMQSpbYedCpN%2BrNS2rNUNsmKOyYoMnzSiBxpN6TtyGjmjqOmKv2FR03%2BtVbExdc3villpq71FsimJjHaa1cV%2FX3Z4DvoaQHHCMko6523fLxoCeu2p9OUB1Ya0E6OIW47LWAEVG545WV83QaPSDBmEu44BvVXvnAr
Bc7oGZTL%2BS2jYjHvftzxvml2ZmaJwYcpkK%2BeQDQGTDHUmsfAbCERat4olZOanhjYnfcttf5DbAH5Ue33K7HveDfKB9kcQGXJOwkEzojFE84L0zW2E4xsbIyP2gwqQNZmiwd5ztNosXaJrLMESl2aBG0JpkhuxMSUW%2BXoe0lYQHUxcJUSe8Rm3y2odG%2Bi1fyDFbFjNqq9J%2BUOwPqJv%2BELP9aJ5PtJJT32%2BK9MYX6UNpZJN2mvGoFDI17ScTVD5V6UQt5%2FyYungaJa9WKOV9jN7qW6XDxikOaHZWSsU%2FKccR6zpvH6BX9NzrDdCxLun2od07AO08ZwLrgu6%2FIbRre8ntCOydZ01gXSq91ayJ1kDsPh8C63Lp84b7%2FTcazgCz84wHrKumHzzC9O%2FA9BhebSID1oXSOPCyZdjL4deSBtE7Z30cNmgxDhuWlKrm66SJ8XbXSbEu%2Fi631GLAM3F7Hwg5393gGhXqaoIxXelc7lbDsMfyZCy7j8N0ya6fkJ8IYudR1VjXt%2Fr5%2BHlYdh9cjXWpq5%2BPn4Zh5zHVWFe3bnktMFv%2B%2B5Le%2FvG1wGz970tuT%2BtrgcnHZI%2BtBdYFfdka%2BrQ407%2FMBwDK7%2B0jVIoLmi4GjlHNO7ZXtGY%2BrpADlfSbpfYaHpGtvqIu950kmPV4IAc3sSE%2B%2FBeNdGmAu5DVd0L1f3ULK7l7RUIVDBORHlGb4gdh4lZu%2BlzI7hHG4aok4uuGCboE3yktrwHZFrttI592JII077Rs22HV%2B9FGVRRpHIgi3%2BfSdM3vkEfzuZIUbBJulMNAxXaqyZY5wSOc7X1k6ZkGDJpFihqXdi9Jsxz1L%2BOanPscxJMKhCeXcTNDs%2FxK%2FLl%2BZlj%2BZElZyGg56WCs65cDc%2BrKgYzBhhNNNSLDKjF8gp4dJla47CqrmToc%2BQi2rZzGmgdRSb1yo5anLO6rCVh2jtzVqmdq2jtEC%2B7yIOssa3BXJdNRCS59t6jKMRV7mZT3ktMP5DgznS4Wl%2FJZpfSDyhdGTJ3NuDV3VfWxypQVOXCmf%2B54TJFsq5o0v%2Bzk5%2BddSRyjijvqbe9oaIF2UzqIYakRSnVWmoxeBMaVjQzKeurIRBU0g%2BGuPofQ1AXUqZEOkIX3yVojky5cvptMD44TUEfNP%2Br4RNQkJ6AWD6QgawBO1kY%2Bn7JDE6upzLpnVOljb%2BVkUVTFoG9Jn2vJDG0z41kDVP7k53aD5pmf6dwOjQwT52d3xug%2BM3Q0w2uaTBrJ5IcCSHx3hy4ym0OlKdjdmXO5clKfWZ4UXkPMqCvW73bGC%2B5t4%2FepT3XG3k5%2Bww6mg0OgsNpkvgpNZfVwHwrq9W5ZTfVKyclV%2Banmt3TLSQB0SSJ9pD4NAJAwptKUeLKR%2FVW4zVXP84r5NmAlYjkjEycsl8g1GIPJX0Tez241TIz9oK%2FlP08mBXbhigjkvkowKatil6PXaR9WjZu7XiOp0Tuy8a96NGw%2B8B1%2FO8KshuDbKBb6B01L75E2FkbLmesNBQtASikDSbV4KaT2hseo%2BoYzUkVnbDg6QjH7m2FR9exPr%2BEPfwM%3D"&gt;Example class diagram on Diagrams.net&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Behaviour diagrams
&lt;/h2&gt;

&lt;p&gt;These demonstrate dynamic changes which may occur within an IT system as a result of user or automated triggers. Many different behaviour diagrams exist, including:&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Use case diagrams&lt;/u&gt;&lt;br&gt;
Describe how users and systems functionally interact with an IT system, in a series of steps.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Activity diagrams&lt;/u&gt;&lt;br&gt;
Graphical representations of the workflow through a system, organised into swim lanes.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Interaction overview diagrams&lt;/u&gt; &lt;br&gt;
Similar to activity diagrams, but with a focus on interactions over workflow.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Sequence diagrams&lt;/u&gt;&lt;br&gt;
Models dependencies in the flow of an IT system in sequence.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Communication diagrams&lt;/u&gt; &lt;br&gt;
Similar to sequence diagrams, but more focussed on how processes interact than on the time sequence.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Timing diagrams&lt;/u&gt; &lt;br&gt;
Also similar to sequence diagrams, but with an emphasis on the amount of time each process takes.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;State diagrams&lt;/u&gt;&lt;br&gt;
Show which state transitions are permitted and how they are triggered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://viewer.diagrams.net/?tags=%7B%7D&amp;amp;highlight=0000ff&amp;amp;edit=_blank&amp;amp;layers=1&amp;amp;nav=1&amp;amp;title=Example%20Use%20Case%20Diagram.drawio#R7VrJcts4EP0aVc0c7OJO6eglW1VclYqScjI3iIQojEmCA0K2PF8%2FDRJcsCh2bMmykvHBAhoLgcfXrxuQJv5FsXnHULW6oinOJ56Tbib%2B5cTz4qkH%2F4XhvjW4kRO2loyRVNoGw5z8i6XRkdY1SXGtdOSU5pxUqjGhZYkTrtgQY%2FRO7bakufrUCmXYMMwTlJvWa5LyVWudevFgf49Jtuqe7EaztqVAXWe5k3qFUno3MvlvJv4Fo5S3pWJzgXMBXodLO%2B7tltZ%2BYQyX%2FDEDFum79ObjwrsmF8Vf7%2F%2BOv5LF9ETOcovytdzwZ7zEjGEmF83vOyRg%2FZUorov8LOGUTfzzW8w4Aaw%2BogXOP9GacEJL6LKgnNNi1OEsJ5lo4LQC64oXOVRcKNI1z0mJL%2Fq354BRrgnG4s3Wzbo9hMA9TAvM2T106QaEEvWOd91buBteoi9Nq9H7i6QNSdpk%2FcwDslCQ4P4E0J4BNOy5XuccwYaOGuo41qCODgy1b0BtAHy3IhzPK5SI%2Bh0olwrVDlDptyxROXFDA5UgtMASzMI94RIYuFyhG9BWz2GNz6PcwAlmBaGFyvn%2BEfN9kze9244Rmu6LOKEB0BeGSPYKIepczT00ZJEB2RxzgReoTQLPeFVweRZlelm4YgOuawYoqARzcsy5JQAfErqTPqAeDLupRdYjVIi957zZ9bhGyiRfp7g3ZkoXWdMAHkGoR9M%2BzOIyPRNpJdhohaHpPEX1CqdyHDTLHHYKNUbXZdq0iViLN4R%2FE%2BXTUNa%2Bd6OgfLkZdbu87yolQDcaJKrfx23DsKbWjWv3hlMjudUIAPuna5bgh1WRI5Zh%2FpAUbCWUc%2BoG3kzhVCCZwnCOOLlVV2qjj5z8EyUiZ%2BpmDlyVqr6eWLQ7lKPGCbI2kT%2FV5CLWJmoh%2BMFEXUe6XNZY6dMwvkfl6U4wsziBLhMDQ0taYlUaNEb27FK4NVDt2ex6kDWBnTUjlbElSeFuqOPOtHjqP5E6npYD6yK4hTm7YkXnAb8OLeJfgxZaFNWVad%2B0ME%2F3z6KFjF%2FuKHoNscwev3Ydhw5FiUCnxFODTBxpE81emBPmRcQRc2JLzvEynAj140VwrJwwb0y%2BVikyziWkXFJWoObm6TUdTvrE7WBnE9e8WznyANxy4vgjsOZa%2FiNz%2Bp25lnmn9P%2Bp9dn83aoMJ5Zj5smOzpn%2BdHYaRrPhL3gaJbfN0y33p0%2Bd%2FRWNOm5%2Fh1DXvPW7QjxZNfGiAoxxs64UFzQT3wqSpD5ovAgdVQSCg4cL8x7w8xBmr1CJMly0IM7va44LA76j%2FobIcpX4ol8QubarxKPNgd2DJsGRFql1n3l0Dhz%2BeJ59x2nbxVrUxOQlbdaZ0Fy4GbRE%2F6xpG4l9x4miJBmboi5kN2MXneFcMIaUGUy0oBshlJEM%2F28XwyDIsFPwUI7F6wPZXMHnuha1BNWNugI%2Fmm6CGAmE8on4SQMTCd1KlD98EQ1SMvontDuI7OkDuDVX2V1zRm9ABdr9SvYvSZ5rJiR1JAHKYmYRmIKkab5N4lUfEkucy0V1J9TnSE4Qx6ehGldnFtWPQtMx9FRzdz8BsN3S7ZBjH7rk0bGxi6OsVhmGFHaNKEfaOM7EkuhS9Cwp8Iv95pSKVEJ5joVQwYsSyna%2Ft0NCndU1TYg871tJJdIFQRb4vBE7FKmIqP%2FRUalAAsGFYFeFaZWLQqNXrUjVfwrporrM%2FdY0s18Z70G1oDr
8%2BqwNpMNv%2BPw3%2FwE%3D"&gt;Example use case diagram on Diagrams.net&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Business Process Modelling Notation 🔁
&lt;/h2&gt;

&lt;p&gt;BPMN is a process-oriented modelling language, intended to represent the full business process supporting an IT system. It is based on flow charts, and likely to cover areas outside of the scope of a software or architectural change being investigated or proposed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;What is BPMN good for?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mapping out business processes underpinned by an IT system.&lt;/li&gt;
&lt;li&gt;Determining the different scenarios that may occur, and how a system should behave in each scenario.&lt;/li&gt;
&lt;li&gt;Communicating system behaviour to all business stakeholders, regardless of technical knowledge.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;BPMN has four main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flow objects&lt;/strong&gt;&lt;br&gt;
These can be events (triggers for the beginning or end of a flow), activities (processes or sub-processes undertaken by a user or system within the flow), or gateways (decision points where the flow can branch off in multiple directions).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Connecting objects&lt;/strong&gt;&lt;br&gt;
These can be flows of activities or messages (denoted by a solid or dashed arrow, respectively) or associations (links between an artefact and a flow object).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Swimlanes&lt;/strong&gt;&lt;br&gt;
These can exist on two levels: Pools (groupings of roles such as a function or organisation) and, within them, lanes (individual roles within the process).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Artefacts&lt;/strong&gt; &lt;br&gt;
These can be data objects (information required for a process), groups (which link together activities in a process), or annotations (textual descriptions that give additional context).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://viewer.diagrams.net/?tags=%7B%7D&amp;amp;highlight=0000ff&amp;amp;edit=_blank&amp;amp;layers=1&amp;amp;nav=1&amp;amp;title=Example%20Business%20Process%20Flow%20Diagram.drawio#R7V1bc9o4FP41PGYH381jQtJuZjYznbKdbvdN2ALUGIuVRYD%2B%2BpV8wbZksCD4Amk7k7GFb5zz6dM5n47MwBgvt58JWC1esA%2BDgT70twPjcaDrjquzv7xhlzSYhp00zAnykyYtb5igXzBtHKata%2BTDqHQgxTigaFVu9HAYQo%2BW2gAheFM%2BbIaD8l1XYA6lhokHArn1O%2FLpIml1dSdv%2FxOi%2BSK7s2aPkk%2BWIDs4%2FSbRAvh4U2gyngbGmGBMk63ldgwDbrvMLsl5nw58un8wAkOqcgIFu29f%2F8Xj2ff1cPQ8%2BfyKFuO79CpvIFinXzh9WLrLLEDwOvQhv8hwYDxsFojCyQp4%2FNMNczlrW9BlwPY0tjlDQTDGASbxucaTzf%2Bz9ogS%2FAqzT0IcstMf0ntDQuH24JfS9qZiEIN4CSnZsUOyE4apdVN4aXa6v8mdpelZ46LgKcdMG0GKkPn%2B4rkR2UZqxxNsqks2%2FQpnkBBIJNtGG7QMQGyMBSboFw4pCBQtzY6dpFdStzz7ZBj%2F69r8%2BrAp8xv1kGZXYfwB640MolVCKjO05V1AtLJvQdc3q6zs6lPDtlWtfBhHB03vlC3vyIa3KsxuNWV1U8HqoX%2FPGZnteQGIIuSVjV0mGrhF9J%2F0E779g7f%2FYaV7j9vCYY%2B7dCe5JfQlSlexNHtWvCYerIOW7BFFmxMYAIreys9W5Yj0Dl8wYk%2B9d7huCn1N7EHJ46dnFQcC4UKaVb7QyCpfhwIyh1S6TgyL%2Fbc%2BHymWhJRnn7kHzbgpvQCFiI2%2BbDOEDAuHByPt5MFoNpvpnlfVWX17altNdlbR5LZVQZMV0LGb6q625ITvhNkyRikfq2IPBJDSikHrSn1g6D3zgfMRKDM1csIpdYDsilpNTYDGmcxq2iJFt8qsrgSoF%2FAq9OkZwcvYvxTB%2BNEJ9DDxuSt3EYXLW%2Bnt5qhnvX3Uh97OLEp2%2FxR3Cmfx3fy0eK8tlrAVWcLtlCWcA8nOqTRhCSOR1jJPZLcrYPHbygfi8I%2FCGSZLZjEc3gorOHpt1tQuK2hVAowdsNs%2BILYxp7ExkoZp1jBh%2BOBwAiHnbRj%2FXXGwRNmh7Fmm%2BelDZjT2l0XZmPKYejiNQ21m9zWJ9z1EvAAWzy7cXPA88wgtu7dS2yn6PG0CAZqHnNuYB1lYaTxw%2F%2FJg%2Fz79YIl8PziUlZfJj6sfJU3Dtsfj9rLtfTxfBI5ZARy9MeDIKlMBOAWH2f%2BtcYqifKsaV%2FceY0fEAoM6HK2QlwBpg9i3Z%2FhZk7d4nwUTFIRzdmvx2gcu%2BRtwBwDnChmjWxG%2FaK0CTtbVvuZjxQsIwRwuk5ByUh1JXr3YKQwe%2B6yhNHpUs0BjXpF1txdAvUUpxPfhEs%2F57AzyInkkX%2BDldB2dMYpD%2B8Ao7oym6jY%2FAjbl4F6XHeFWuMFtzAuypjUmMAmo5EzrWkKpGifoI7PkBKPzWEoWtSRjn5hiZemSdlq6lKVm2XaTQkzGy7U5VkYVtUlWguausixbGPiMM5MsR5iauhNHxqaTLFneu%2Ff9coZFMXf5FkUUhfNb5gopde6eLGSx7L1kkXX6fUf%2FMcjVmU47fWd92RDcLgodyoqJeCGRFZruzLJ69wmgYM3zoV6mK831ZKOanguAqpruF%2F11uWILWcuarD0PRnKke%2BOesYT54VHXnjkqbZ2nUDxCD0WxHrkXvI
ZVwkIiVKDQZ04qKRWUoFif%2BNhaVx2S3DKSqsSuVrWHbGwu5lc4jNYBBSnf37LOYKvrDKPGPKBSVdXbcGn%2FsLXhUqepjzh9LOoVqtGSYZUzc6fdYEmXNam%2FGe3Oy9NL15LZJNhRVkE6n1HSZS1qAjk48Jp67B6y5fsoANZYXZrd71gA1C8vPV3D7L4qreqq0pPeKf9a4pzLmfxrC9JTy8mqLitPz%2FFU%2Fp5%2FeVHfEM%2Fi%2FZ%2FQu7IZ%2FuPUIOp%2BFXMDLRNyY0rTbXCDaoVgAuvOuEFQMMVIW5kbxElENW5g6AC7wmFZ9ntQeRMXZgyHxx9LON4qHc42kge4LE9dviSuE5B3pq4e8PHpmBTGPU1cJdIUKNtAWTb4HkUZ83yW9mNCF3iOQxA85a0CzvJj%2FsJ4lULxJ6R0ly5gBGuKy0BtnEeNbHVkHY8aB0CrjMb3rc6SVcFCGQndrW4jQZG6VIWG02qGYuiS3b8Q%2FIZ8npUD%2Fw15fCOekMxjxBsJCV2RdzqPCQ1ZTnvA%2BJXfahWzaFJJdSP2F6cBu4%2FJjcaWK54iZ%2BbhSh6h%2FCgFKB3F5OpjSacrIsVqbmm9jXKtSF1lf8MZu1Gl2SVl3fuKEBS%2BIZoW41%2FfyrzTCLoHBKEi6F1qFfnM9WC16aeuZVqNBiqaZoq2lyuNW11JbvRiXWRv9ZIsdKjn5k61VEeYy5KEDlVudoW1mZqovDTNzR9YvusKPCNxYDdH54Fn5NRcqGnwqEhctzCMiIbWuh5FMrr53W2767bapbqtdKGGu61ZJVLxiTIJQv2oEGusW4uzFRVxeas1haYuOebe8%2BDq4zlGQdJq1zOyovVcXD8uzTyD2G8g9CoE3ytNY6Voo%2FM01uzFa7l6m0uZyrlUp6%2Bn0TRXpOEzkylNFxdSKE71XWxklZWu%2B3Ty4WMRuCM6onMCr1K83lmu%2FwKjCMx5sf1wCukGwqRwHweH6vYX4C1%2BwwCIFnHVfoDCD16qXwMjTVRHqoacVmv1zSr5LnFgtAJh5sD0lajcyfGk4wSShAb2vi4efSREUKjqJzgVzo3Hu1Hmps7L%2FO%2B0TGTOilhNOWfeF7q247teLIo8d1osZQ3legtNMUpwFYOEbtdaioKreXaFmlBbbbb8dipTRTPrLQgviLtsBXnPq6ZF4NnnAk9U%2Bm3F2shLAS%2BzR9PAGyoC77RkqSn2y5xSj8Je0Z9uCW81Vl48JQTmpuLqqVOLIV0hp8vmtQ6mbuK7RU883tGOHy%2B%2BHlc4vpniTEvh9wfa7HJlrtc66nK6KvEnLuqqy7liKfu562VcQTczW14wY%2BkSCu%2BPlOH1I9UcvD8DER3oNPfKALab%2F9JJ4rf852KMp%2F8B"&gt;Example business process flow diagram on Diagrams.net&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agile</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Epics: Purpose, structure, and content</title>
      <dc:creator>Joe Wallace</dc:creator>
      <pubDate>Mon, 07 Nov 2022 16:36:29 +0000</pubDate>
      <link>https://dev.to/oneadvanced/epics-purpose-structure-and-content-3k8m</link>
      <guid>https://dev.to/oneadvanced/epics-purpose-structure-and-content-3k8m</guid>
      <description>&lt;p&gt;✅ Epic User Stories, often called “Epics”, are simply User Stories which are too large to fit into an individual development timebox (“Sprint”, in the Scrum methodology).&lt;/p&gt;

&lt;p&gt;❌ Epic User Stories are not themes which group together individual User Stories, new product features such as modules, or buckets of time to collect recorded time against.&lt;/p&gt;

&lt;h2&gt;
  
  
  The purpose of an Epic 🎞
&lt;/h2&gt;

&lt;p&gt;The purpose of an Epic is similar to a typical User Story - an informal description of a user’s requirement, written in natural language relevant to the business domain. However, an Epic is less constrained by development capacity or process, so it may describe a requirement a little more broadly and loosely than a typical User Story would.&lt;/p&gt;

&lt;h2&gt;
  
  
  The structure and content of an Epic 🧬
&lt;/h2&gt;

&lt;p&gt;Epic User Stories, much like any other User Story, typically begin with a card describing the user’s problem, such as the “Connextra” template:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As a user persona&lt;br&gt;
I want capability&lt;br&gt;
So that business benefit&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;They will also have acceptance criteria like a typical User Story, to inform the test cases that are run to verify that the conditions to fulfil the user story have been met, as in the “Gherkin” template:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Given pre-condition&lt;br&gt;
When user action&lt;br&gt;
Then outcome&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  User Story mapping 📌
&lt;/h2&gt;

&lt;p&gt;A process known as User Story mapping is often used by software development teams to sketch out a user’s journey through the system, break that down into Epics, and then break the (must have) Epics within it down into User Stories (of any priority).&lt;/p&gt;

&lt;p&gt;See an example below for a holiday story map. Note that this might differ based on the scenario at hand and the needs of the customer. For example, if the customer had experience operating a boat, they might rent their own, and then a captain is not a must have!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://viewer.diagrams.net/?tags=%7B%7D&amp;amp;highlight=0000ff&amp;amp;edit=_blank&amp;amp;layers=1&amp;amp;nav=1&amp;amp;title=Example%20Story%20Map.drawio#R7Vxdc5s4FP01fuyMARs7j4mbNvuQmW3Jbmf2TYaL0VhGVBZxvL9%2BJb4cuLSTnQZLxn0yvoCFzuF%2BnIvwxFvtXj4LkiWPPAI2cafRy8T7OHFdx5%2FeqA9tOZaWmeeXho2gUXXQyRDQf6EyTitrTiPYtw6UnDNJs7Yx5GkKoWzZiBD80D4s5qw9akY2gAxBSBi2fqORTErr0l2c7A9AN0k9suNXE96R%2BuBqJvuERPzwyuTdT7yV4FyWW7uXFTANXo1Led6nH%2BxtLkxAKt9yAmNpsADCj8FN8vgX34vv%2F3z5UP3KM2F5NeEnQZ4LDiXXEKaRRk3wnbYkoJmEvaQpkZSn1cTksUZL8DyNQA84nXh3h4RKCDIS6r0HdX8oWyJ3TH1z1GZMGVtxxkVxrhfNYRnNlH0vBd%2FCqz1Ld%2B35vtpTXSwICS8%2FRMFpsFU3JfAdSHFUh1QnLCs2ju2vhxO3Tk1Y8opXv7KR6nbaND98QlxtVKD%2FDwJcRMBtGPLdjkeXirDvWgax1wOxpM9UUhVYLg9fZ2obwDMcRBLY9YCrJi3bCLaRSnkKHVgrE2F0k6qvoQIJlP1OQ0hVmL6tduxoFOlheilrkxrzVAbVRTn19zLxOMv34ejDvE3RzRxR5Pcw5A3F0BwxNHF9JqvJt0jyv%2Be83vFhX8Byqw5w3OylwKber7Y2%2BvMryFykOjsIQtM6b5SpglCRcSHrsdSll8OVZ%2F6i771izcW%2BGMexG4Z9vhj5a38%2BTDZxFqZd0UdEf2J62F8NdDaA7c5sQ3uBM0t9x08TLssyqvCEtLgad7qGmAttiFlpuXhWuunePCtLxMrfChhSVP9CD6gvDmA7AvAX1gWgGwR%2BkHKxVWfSdDMCxFH1ZR7yWiq%2FwvyOE1mkZJqNAXPHutvc6RHOuuYt57UdEfaedVnXwZp5dQzL%2BDIa2OdT62DHOvo%2Bo%2BFvlXfiaGlY5jlYiQdJLiXTcWmd7yc%2F7upV5erFO87MPr%2FB4juQXPQ1n67Wc9wels7rOVg4v1uH5DHf6x94IM%2FVkXf5sRyKajdUNG1BC%2FTzN0kiAsu41yX9cAnreJAmiTvHUfLMHol1%2B7uRHSQ8Z1GL7gCIpneaAtHyE0iYqA%2Bu4q4wQTo40RwWfaTf%2BAuPDNMZa6pIc6TjtsC7kb5CnH%2BFWMA%2B2Sm0NPU8bRJt1SY10BFdhtCff9fL%2BWw%2BHYT3mWucd9yR%2BB3Z3ymydzuy5kN7Pdjv0D5caO%2By3khVc6zjbsxwPv4ZZKOjmm5%2FMVWeC30zoB7%2Ftbi%2F%2BSTv4tbQkO7%2FCIS1s7s5yg35vvHFBy5uSw3n%2Bw%2B8uAPWZLPR68bUFldYR9fj4ubrORf3uIZ08VuqE3vI04jq9VhFk3mszt3tnVlA9oALV669eO%2BybUHxPmATrse1nzhlhhg2488WFOoDdt6QP5fs1Ks%2Bmpx9Lf5sQTWOW24VSckp6KoCSiXVfcMTKBCO%2BqHG5TPSXRplQYTFzbC6d3mi5L5hYLop4qPWNkUZxA9a6QiunxtefKeyy455f6mfbv2UnU8CCuEhgGxjsh94CaERJszXnR5uKF0lE%2BYrBg93dK4qh3RXeJrPIR5uuDTFtUVJ5DwldpceC5JI3xogRM83GtMRwm9B5sAdC5w5VgxIqh8RXXzC6BJgQcLATQRMQKBnqD63cBwjCcab8h7W%2BVeVtdFbAhak7TeI8RUXQr9Er7j4ntNMr14ZIxkWJGmswxEZf6QK
lTyUfNg8YYgD85m6vgmuNkZ136oxH6NmWHYjSgI1VqkriiUYIclk9d5xUj6QJaOIWc7SsSxmzbAS7xEW7YWPFy8wEA8WxK036e%2Fm4dYYKLBNY8ywxsYa44nnOmMUf2A0KA3nERmYBuMqY4a1NkoXT%2FVj%2FBEm8O6rmRYkcCy%2BcU1FtzpR61kSNmyAOhMP3Xc1LeAB628coP4kYbF2bcrytFi3nAnQs47GEK26vmFB%2FYQFOObkSw659o79Vr%2FGfFqEND5CLCik3iDCH4DtYBSCohukzDtE%2FQ7ozwvZ2ywTPBOUSO0XIeMyGfqfRM5U13YZMe8Rc6y%2FexjRTy90KXUAUr7ocloeOwq90eVlSL2hvp7%2BAbTY9%2Bp%2FVL37%2FwA%3D"&gt;Example story map on Diagrams.net&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agile</category>
      <category>beginners</category>
    </item>
    <item>
      <title>GitHub Reusable Workflows and Custom Actions</title>
      <dc:creator>Saurabh Shah</dc:creator>
      <pubDate>Thu, 06 Oct 2022 09:47:42 +0000</pubDate>
      <link>https://dev.to/oneadvanced/github-reusable-workflows-and-custom-actions-3cbk</link>
      <guid>https://dev.to/oneadvanced/github-reusable-workflows-and-custom-actions-3cbk</guid>
      <description>
</description>
      <category>devops</category>
      <category>github</category>
      <category>githubactions</category>
      <category>aws</category>
    </item>
    <item>
      <title>How we moved from Artifactory and saved $200k p.a. Part 5 of 5 - Reaching our goal</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Wed, 28 Sep 2022 15:39:54 +0000</pubDate>
      <link>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-5-of-5-reaching-our-goal-23dh</link>
      <guid>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-5-of-5-reaching-our-goal-23dh</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Welcome back to the final part of our 5-part series on 'How we moved from Artifactory and saved $200k p.a'.&lt;/p&gt;

&lt;p&gt;If you are just joining we recommend jumping back to the beginning and starting from there.&lt;/p&gt;

&lt;h2&gt;
  
  
  The deadline
&lt;/h2&gt;

&lt;p&gt;On 19th August 2022 at 21:56, the final migration request was completed: a proud moment. The migration of approximately 1.5 million artefacts had been planned, transferred and verified. We had met our tight deadline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PjoMmuvg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-5/completion.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PjoMmuvg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-5/completion.png" alt="completing the final migration request" width="637" height="63"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;completing the final migration request&lt;/p&gt;

&lt;p&gt;A short while after, on the 31st of August 2022, our subscription with JFrog expired and, in turn, all access was revoked. This was the point of no return and the milestone about which we were most apprehensive. The team were ready for the inevitable influx of tickets. Largely (and perhaps surprisingly), though, things stayed relatively quiet. Yes, we had some issues; we expected these and had readied ourselves, but nothing much arrived. A positive sign!&lt;/p&gt;

&lt;h2&gt;
  
  
  A pleasant surprise
&lt;/h2&gt;

&lt;p&gt;This, perhaps, provides a good opportunity to recall the goal we set for ourselves:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To migrate all requested artefacts from Artifactory without losing any, writing custom tooling as necessary&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Approaching 4 weeks since our Artifactory subscription ended and the Advanced Artefacts service went live, post-migration support has consistently been quiet, with zero packages lost to date.&lt;/p&gt;

&lt;h2&gt;
  
  
  In summary, some interesting facts and stats…
&lt;/h2&gt;

&lt;p&gt;To conclude we thought it would be nice to summarise some of the key points from the process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1.5 million artefacts analysed&lt;/li&gt;
&lt;li&gt;24.5 TB of data migrated without losing a file.&lt;/li&gt;
&lt;li&gt;222 tickets processed. 215 completed, 7 for phase 2.&lt;/li&gt;
&lt;li&gt;18 clinics

&lt;ul&gt;
&lt;li&gt;The first clinic had over 130 participants&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;772 users interacted with through our internal MS Teams channel:

&lt;ul&gt;
&lt;li&gt;109 posts&lt;/li&gt;
&lt;li&gt;590 replies&lt;/li&gt;
&lt;li&gt;67 mentions&lt;/li&gt;
&lt;li&gt;115 reactions&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Provided dedicated support channels with fast response (typically less than an hour during UK hours)&lt;/li&gt;
&lt;li&gt;Days sleeping, thinking and dreaming of artefacts - 90+&lt;/li&gt;
&lt;li&gt;14+ hour days - many many many&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The numbers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;1,200,624 objects in S3&lt;/li&gt;
&lt;li&gt;25 TB storage consumed&lt;/li&gt;
&lt;li&gt;21.5 MB average object size&lt;/li&gt;
&lt;li&gt;15,307.69 EC2 usage hours&lt;/li&gt;
&lt;li&gt;2,540 migration jobs run with SSM&lt;/li&gt;
&lt;li&gt;888 EC2 spot instances used&lt;/li&gt;
&lt;li&gt;56 CloudFormation stacks created&lt;/li&gt;
&lt;li&gt;10,539 ECR Authorization tokens requested&lt;/li&gt;
&lt;li&gt;5,027 ECR Image pushes&lt;/li&gt;
&lt;li&gt;7,968 CloudWatch log streams created&lt;/li&gt;
&lt;li&gt;680,263 CloudWatch PutLogEvents called&lt;/li&gt;
&lt;li&gt;100% Spot instance utilisation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The costs and savings
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The 3-month spend over the project, including the migration workers = $8,400

&lt;ul&gt;
&lt;li&gt;Our original budget estimate = $6000&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;The 3-month S3 storage costs = $407.05&lt;/li&gt;
&lt;li&gt;The 3-month CodeArtifact costs = $161.19&lt;/li&gt;
&lt;li&gt;The 3-month ECR costs = $115.94&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Using that as a basis for calculating our costs for the upcoming year, we estimate savings of $200,000 per annum versus our Artifactory subscription&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U4iXv_tA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-5/mathieu-stern-1zO4O3Z0UJA-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U4iXv_tA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-5/mathieu-stern-1zO4O3Z0UJA-unsplash.jpg" alt="savings image" width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;/li&gt;
&lt;/ul&gt;
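&lt;p&gt;As a quick sanity check on those figures, annualising the quoted 3-month run costs (S3, CodeArtifact and ECR) shows just how small the ongoing bill is:&lt;/p&gt;

```python
# Back-of-the-envelope check using the 3-month figures quoted above.
three_month_costs = {
    "s3": 407.05,
    "codeartifact": 161.19,
    "ecr": 115.94,
}

quarterly = sum(three_month_costs.values())
annual = quarterly * 4  # extrapolate the quarter to a full year

print(f"quarterly run cost: ${quarterly:,.2f}")
print(f"estimated annual run cost: ${annual:,.2f}")
```

&lt;p&gt;The difference between this ongoing run cost and the former Artifactory subscription accounts for the circa $200,000 per annum saving.&lt;/p&gt;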

&lt;h2&gt;
  
  
  That’s all folks
&lt;/h2&gt;

&lt;p&gt;We are hugely proud of our efforts and what we have delivered. We would like to thank our engineering teams for their support and engagement. This was a tremendously challenging project, involving complexities, long hours and hard work, but overall it was incredibly fun and rewarding in equal measures.&lt;/p&gt;

&lt;p&gt;Thanks for reading! Please return your chairs to an upright position and thanks for flying with Air Advanced Artefacts ;)&lt;/p&gt;

</description>
      <category>aws</category>
      <category>artifactory</category>
      <category>codeartifact</category>
      <category>ecr</category>
    </item>
    <item>
      <title>How we moved from Artifactory and saved $200k p.a. Part 4 of 5 - Migration</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Wed, 28 Sep 2022 15:38:55 +0000</pubDate>
      <link>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-4-of-5-migration-5h8j</link>
      <guid>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-4-of-5-migration-5h8j</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Welcome back to Part 4 of our 5-part series on 'How we moved from Artifactory and saved $200k p.a'.&lt;/p&gt;

&lt;p&gt;If you are just joining we recommend jumping back to the beginning and starting from there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migration preparation
&lt;/h2&gt;

&lt;p&gt;The following steps were performed in readiness:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Publish a &lt;strong&gt;checklist&lt;/strong&gt; of steps that teams could follow which would help identify product assets that need migrating&lt;/li&gt;
&lt;li&gt;Research how we could append custom metadata to existing artefacts

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;tagging&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Study the Artifactory REST API and AQL documentation&lt;/li&gt;
&lt;li&gt;Research metrics that could be queried for comparison and verification purposes:

&lt;ul&gt;
&lt;li&gt;storage summary&lt;/li&gt;
&lt;li&gt;size&lt;/li&gt;
&lt;li&gt;number of artefacts&lt;/li&gt;
&lt;li&gt;number of files&lt;/li&gt;
&lt;li&gt;number of folders&lt;/li&gt;
&lt;li&gt;creation timestamps&lt;/li&gt;
&lt;li&gt;deployment ordering&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Analyse the repository path structure and flag outliers&lt;/li&gt;
&lt;li&gt;Research publishing options for native CLIs through a lens of using containers to perform the workload

&lt;ul&gt;
&lt;li&gt;discovered requirements which ruled out AWS Batch and ECS&lt;/li&gt;
&lt;li&gt;e.g. Windows/Windows containers&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Build a categorised data source to map existing repositories, including:

&lt;ul&gt;
&lt;li&gt;dev or release repository&lt;/li&gt;
&lt;li&gt;Advanced Artefacts repository name using new dev/release convention&lt;/li&gt;
&lt;li&gt;flag large repository&lt;/li&gt;
&lt;li&gt;flag empty repository&lt;/li&gt;
&lt;li&gt;flag large numbers of artefacts in the repository&lt;/li&gt;
&lt;li&gt;flag unsupported/problematic repository package types&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Consider failure scenarios and recovery option features:

&lt;ul&gt;
&lt;li&gt;dry-run&lt;/li&gt;
&lt;li&gt;ability to replay&lt;/li&gt;
&lt;li&gt;debug levels&lt;/li&gt;
&lt;li&gt;resume (using offsets)&lt;/li&gt;
&lt;li&gt;clear progress counters&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Log decisions using Architecture Design Records&lt;/li&gt;
&lt;li&gt;Review Architecture Design Records&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We spent a considerable amount of our time budget on the planning and preparation stages. This served us well.&lt;/p&gt;
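&lt;p&gt;To illustrate the metric-gathering step listed above, the sketch below pulls a storage summary from the Artifactory REST API and aggregates per-repository file counts. It is illustrative Python rather than our actual Go tooling, and the &lt;code&gt;/api/storageinfo&lt;/code&gt; endpoint and field names are assumptions based on the documented Storage Summary Info API, which may differ between versions:&lt;/p&gt;

```python
import json
import urllib.request

def summarise_repos(storage_info):
    """Aggregate per-repository file counts from a storageinfo-style payload.

    Assumes each entry has 'repoKey' and 'filesCount' fields.
    """
    total_files = sum(r.get("filesCount", 0) for r in storage_info)
    by_repo = {r["repoKey"]: r.get("filesCount", 0) for r in storage_info}
    return total_files, by_repo

def fetch_storage_info(base_url, api_key):
    # Storage Summary Info endpoint; response shape may vary by version.
    req = urllib.request.Request(
        f"{base_url}/api/storageinfo",
        headers={"X-JFrog-Art-Api": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["repositoriesSummaryList"]

if __name__ == "__main__":
    # Hypothetical sample payload, for demonstration only.
    sample = [
        {"repoKey": "product-dev", "filesCount": 1200},
        {"repoKey": "product-release", "filesCount": 300},
    ]
    total, per_repo = summarise_repos(sample)
    print(total, per_repo["product-release"])
```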

&lt;h3&gt;
  
  
  Checklist
&lt;/h3&gt;

&lt;p&gt;To facilitate the process, we created a preparation checklist intended to give all impacted products and teams a clear and concise set of preparatory steps that would (hopefully) heighten awareness in the appropriate areas. The main points it covered were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Begin ASAP!&lt;/li&gt;
&lt;li&gt;Determine a complete list of impacted builds/artefacts which needed to be supported in production&lt;/li&gt;
&lt;li&gt;Review all CI/CD pipelines&lt;/li&gt;
&lt;li&gt;Review all runbooks (automated and manual)&lt;/li&gt;
&lt;li&gt;Review all production deployments&lt;/li&gt;
&lt;li&gt;Review all disaster recovery processes&lt;/li&gt;
&lt;li&gt;Identify all artefacts that will need to be migrated through property sets tagging in Artifactory&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tagging
&lt;/h3&gt;

&lt;p&gt;However, before we look at these, we need to cover tagging. Artifactory has a useful feature called Property Sets. We decided early in the process (decision log - check) to make wide use of custom properties through Property Sets. Our requirements were principally:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Artefact hygiene - remove unused or unsupported packages, waste&lt;/li&gt;
&lt;li&gt;Docker images underlying OS type - differentiate between Linux and Windows containers&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Whilst it is possible to read the manifest for each Docker image and determine the existence of foreign layers, we were focused on stability and repeatability as well as speed. We wanted to utilise the native tooling as much as possible and felt that building custom tooling to handle the supported schemas/configurations would add unwelcome complexity.&lt;/p&gt;
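&lt;p&gt;Tagging itself boils down to a single API call per artefact: Artifactory's Set Item Properties endpoint takes the properties as a semicolon-separated query string on a PUT request. A sketch of the helper might look like this (illustrative Python rather than our actual Go tooling; the hostname and repository names are made up):&lt;/p&gt;

```python
import urllib.parse
import urllib.request

def properties_url(base_url, repo, path, props):
    """Build a Set Item Properties URL:
    PUT {base}/api/storage/{repo}/{path}?properties=k1=v1;k2=v2
    """
    pairs = ";".join(f"{k}={v}" for k, v in sorted(props.items()))
    return f"{base_url}/api/storage/{repo}/{urllib.parse.quote(path)}?properties={pairs}"

def tag_artefact(base_url, repo, path, props, api_key):
    req = urllib.request.Request(
        properties_url(base_url, repo, path, props),
        method="PUT",
        headers={"X-JFrog-Art-Api": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    url = properties_url("https://example.jfrog.io/artifactory", "docker-dev",
                         "myimage/1.0.0", {"migrate": "true", "os": "windows"})
    print(url)
```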

&lt;h3&gt;
  
  
  Failure recovery
&lt;/h3&gt;

&lt;p&gt;The focus for migrations was specific to our end goals and where appropriate we tried to reuse our tools across different package types. Ultimately, the project as a whole needed to be successful once, with many smaller eventual successes along the way. Building effective tooling was critical to the success of the project. We needed to use software to automate as much as we could whilst also allowing us to clearly aggregate the activities we had performed which in turn could be used as means of verification.&lt;/p&gt;

&lt;p&gt;Key features such as dry-run mode, debug levels, and the ability to replay/resume from an offset were essential. The sheer volume of different files, paths and conventions/structures meant that we could only analyse so far before we needed to begin. We fully expected the need to debug activities during the migration phase, and that these tools would really support us.&lt;/p&gt;
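&lt;p&gt;The flag surface for such a tool might look like the sketch below. Our actual CLIs were written in Go with spf13/cobra; this Python/argparse version is purely illustrative and the flag names are assumptions:&lt;/p&gt;

```python
import argparse

def build_parser():
    p = argparse.ArgumentParser(prog="migrator")
    p.add_argument("--repo", required=True, help="source repository to migrate")
    p.add_argument("--dry-run", action="store_true",
                   help="log every action without transferring anything")
    p.add_argument("--offset", type=int, default=0,
                   help="resume from this position in the artefact listing")
    p.add_argument("--limit", type=int, default=500,
                   help="number of artefacts to process in this run")
    p.add_argument("--debug", type=int, default=0, choices=[0, 1, 2],
                   help="debug verbosity level")
    return p

if __name__ == "__main__":
    # Simulate a resumed dry run from offset 1000.
    args = build_parser().parse_args(["--repo", "product-dev", "--dry-run", "--offset", "1000"])
    print(args.repo, args.dry_run, args.offset, args.limit)
```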

&lt;h2&gt;
  
  
  Migration tooling
&lt;/h2&gt;

&lt;p&gt;Large network bandwidth and scalable compute resources were at the top of our platform requirements. The choices taken here would filter down into the tooling we created.&lt;/p&gt;

&lt;p&gt;As an organisation, we place a strong emphasis on Infrastructure as Code (IaC) so creating processes and workflows around &lt;a href="https://aws.amazon.com/systems-manager/"&gt;AWS Systems Manager&lt;/a&gt; using the AWS CDK is expected. We created stacks that enabled us to scale our migration efforts on demand which included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Migration stack&lt;/li&gt;
&lt;li&gt;EC2 Spot Fleet worker node stacks

&lt;ul&gt;
&lt;li&gt;Windows&lt;/li&gt;
&lt;li&gt;Linux&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;SSM Documents library stacks&lt;/li&gt;
&lt;li&gt;Custom command line interface tools

&lt;ul&gt;
&lt;li&gt;archive runner&lt;/li&gt;
&lt;li&gt;migrator&lt;/li&gt;
&lt;li&gt;migraterunner&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u4IAH2AV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-4/migrator.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u4IAH2AV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-4/migrator.png" alt="migrator cli" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Concurrency was an interesting feature. The default was to plan for concurrency: if we ran tasks concurrently across our cloud resources then surely this would accelerate the process? Frustratingly, this was not the case. Digging into the details of AWS CodeArtifact, we found areas where the order in which packages are uploaded matters, particularly for npm and Maven. Furthermore, package managers have different ways to determine the latest version, and support for this in AWS CodeArtifact was not universal during our project. We also discovered places where Artifactory supported deprecated features which we happily used, yet AWS CodeArtifact did not. This resulted in us essentially migrating packages serially, utilising cursor-like offset and limit features to optimise our efforts.&lt;/p&gt;
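&lt;p&gt;A minimal sketch of that serial, order-preserving batching follows (illustrative Python rather than our actual Go tooling; the field names are assumed):&lt;/p&gt;

```python
def plan_batches(artefacts, limit):
    """Order artefacts by creation time (upload order matters for npm/Maven
    'latest' resolution) and split them into serial batches of size 'limit'."""
    ordered = sorted(artefacts, key=lambda a: a["created"])
    return [ordered[i:i + limit] for i in range(0, len(ordered), limit)]

if __name__ == "__main__":
    artefacts = [
        {"path": "pkg/2.0.0.tgz", "created": "2022-07-02T10:00:00Z"},
        {"path": "pkg/1.0.0.tgz", "created": "2022-07-01T09:00:00Z"},
        {"path": "pkg/1.1.0.tgz", "created": "2022-07-01T12:00:00Z"},
    ]
    # Batches are processed one after another, never concurrently.
    for batch in plan_batches(artefacts, 2):
        print([a["path"] for a in batch])
```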

&lt;h3&gt;
  
  
  Migration workers
&lt;/h3&gt;

&lt;p&gt;The migration workers performed the brunt of the migration tasks and were orchestrated through AWS Systems Manager. The (awesome) network throughput enabled by AWS meant we were able to migrate hundreds of GiB of data per hour. This was crucial given that the order that most packages were migrated in was important.&lt;/p&gt;

&lt;p&gt;Using property sets that were laboriously applied by our engineering teams, the project was able to leverage the excellent Artifactory API as a catalogue/pseudo-state-machine through which workers could page their way whilst advancing the migration.&lt;/p&gt;

&lt;p&gt;In (often long-running) batches, workers would &lt;em&gt;pull&lt;/em&gt; packages to the worker’s local storage using the native package managers, then the documented AWS CodeArtifact/ECR/S3 tooling was used to &lt;em&gt;push&lt;/em&gt; packages to their new location. This is where container images became tricky, because pushing and pulling containers needs to be performed on a host running the relevant operating system. Whilst the AWS SDK documents that you can pull and push layers as mere blobs, the guidance was that this was a feature intended only for internal use, which was enough to ward us off.&lt;/p&gt;
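&lt;p&gt;The pull/tag/push flow for a single container image can be sketched as below. This is illustrative Python (our tooling was Go), the registry hostnames are invented, and dry-run is enabled so nothing is actually transferred:&lt;/p&gt;

```python
import subprocess

def retarget(image_ref, new_registry):
    """Rewrite an image reference to point at the destination registry,
    keeping the repository path and tag intact."""
    _, _, remainder = image_ref.partition("/")
    return f"{new_registry}/{remainder}"

def migrate_image(image_ref, new_registry, dry_run=True):
    target = retarget(image_ref, new_registry)
    commands = [
        ["docker", "pull", image_ref],
        ["docker", "tag", image_ref, target],
        ["docker", "push", target],
    ]
    for cmd in commands:
        if dry_run:
            print("DRY-RUN:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
    return target

if __name__ == "__main__":
    # Hypothetical source and destination registries.
    migrate_image("registry.example.com/team/app:1.2.3",
                  "123456789012.dkr.ecr.eu-west-1.amazonaws.com")
```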

&lt;h3&gt;
  
  
  Command line interfaces (CLIs)
&lt;/h3&gt;

&lt;p&gt;A handful of CLI tools were authored in &lt;a href="https://go.dev/"&gt;Go&lt;/a&gt;. These tools provided the brunt of the work where validation and custom logic were applied to transfer the different artefact types under the conditions of the project. AWS SSM documents were used in orchestrating the tools to transition the artefacts to the correct destination repositories via the appropriate worker node fleet. Much credit needs to go to the excellent &lt;a href="https://github.com/spf13/cobra"&gt;spf13/cobra&lt;/a&gt; commander Go module that was used in these CLIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migrating 1.5 million artefacts
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b99uwJaN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-4/xavi-cabrera-kn-UmDZQDjM-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b99uwJaN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-4/xavi-cabrera-kn-UmDZQDjM-unsplash.jpg" alt="lego image" width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Migration process
&lt;/h3&gt;

&lt;p&gt;We wrote our tooling to support a ticket-driven approach where teams could raise tickets to work with the implementation team and migrate their product packages in batches, working together to verify successful completion.&lt;/p&gt;

&lt;p&gt;This was important in allowing us to progressively work through the migration rather than a whole big-bang event occurring on an agreed date. Such an approach would be simply unworkable, requiring far too much alignment (thousands of pipelines updated in anticipation). Finding a date that would perfectly fit with hundreds of teams would be virtually impossible and crucially what rollback options would we have? Thus we decided this progressive approach would give us greater flexibility and the reassurance to engage team by team, product by product while checking off milestones on the route to completion.&lt;/p&gt;

&lt;p&gt;As things panned out, we ended up with a large backlog of migration activities ranging from a few packages to entire release repositories containing hundreds of thousands of artefacts and terabytes of data. The backlog was continually refined and tickets were resolved step by step with the owning teams. The team had overall responsibility for verifying the migration objective had been met before marking the work as complete.&lt;/p&gt;

&lt;p&gt;Resolving all migration tickets in the backlog would culminate in disabling write access for all standard users at a key, predetermined date, before swiftly proceeding, after a reasonable holding period, to remove complete access for all users, except system administrators. Any issues experienced along the way would be addressed in isolation, verifying with the team and making updates to our tooling as required.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;

&lt;p&gt;This was tricky. Reporting on the AWS CodeArtifact side is currently (09/2022) poor. CloudWatch metrics are entirely missing, and ultimately it was not at all straightforward to aggregate key metrics and statistics in the same way you can with Artifactory.&lt;/p&gt;

&lt;p&gt;We had to get creative, combining metrics queried from the Artifactory REST API with internal counters residing within our tooling to cross-reference our progress. By deploying webhooks and using DynamoDB to record the dates when batches of work were undertaken, we could replay activities using the DateTime offset to determine any deltas: cases where pipelines were not fully updated and artefacts were still being deployed to Artifactory after the migration activity for those artefacts had begun. This worked out well and, coupled with a reduction of write access permissions, enabled us to verify the process had worked to a point in time whilst giving us another window to replay the same batches for a much smaller delta of packages. Significantly, as we advanced through the project, confidence grew as we replayed actions over and over with consistent results. Testing our tooling was tricky, so having a dry-run mode was invaluable.&lt;/p&gt;
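&lt;p&gt;The delta detection described above boils down to filtering artefacts by creation time against the recorded batch completion timestamp. A minimal illustrative sketch (Python, with hypothetical field names):&lt;/p&gt;

```python
from datetime import datetime

def delta_since(artefacts, batch_completed_at):
    """Return artefacts deployed to the source after a recorded batch
    completion time, i.e. the delta that needs replaying."""
    cutoff = datetime.fromisoformat(batch_completed_at)
    return [a for a in artefacts if datetime.fromisoformat(a["created"]) > cutoff]

if __name__ == "__main__":
    artefacts = [
        {"path": "pkg/1.0.0.tgz", "created": "2022-08-01T09:00:00+00:00"},
        {"path": "pkg/1.1.0.tgz", "created": "2022-08-15T12:00:00+00:00"},
    ]
    replay = delta_since(artefacts, "2022-08-10T00:00:00+00:00")
    print([a["path"] for a in replay])
```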

&lt;h2&gt;
  
  
  Next up
&lt;/h2&gt;

&lt;p&gt;The migration has now been completed.&lt;/p&gt;

&lt;p&gt;Next up, we’ll take a look at whether we’ve reached our goal.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>artifactory</category>
      <category>codeartifact</category>
      <category>ecr</category>
    </item>
    <item>
      <title>How we moved from Artifactory and saved $200k p.a. Part 3 of 5 - The future is Advanced Artefacts</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Wed, 28 Sep 2022 15:36:42 +0000</pubDate>
      <link>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-3-of-5-the-future-is-advanced-artefacts-4d7j</link>
      <guid>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-3-of-5-the-future-is-advanced-artefacts-4d7j</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Welcome back to Part 3 of our 5-part series on 'How we moved from Artifactory and saved $200k p.a'.&lt;/p&gt;

&lt;p&gt;If you are just joining we recommend jumping back to the beginning and starting from there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Approach
&lt;/h2&gt;

&lt;p&gt;Having identified that we wanted to create a structured service we had to determine the best way to approach it.&lt;/p&gt;

&lt;p&gt;Our earlier analysis helped us identify the artefact types that we needed to support. Yet a remaining challenge was to identify how to support these and empower our development teams across the technologies and tools we use on a daily basis.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;The following architecture gives a high-level overview of the service components.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mnS8I1J_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-3/aa-architecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mnS8I1J_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-3/aa-architecture.png" alt="aa-architecture.png" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Repository convention
&lt;/h3&gt;

&lt;p&gt;Something that became apparent was that our Artifactory configuration was a disordered muddle which has since provided us with a harsh lesson about paying particular attention to the rollout of such platform tooling. It had never been implemented in a controlled or consistent way.&lt;/p&gt;

&lt;p&gt;Determined to avoid this at all costs, we decided to build naming conventions for each of our artefact types into our service. These would be implicit, removing ambiguity and personal preference from any decisions.&lt;/p&gt;

&lt;p&gt;Our products commonly have both &lt;strong&gt;development&lt;/strong&gt; and &lt;strong&gt;production&lt;/strong&gt; environments so it was decided that the service should mirror this and have just two conforming types of repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development&lt;/strong&gt; repositories would be where teams push build artefacts continually via Continuous Integration (CI). Then, when the appropriate levels of testing had been passed, the artefacts could be promoted into the corresponding &lt;strong&gt;release&lt;/strong&gt; repository.&lt;/p&gt;

&lt;p&gt;A benefit of this approach was ensuring a clear separation between development and release dependencies so that we could start to look at implementing automated housekeeping rules in the future. We do not need hundreds of development packages so why bother keeping them?&lt;/p&gt;

&lt;p&gt;This helps with our goals of enforcing convention and consistency, which in turn makes it easier to automate and roll out changes in the future.&lt;/p&gt;
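&lt;p&gt;By way of illustration, a convention like this can be enforced with a tiny helper. The exact naming scheme below is hypothetical, but it shows the idea of deriving repository names from the product, package type and one of the two conforming stages, rather than letting teams choose freely:&lt;/p&gt;

```python
def repo_name(product, package_type, stage):
    """Derive a repository name from a (hypothetical) naming convention.
    Only the two conforming stages are allowed."""
    if stage not in ("dev", "release"):
        raise ValueError("stage must be 'dev' or 'release'")
    return f"{product.lower()}-{package_type.lower()}-{stage}"

if __name__ == "__main__":
    print(repo_name("Payroll", "npm", "dev"))
    print(repo_name("Payroll", "npm", "release"))
```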

&lt;h3&gt;
  
  
  Infrastructure
&lt;/h3&gt;

&lt;p&gt;We needed to start building out our infrastructure to support the service.&lt;/p&gt;

&lt;p&gt;As we were utilising several AWS services, using &lt;a href="https://aws.amazon.com/cdk/"&gt;AWS CDK&lt;/a&gt; was the obvious choice. It allowed us to build the service quickly and was also easy to change when required.&lt;/p&gt;

&lt;p&gt;Going back to enforcing convention and consistency we leveraged &lt;a href="https://aws.amazon.com/servicecatalog/"&gt;AWS Service Catalog&lt;/a&gt; with several custom templates to help us create new repositories.&lt;/p&gt;

&lt;h2&gt;
  
  
  Delivery
&lt;/h2&gt;

&lt;p&gt;Providing a service that worked successfully meant that we had to look at how we delivered our software and consider what an exemplary software lifecycle looks like, as well as the platforms we needed to support.&lt;/p&gt;

&lt;p&gt;The following key areas were identified:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local Development&lt;/li&gt;
&lt;li&gt;Application Configuration&lt;/li&gt;
&lt;li&gt;Authorisation&lt;/li&gt;
&lt;li&gt;Continuous Integration (CI)

&lt;ul&gt;
&lt;li&gt;GitHub Actions&lt;/li&gt;
&lt;li&gt;Jenkins&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Continuous Delivery (CD)

&lt;ul&gt;
&lt;li&gt;Harness&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also were aware of some products using other technologies such as &lt;a href="https://azure.microsoft.com/en-us/products/devops/"&gt;Azure DevOps&lt;/a&gt; and &lt;a href="https://www.jetbrains.com/teamcity/"&gt;TeamCity&lt;/a&gt; that we would not directly support, but still had to take into consideration how they could access and use the service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tooling
&lt;/h2&gt;

&lt;p&gt;As developers, we are used to using tools to help make our day-to-day easier. If you look at any good service you will see that they typically have a range of tools to make interacting with them easy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Command Line Interface (CLI)
&lt;/h3&gt;

&lt;p&gt;We determined that creating a CLI would provide us with a centralised entry point for all of our delivery mechanisms and be flexible enough to allow it to work for any that we didn’t support.&lt;/p&gt;

&lt;p&gt;The CLI had to support multiple operating systems (Windows, Linux &amp;amp; macOS) and be easy to update and use as required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UJhsSiR5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-3/aa-cli.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UJhsSiR5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-3/aa-cli.png" alt="advanced artefacts cli" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We started analysing what functionality the CLI would require and quickly identified potential overlaps with native package manager and Docker commands. There is little point in us trying to write, maintain and support tooling that mirrors these: everyone knows how they work, and they are industry-standard tools.&lt;/p&gt;

&lt;p&gt;It was decided that our CLI would complement these. It would bridge the gaps and provide the functionality we needed for our service to work.&lt;/p&gt;

&lt;p&gt;We determined our key functional requirements were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authorisation&lt;/li&gt;
&lt;li&gt;Packages - get and promote&lt;/li&gt;
&lt;li&gt;Generic Artefacts - get, list, publish and promote&lt;/li&gt;
&lt;li&gt;Container Images - promote&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The most important feature of the CLI was authorisation into the service. Every developer and delivery mechanism must authorise into the service before they can use it.&lt;/p&gt;

&lt;p&gt;The CLI had to make authorisation easy and limit the impact on our engineering teams on a day-to-day basis.&lt;/p&gt;

&lt;p&gt;We looked at other open-source CLIs for inspiration and took the time to understand how this could be done effectively for multiple operating systems, shells and from a user or service perspective.&lt;/p&gt;

&lt;p&gt;In the end, we went with a multi-pronged approach and created mechanisms that allowed authorisation in several ways i.e. user, role and service level.&lt;/p&gt;

&lt;p&gt;Our security is of the utmost importance, and having authorisation tokens written to files was an absolute no-go. By default, everything would be applied to the running shell process so it could be used and then thrown away when finished. This was implemented across several different shells such as bash, PowerShell and the Windows command prompt.&lt;/p&gt;
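&lt;p&gt;One common way to achieve shell-scoped tokens is for the CLI to emit a line that the caller evaluates, so the token only ever lives in that shell process. A minimal sketch (illustrative Python; the variable name and shell set are assumptions, not our actual implementation):&lt;/p&gt;

```python
def auth_command(token, shell):
    """Emit a line the caller can eval so the token lives only in the
    running shell process (variable name is illustrative)."""
    var = "ARTEFACTS_AUTH_TOKEN"
    if shell == "bash":
        return f'export {var}="{token}"'
    if shell == "powershell":
        return f'$env:{var} = "{token}"'
    if shell == "cmd":
        return f"set {var}={token}"
    raise ValueError(f"unsupported shell: {shell}")

if __name__ == "__main__":
    for shell in ("bash", "powershell", "cmd"):
        print(auth_command("example-token", shell))
```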

&lt;p&gt;With our authorisation mechanism now in place, working and flexible enough to handle different operating systems and shells, the other features were much more straightforward.&lt;/p&gt;

&lt;p&gt;Generic Artefacts proved the most labour-intensive, if only due to having to implement an entire set of commands to allow complete artefact management.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other
&lt;/h3&gt;

&lt;p&gt;With the CLI now in place we used it to power any other tooling that would help accelerate our development teams.&lt;/p&gt;

&lt;p&gt;Our core Continuous Integration (CI) platform is &lt;a href="https://github.com/features/actions"&gt;GitHub Actions&lt;/a&gt;. We decided it was worth the effort to create a custom action that automatically downloaded the latest CLI, installed it and performed the required authorisation. This meant that teams could drop that action straight into their workflows and it would just work. Minimal change, maximum satisfaction.&lt;/p&gt;
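&lt;p&gt;A composite action along these lines might look like the sketch below. This is a hypothetical reconstruction, not our actual action: the download URL variable and CLI commands are placeholders:&lt;/p&gt;

```yaml
# Hypothetical composite action: install the Advanced Artefacts CLI
# and authorise the workflow run.
name: setup-artefacts-cli
description: Download the latest CLI, install it and authorise
runs:
  using: composite
  steps:
    - name: Install CLI
      shell: bash
      # CLI_DOWNLOAD_URL is a placeholder for the real release location.
      run: |
        curl -fsSL "$CLI_DOWNLOAD_URL" -o /usr/local/bin/artefacts
        chmod +x /usr/local/bin/artefacts
    - name: Authorise
      shell: bash
      run: artefacts auth --shell bash
```

&lt;p&gt;Teams then reference the action as a single step in their workflows, which is what made the change minimal on their side.&lt;/p&gt;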

&lt;p&gt;Next, we looked at &lt;a href="https://www.jenkins.io/"&gt;Jenkins&lt;/a&gt;. Although we are moving away from it, some products are still using it, so we spent a bit of time putting together some example pipelines showing how the CLI could be used and included them in our documentation for teams to follow.&lt;/p&gt;

&lt;p&gt;With our Continuous Integration (CI) tools covered, we needed to look at our Continuous Delivery (CD) ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://harness.io/"&gt;Harness&lt;/a&gt; is our Continuous Delivery (CD) tool of choice. It provides a flexible template engine, that we were able to utilise to create templates that could be reused across our teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next up
&lt;/h2&gt;

&lt;p&gt;With our new Advanced Artefacts service in place, we were ready to get on with the actual migration from Artifactory.&lt;/p&gt;

&lt;p&gt;Next up, we’ll walk through how we built our migration tooling, defined our process and performed the migration.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>artifactory</category>
      <category>codeartifact</category>
      <category>ecr</category>
    </item>
    <item>
      <title>How we moved from Artifactory and saved $200k p.a. Part 2 of 5 - Design</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Wed, 28 Sep 2022 15:33:26 +0000</pubDate>
      <link>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-2-of-5-design-3852</link>
      <guid>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-2-of-5-design-3852</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Welcome back to Part 2 of our 5-part series on 'How we moved from Artifactory and saved $200k p.a'.&lt;/p&gt;

&lt;p&gt;If you are just joining, we recommend jumping back to the beginning and starting from there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decision making
&lt;/h2&gt;

&lt;p&gt;The nature of larger projects such as this requires plenty of discussion and decision-making around temporary and permanent processes. We had lots of data to migrate and needed our decision-making process to be efficient. We decided to use &lt;a href="https://adr.github.io/"&gt;Architecture Decision Records&lt;/a&gt; to log the key implementation decisions, which significantly helped us deliver consistent support and guidance.&lt;/p&gt;
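
&lt;p&gt;A lightweight ADR is nothing more than a short, numbered document. As a sketch in the widely used Nygard format (the number, title and wording below are invented for illustration, not one of our actual records):&lt;/p&gt;

```markdown
# ADR-0007: Use an EC2 spot fleet for migration workers

## Status
Accepted

## Context
We need to move a large volume of artefacts, including Windows
container images, within a fixed migration window.

## Decision
Use a spot fleet of EC2 workers rather than AWS Batch or ECS.

## Consequences
Lower cost and full control over the workers; we must handle
spot interruptions and retries ourselves.
```

&lt;p&gt;Because each record captures the context and the options rejected, the decision steps can be recovered long after the discussion is over.&lt;/p&gt;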

&lt;p&gt;As it turned out, this method of logging was not onerous, and we ended up with records for around a dozen key strategic choices. One example was the decision to use a spot fleet of EC2 workers to perform the migration rather than something like AWS Batch or ECS. At first glance we expected to go with AWS Batch or ECS, but we had requirements to move resources such as Windows container images, and being able to easily recover the decision steps proved invaluable when we later created tooling to support this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Workshopping
&lt;/h2&gt;

&lt;p&gt;Workshopping commenced on the 10th of June 2022 and we had until the 4th of July 2022 to perform the required analysis, complete the design and implement our solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FsSVII3C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-2/kvalifik-5Q07sS54D0Q-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FsSVII3C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-2/kvalifik-5Q07sS54D0Q-unsplash.jpg" alt="workshop image" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Analysis of requirements
&lt;/h3&gt;

&lt;p&gt;One of the first items of business was to determine which artefact types were essential to support, which would be unsupported, and any transitions from unsupported types to corresponding supported ones. Then we needed to determine the options to migrate to, whilst fulfilling the necessary obligations to supported packages and platforms.&lt;/p&gt;

&lt;p&gt;Over the past few years, engineering at Advanced has been consolidating its default toolchain and programming languages. The intent is not to dissuade reviews of new or emerging options, but rather to add consistency in those we use and bring a larger collective intelligence to engineering as a whole.&lt;/p&gt;

&lt;p&gt;From analysing our usage within Artifactory we settled upon support for &lt;a href="https://www.npmjs.com/"&gt;npm&lt;/a&gt;, &lt;a href="https://www.nuget.org/"&gt;NuGet&lt;/a&gt;, generic artefacts (zip, exe, dll etc), &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; images and &lt;a href="https://maven.apache.org/"&gt;Maven&lt;/a&gt;. We quickly determined that our biggest challenge would be Docker images, accounting for greater than 50% of our consumed storage, with several repositories holding more than 1 TB of image data. Latterly, Maven would also prove challenging.&lt;/p&gt;

&lt;p&gt;From this analysis, we were acutely (and financially) aware that we were also wastefully holding onto obsolete build artefacts. We decided to use this as an opportunity to ask our engineering teams to review and select the versions of artefacts that our products needed to retain. This would reduce the scale of the migration ahead somewhat and let us perform some well-overdue housekeeping. After all, there is no point in migrating and paying for artefacts that are no longer required.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution analysis
&lt;/h3&gt;

&lt;p&gt;Having understood what needed to be supported and delivered, we had to identify where we were going to migrate to.&lt;/p&gt;

&lt;p&gt;AWS is our preferred Cloud Provider and platform, as well as a key technical partner, so it was a natural choice to look at their services for our solution. From investigation, we found that &lt;a href="https://aws.amazon.com/codeartifact/"&gt;AWS CodeArtifact&lt;/a&gt; was a decent fit for supporting npm, NuGet, Maven and Python (if required in the future); however, it was not a complete match for all our requirements. Favourably, &lt;a href="https://aws.amazon.com/s3/"&gt;S3&lt;/a&gt; is an excellent fit for generic artefacts, and &lt;a href="https://aws.amazon.com/ecr/"&gt;Elastic Container Registry (ECR)&lt;/a&gt; is perfectly appropriate for Docker images (even leading us to correct internal misunderstandings between images and repositories!).&lt;/p&gt;
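
&lt;p&gt;To give a flavour of what consuming these services looks like from a developer's machine, here are standard AWS CLI invocations (the domain, repository, account ID, region and bucket names are placeholders, not our real resources):&lt;/p&gt;

```shell
# Point npm at a CodeArtifact repository for the current session.
aws codeartifact login --tool npm --domain my-domain --repository my-npm-repo

# Authenticate Docker against an ECR registry.
aws ecr get-login-password --region eu-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-2.amazonaws.com

# Upload a generic artefact to S3.
aws s3 cp build/app.zip s3://my-artefacts-bucket/my-product/app.zip
```

&lt;p&gt;Part of the value of wrapping these in our own service was hiding exactly this per-service variation from engineering teams.&lt;/p&gt;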

&lt;p&gt;We now knew, at a high level, which artefact types we needed to support and where they would migrate to.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution design
&lt;/h3&gt;

&lt;p&gt;Now that we knew our direction, we needed to decide how to get there.&lt;/p&gt;

&lt;p&gt;Initially, we considered publishing guidance around best practices for various AWS services to satisfy our artefact requirements but ultimately that was deemed unmaintainable.&lt;/p&gt;

&lt;p&gt;We wanted to finish the project with our artefact management strategy in a much better position than it started. Significant to us was ensuring we had the ability to define convention, consistency, clear guidance and expectations. We aimed to provide a maintainable solution that continues to build upon the best practices as it matures.&lt;/p&gt;

&lt;p&gt;This led us to agree that it was important for the culmination of the migration to result in a new, custom service that any engineering team within Advanced could consume. &lt;strong&gt;&lt;em&gt;Advanced Artefacts&lt;/em&gt;&lt;/strong&gt; was born.&lt;/p&gt;

&lt;p&gt;We now had two streams we needed to complete within the project:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Advanced Artefacts service&lt;/li&gt;
&lt;li&gt;The Migration&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We will get into the detail around these in future posts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Support channels
&lt;/h2&gt;

&lt;p&gt;As mentioned previously, Advanced has over seven hundred engineers from across the globe working on many projects, and we needed to identify a strategy for supporting them in the best way possible.&lt;/p&gt;

&lt;p&gt;We came up with the following three-pronged approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Documentation
&lt;/h3&gt;

&lt;p&gt;We decided early that we needed to document all parts of the project to allow our engineering teams to self-serve where possible. Without good documentation, there is no way a team of four can support over seven hundred developers.&lt;/p&gt;

&lt;p&gt;We focused on providing some getting-started documentation that walked teams through the process in an end-to-end fashion. We then provided the appropriate reference documentation for each step.&lt;/p&gt;

&lt;p&gt;This covered items such as the support channels available, each team's responsibilities, the migration preparation and also information on how to use our new Advanced Artefacts service both locally and from our CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;A great deal of time was spent poring over this; it was, however, crucial to the success of the project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clinics
&lt;/h3&gt;

&lt;p&gt;A technique that has worked fairly well for our organisation is the idea of online clinics. We held clinics twice a week for the duration of the project.&lt;/p&gt;

&lt;p&gt;We used the first two clinics to kick off the project with our engineering teams. This helped us set timelines around key milestones and clear expectations on what was being delivered.&lt;/p&gt;

&lt;p&gt;After that, they were reserved for anyone to drop into, receive updates and ask for assistance directly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Microsoft Teams channel
&lt;/h3&gt;

&lt;p&gt;Microsoft Teams is our internal communication tool, therefore, we created a dedicated channel that we would use for communicating any important updates to the engineering teams.&lt;/p&gt;

&lt;p&gt;They could also ask us questions or get further clarification as required outside clinic sessions. The artefacts team committed to replying to questions as soon as possible, ensuring teams were unblocked and able to progress quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next up
&lt;/h2&gt;

&lt;p&gt;Now that we have our design in place, we need to start implementing it.&lt;/p&gt;

&lt;p&gt;Next up, we will cover the creation of the Advanced Artefacts service.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>artifactory</category>
      <category>codeartifact</category>
      <category>ecr</category>
    </item>
    <item>
      <title>How we moved from Artifactory and saved $200k p.a. Part 1 of 5 - Planning</title>
      <dc:creator>Paul Mowat</dc:creator>
      <pubDate>Wed, 28 Sep 2022 15:31:35 +0000</pubDate>
      <link>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-1-of-5-planning-i4c</link>
      <guid>https://dev.to/oneadvanced/how-we-moved-from-artifactory-and-saved-200k-pa-part-1-of-5-planning-i4c</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A 5-part blog post by Alex Harrington and Paul Mowat covering the migration of 25 TB of artefacts from JFrog Artifactory to a custom solution we created for &lt;a href="https://www.oneadvanced.com"&gt;Advanced&lt;/a&gt;, achieving significant cost efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  A journey
&lt;/h2&gt;

&lt;p&gt;Early in 2022, we decided that Artifactory had become an expensive option for us. Whilst a good product, Artifactory wasn't without difficulties surrounding our subscription. Specifically, you are either all in or all out with the JFrog platform: you can only subscribe to every component, which is not desirable at the enterprise level.&lt;/p&gt;

&lt;p&gt;In retrospect, we came to realise there were significant portions of the JFrog platform (&lt;a href="https://jfrog.com/xray/"&gt;Xray&lt;/a&gt;, for example) from which we were not getting any real value, and this made the overall service costly. Moreover, we were serious about doubling down on the security of our software supply chain and researching a wider (custom) array of best-in-class solutions.&lt;/p&gt;

&lt;p&gt;Still, this was no easy decision, as we were a large user of Artifactory with over 1.5 million artefacts published and 25 TB of data storage consumed. Many of our CI/CD pipelines and developer settings were configured to use Artifactory, so the scale of the task was sizeable. Nevertheless, we proceeded to assess our options, and planning the task(s) at hand was critical.&lt;/p&gt;

&lt;h2&gt;
  
  
  And so it begins
&lt;/h2&gt;

&lt;p&gt;Let’s start by declaring our initial goal:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To migrate all requested artefacts from Artifactory without losing any, writing custom tooling as necessary&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The possibility of migrating away from Artifactory was first aired at the beginning of January.&lt;/p&gt;

&lt;p&gt;Artifactory comes under a category of services that could be described as quite "sticky". A SaaS solution where the impact of migrating away would reach far and wide within many an organisation.&lt;/p&gt;

&lt;p&gt;Advanced has over 150 active product suites covering different market areas. Some examples are delivering care to 40 million people throughout the UK, sending 10 million sporting fans through turnstiles and supporting 1.2 billion passengers to arrive at their destinations on time. Our solutions are engineered by hundreds of colleagues from across the globe, built using multiple technologies, living in more than 2600 GitHub repositories and powered by thousands of CI/CD pipelines, all deploying to numerous cloud/hybrid-cloud platforms. That is before considering backup, disaster recovery, &lt;a href="https://www.ses-escrow.co.uk/case-studies/nhs-case-study"&gt;escrow&lt;/a&gt; and many other internal and market-driven requirements.&lt;/p&gt;

&lt;p&gt;We needed to plan, but plan in a way that would allow us to scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Team
&lt;/h2&gt;

&lt;p&gt;We formed a small dedicated artefacts team with four members that had to support more than seven hundred engineers through the process. The artefacts team initially needed to design and implement the migration &lt;em&gt;machine&lt;/em&gt;, followed by educating and guiding our engineering teams through the project. This had to be as efficient as possible, in order for it to scale.&lt;/p&gt;

&lt;p&gt;The artefacts team structure was as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 x Principal DevOps Architect (Paul Mowat)&lt;/li&gt;
&lt;li&gt;1 x Principal DevOps Engineer (Alex Harrington)&lt;/li&gt;
&lt;li&gt;1 x Senior DevOps Engineer (Karthik Holikatti)&lt;/li&gt;
&lt;li&gt;1 x DevOps Engineer (Likhith Kotian)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Milestones
&lt;/h2&gt;

&lt;p&gt;Our next task was looking at the wider project and breaking it down into key milestones so we could share these. This was a critically important area. Accuracy and clarity in our communication were paramount.&lt;/p&gt;

&lt;p&gt;We started by setting a hard, immovable deadline. Taking heed of hard-learned lessons from previous projects, where sliding deadlines run and run, we set this date in stone and declared it at the outset of our engagement with the wider engineering community.&lt;/p&gt;

&lt;p&gt;We felt this offered a powerful message, which clearly illustrated that urgent engagement was necessary from all sides.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F7KXgv-F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-1/luuk-wouters-F_zec7P_OwA-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F7KXgv-F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-1/luuk-wouters-F_zec7P_OwA-unsplash.jpg" alt="lighthouse image" width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key milestones were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;10-06-2022 - Project Kick-off - First implementation team workshop held to begin formulating the plan&lt;/li&gt;
&lt;li&gt;04-07-2022 - Deadline for our design and implementation complete ready for a wider rollout&lt;/li&gt;
&lt;li&gt;05-07-2022 - First Advanced Artefacts support clinic held with over 100 participants&lt;/li&gt;
&lt;li&gt;06-07-2022 - Migration Period Start&lt;/li&gt;
&lt;li&gt;19-08-2022 - Migration Period End&lt;/li&gt;
&lt;li&gt;22-08-2022 - Engineering teams would lose access to Artifactory&lt;/li&gt;
&lt;li&gt;31-08-2022 - Project End - Our Artifactory subscription would end&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next up
&lt;/h2&gt;

&lt;p&gt;We have our team and plan.&lt;/p&gt;

&lt;p&gt;Next up, we get to work designing our solution.&lt;/p&gt;

</description>
      <category>artifactory</category>
      <category>aws</category>
      <category>codeartifact</category>
      <category>ecr</category>
    </item>
    <item>
      <title>User Stories 101</title>
      <dc:creator>Joe Wallace</dc:creator>
      <pubDate>Tue, 27 Sep 2022 10:27:24 +0000</pubDate>
      <link>https://dev.to/oneadvanced/user-stories-101-nie</link>
      <guid>https://dev.to/oneadvanced/user-stories-101-nie</guid>
      <description>&lt;h2&gt;
  
  
  What is a User Story? 📖
&lt;/h2&gt;

&lt;p&gt;A “user story” in software development is an informal description of a user’s requirement, written in natural language relevant to the business domain, preferably without reference to the software in use (although sometimes this may be necessary), and worked on by a software development team.&lt;/p&gt;

&lt;p&gt;It should also ideally be written by (or at least in collaboration with) a user, though in practice they are often written by a Business Analyst or Product Owner, and can really be written by anyone with the right information on the business context.&lt;/p&gt;

&lt;p&gt;User stories are conversation starters and anchor what a solution needs to facilitate, but they don’t define it. You may even be able to achieve what you wanted to in your user story without software development!&lt;/p&gt;

&lt;h2&gt;
  
  
  User Story Formats 🃏
&lt;/h2&gt;

&lt;p&gt;User stories are often produced in a formatted fashion, largely to drive consistency or to help people to structure their thoughts, but they don’t have to be.&lt;/p&gt;

&lt;p&gt;Typically at Advanced (and in many other software companies), user stories follow the “Connextra” template:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As a user persona&lt;br&gt;
I want capability&lt;br&gt;
So that business benefit&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The lightweight nature of this format makes it useful in different contexts; on a project management tool like Jira, or physically, on a post-it note or index card.&lt;/p&gt;

&lt;p&gt;The key part of any user story is the business benefit (or the “So that” on the Connextra template); if you can’t define why a user wants a capability in your software or process, you probably don’t need it at all!&lt;/p&gt;

&lt;h2&gt;
  
  
  User Story Splitting 🕷
&lt;/h2&gt;

&lt;p&gt;User stories should be as small and self-contained as possible in their scope. This is both to ensure a focussed solution and to plan work in manageable chunks. &lt;/p&gt;

&lt;p&gt;There are multiple methods of splitting user stories, but the most commonly used is SPIDR:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Spikes&lt;/strong&gt;&lt;br&gt;
Investigations and prototyping to better understand the ideal solution for a user story.&lt;br&gt;
&lt;strong&gt;Paths&lt;/strong&gt;&lt;br&gt;
Different routes a user may take through a process, e.g. reschedule an appointment, or cancel it without rescheduling.&lt;br&gt;
&lt;strong&gt;Interfaces&lt;/strong&gt;&lt;br&gt;
Different devices, browsers, or alternate interfaces for different types of users of the system.&lt;br&gt;
&lt;strong&gt;Data&lt;/strong&gt;&lt;br&gt;
Different categories of information being managed or browsed, e.g. displaying someone’s allergies, or displaying their medication.&lt;br&gt;
&lt;strong&gt;Rules&lt;/strong&gt;&lt;br&gt;
Constraints on the system, e.g. rules on the maximum number of appointments someone can have booked at once, or a minimum gap between appointments.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Acceptance Criteria ✅
&lt;/h2&gt;

&lt;p&gt;Acceptance criteria are often added to user stories to define the conditions that must be fulfilled for the implemented story to be acceptable to the customer. They inform the test cases that are run to verify that the user story has been met.&lt;/p&gt;

&lt;p&gt;These may take the format of simple bullet points, or use the “Given, When, Then” template (also known as Gherkin):&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Given pre-condition&lt;br&gt;
When user action&lt;br&gt;
Then outcome&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Example User Story 🔍
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;As a&lt;/strong&gt; District Nurse&lt;br&gt;
&lt;strong&gt;I want&lt;/strong&gt; to be able to view a patient’s allergies when visiting them in their home&lt;br&gt;
&lt;strong&gt;So that&lt;/strong&gt; I don’t prepare them food or medication that may be dangerous to them&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Acceptance Criteria&lt;/u&gt;&lt;br&gt;
&lt;strong&gt;Given&lt;/strong&gt; a patient has any currently active allergies in their clinical record&lt;br&gt;
&lt;strong&gt;When&lt;/strong&gt; I open the patient’s visit on my tablet or phone&lt;br&gt;
&lt;strong&gt;Then&lt;/strong&gt; I will see them listed, with the causative agent, reaction severity and description shown&lt;/p&gt;

</description>
      <category>agile</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Technical Debt and Technical Efficiency</title>
      <dc:creator>Joe Wallace</dc:creator>
      <pubDate>Wed, 14 Sep 2022 14:12:56 +0000</pubDate>
      <link>https://dev.to/oneadvanced/technical-debt-and-technical-efficiency-2oc2</link>
      <guid>https://dev.to/oneadvanced/technical-debt-and-technical-efficiency-2oc2</guid>
      <description>&lt;h2&gt;
  
  
  Technical Debt: “The Loan” 💰
&lt;/h2&gt;

&lt;p&gt;Within agile software development, one of our key aims is to get working software out as early as we can. This allows us to enable the value for our users quickly (and therefore bring in revenue as a business) and get early feedback to improve and iterate the product in line with customer needs, without wasting time on things customers might not ultimately want. &lt;/p&gt;

&lt;p&gt;While we may refactor our code as we go along, this tends to mean that though the software we deploy works as intended, it may not be as efficient as we’d like. These inefficiencies are often called technical debt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Efficiency: “The Repayment” 💳
&lt;/h2&gt;

&lt;p&gt;It is important to address inefficiencies in our code over time, for a multitude of reasons: to reduce the likelihood of bugs occurring, to make the code base more manageable, and to improve performance, among other things. &lt;/p&gt;

&lt;p&gt;Therefore, it's recommended that each product at Advanced sets aside a bucket of time in each release period for technical efficiency - refactoring and improving code to “pay back” the technical debt, like a loan we took out in order to deliver the software early. When American developer Ward Cunningham coined the term “technical debt”, this was exactly what he meant - repaying technical debt is essential, much like paying back a loan.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Product Initiatives and Technical Efficiency 👩‍💻
&lt;/h2&gt;

&lt;p&gt;There are some scenarios where initiatives we want to complete at Advanced are technical in nature, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replacing a third party component or platform.&lt;/li&gt;
&lt;li&gt;Upgrading to a new version of a framework or integration.&lt;/li&gt;
&lt;li&gt;Changing our authentication methods.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These initiatives are often erroneously categorised as “technical efficiency”, as the end user does not notice a difference in their day-to-day work. This is problematic, however, as these types of initiatives are not related to technical efficiency by nature, and due to their complexity and business benefit, need to be prioritised against other initiatives.&lt;/p&gt;

&lt;p&gt;All of these initiatives should be considered like any customer-facing one, and supported by an appropriate business case that clearly quantifies the value of the initiative and justifies why it should be done. For example, we wouldn’t be replacing a third party component for an abstract reason; we might be doing this because it causes unacceptable performance issues which affect users' ability to do their work, or because the component is going out of support, putting our entire service at risk.&lt;/p&gt;

&lt;p&gt;It is vital that these items are prioritised and analysed in the same way that customer-facing initiatives are, with just as much care put into the business case. Prioritisation is a wider topic, the scope of which extends far beyond this article, but the following should be considered for more technical initiatives, as well as user-facing ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The business risk incurred in not completing the initiative.&lt;/li&gt;
&lt;li&gt;Potential cost saving.&lt;/li&gt;
&lt;li&gt;Potential revenue add, in terms of additional module sales or selling into new markets.&lt;/li&gt;
&lt;li&gt;Customer satisfaction (and therefore retention).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Paying off technical debt is important, and freeing up space in our development teams' capacity by categorising these initiatives correctly helps us to recognise this.&lt;/p&gt;

</description>
      <category>agile</category>
    </item>
    <item>
      <title>What size is my T-shirt? Can someone look at the label?</title>
      <dc:creator>AprilBauer</dc:creator>
      <pubDate>Fri, 17 Jun 2022 13:37:09 +0000</pubDate>
      <link>https://dev.to/oneadvanced/what-size-is-my-t-shirt-can-someone-look-at-the-label-c9l</link>
      <guid>https://dev.to/oneadvanced/what-size-is-my-t-shirt-can-someone-look-at-the-label-c9l</guid>
      <description>&lt;p&gt;Introduction&lt;br&gt;
Imagine you are standing in the dressing room of your favorite shop trying on T-shirts. Did you grab the right size, does it have extra detail, is the price too high? What does mom think?&lt;/p&gt;

&lt;p&gt;You look at the construction of the T-shirt, discussing the color and stitching, does it need to be dry cleaned? Can it be mass produced or is it haute couture? This would affect the cost to create the T-shirt. And “mom” is certainly going to be thinking about price, as well as how well it will wash and wear (in our world, think how many customers it’s being sold to).&lt;/p&gt;

&lt;p&gt;T-shirt sizing isn’t just for clothing; it’s also used to measure the effort needed to develop a software initiative. Just like the T-shirt in the dressing room, you need to understand the size of the T-shirt to get the right fit for the software initiative. You accomplish this by sitting down with the Engineering and Product Management teams (just think of them as mom).&lt;/p&gt;

&lt;p&gt;What does this mean for software development, then?&lt;br&gt;
 &lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4f0auq6juc1u37rmsbx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4f0auq6juc1u37rmsbx.png" alt="View inside a warehouse with racks to the left and right filled with boxes of various sizes." width="800" height="535"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt; Now, if you are reading this thinking “how can software fit into a T-shirt” and “I would never care about the color and stitching”, you do still care about what the software does: the user experience, whether extra security is needed, or whether it needs to be available on mobile devices or a portal. Does it have certain requirements needing new UI? Will it involve ancillary teams such as CoEs or C4Es? The T-shirt size can represent task, scope, effort, complexity, work hours, time estimates, or often all of the above.&lt;/p&gt;

&lt;p&gt;After everyone has looked the T-shirt over, maybe you decide you want the T-shirt for your niece Joan, and she only needs an XS (0-29 days of development); or maybe it will be a gift for Uncle Joe, and he is one big dude, so you will definitely need an XXXL (1500+ days); or maybe you just want one for yourself, and a L (200-399 days) will do.&lt;/p&gt;

&lt;p&gt;Great, you know the T-shirt sizes you need – but they're not in stock, so you are going to need to order them!&lt;/p&gt;

&lt;p&gt;Imagine your order arrives at the warehouse (Engineering and Product Management teams), but the staff are busy, so they start looking through the orders to see what sizes are needed, and what the value of those T-shirts will be against the time they have available to get them. Once they decide what T-shirts they have room for, you will get a “Go” or “No Go” decision regarding whether your T-shirt can be delivered.&lt;/p&gt;

&lt;p&gt;So, if it’s your lucky day, your brand new T-shirt will arrive!&lt;/p&gt;

</description>
      <category>agile</category>
      <category>businessanalyst</category>
    </item>
  </channel>
</rss>
