<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: J</title>
    <description>The latest articles on DEV Community by J (@joshghent).</description>
    <link>https://dev.to/joshghent</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F61543%2Fbd664242-9988-43d5-a7ec-958889486be0.jpg</url>
      <title>DEV Community: J</title>
      <link>https://dev.to/joshghent</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/joshghent"/>
    <language>en</language>
    <item>
      <title>Be friendly and don't ignore Recruiters</title>
      <dc:creator>J</dc:creator>
      <pubDate>Thu, 10 Mar 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/joshghent/be-friendly-and-dont-ignore-recruiters-md9</link>
      <guid>https://dev.to/joshghent/be-friendly-and-dont-ignore-recruiters-md9</guid>
      <description>&lt;p&gt;Increasingly, I’ve noticed an increased level of resistance to recruiters among engineers. There has always been a love/hate relationship between the two parties. From the beginning of my career, I always heard “recruiters are bad”. And, to be honest, I accepted that as truth for the longest time. But, reflecting on it now, I don’t understand it at all.&lt;/p&gt;

&lt;p&gt;I’m writing this article to urge fellow software engineers not to accept the rhetoric of others - see for yourself. In a nutshell, you and I are not “better” because we are engineers. Recruiting is another role that is necessary for the sector to function.&lt;/p&gt;

&lt;p&gt;Here’s my advice for software engineers dealing with recruiters.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Use canned responses
&lt;/h3&gt;

&lt;p&gt;One of the main aversions to recruiters is that they send countless emails about irrelevant jobs. This is understandably annoying, especially when you have been specific about not looking for a job, or about the types of jobs you want. But consider for a second why this happens. Recruiters are paid for successful placements. Because there is a shortage of talented engineers, it figures that most engineers are already employed. If recruiters took the fact that someone was already employed as a reason not to approach them, it would be impossible to hire. For myself, I’ve always been employed when approached about another job.&lt;/p&gt;

&lt;p&gt;So, we can’t avoid the InMails and emails. But what’s the solution? &lt;strong&gt;Canned responses.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Canned responses save you time by being clear about your expectations for a job and what kind of work you might be open to. You can find many templates for this online, but they are mostly full of snark.&lt;/p&gt;

&lt;p&gt;Here is one I use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hi X, 

Thanks for your message! This looks like a fantastic opportunity.

Unfortunately, I have recently accepted a position at a new company and am not currently seeking new opportunities.

I’ll be sure to bear you in mind for future roles.

Kind regards,
Josh
https://joshghent.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Short, friendly, clear. Simple as that.&lt;/p&gt;

&lt;p&gt;Or, if you are open to work, here’s another template I use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hey X,

Thanks for your message! This looks like a fantastic opportunity.

Although this job doesn't fit me, I am currently looking for a Remote Senior Developer position working with NodeJS, React, and AWS. I have worked with these technologies for over 5 years across an array of projects. Most recently, I have been architecting and building a greenfield project for a large eCommerce company.

I have attached my CV in case you have anything that fits the bill.

Kind regards,
Josh
https://joshghent.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, it’s short and sweet but gets across the message.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Recruiters are great at building communities
&lt;/h3&gt;

&lt;p&gt;Due to being involved with a large network of developers, recruiters are incredible at creating communities. Many events that I have run have been well supported in large part due to recruiters leveraging their networks. Meeting these people at events can then further your own career.&lt;/p&gt;

&lt;p&gt;Additionally, they have more access to the commercial side of businesses. If you have a technical event you’re running, recruitment agencies are usually among the first to sponsor it. They get the exposure and you get the finances to run a kick-ass event. Win, win. Leverage the resources they have access to.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Don’t let a bad egg put you off the whole batch
&lt;/h3&gt;

&lt;p&gt;Now, I know what you’re thinking. You’ve read this article so far and said to yourself “that’s all well and good. But this recruiter was &lt;em&gt;truly&lt;/em&gt; awful”.&lt;/p&gt;

&lt;p&gt;I agree.&lt;/p&gt;

&lt;p&gt;There are bad recruiters out there. Terrible ones. But there are lots of bad engineers too. There are bad healthcare workers, builders, architects, designers, painters. Anything and everything has a “bad” version. And that’s ok.&lt;/p&gt;

&lt;p&gt;At worst, a “bad” recruiter might spam you with some emails or calls. You can easily block these. A bad engineer might give you a haunting nightmare of yarn that you have to untangle over the next year. The effects of these are vastly different in size.&lt;/p&gt;

&lt;p&gt;But just as we don’t assume all engineers are bad when we come across one bad engineer, we should think the same about recruiters. Don’t let one bad egg spoil the whole batch. There are good eggs out there. This brings me nicely onto my next point…&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Work with individuals, not companies
&lt;/h3&gt;

&lt;p&gt;A recruitment agency, like any other company, is a faceless, emotionless entity. Inside each company, there will be some great people and some not so great people. Find the individuals in those businesses who you get on with and who place you into jobs you enjoy. Then work with them throughout your career. Personally, I’ve held 5 jobs given to me by 2 recruiters with whom I’ve built up trust over time.&lt;/p&gt;

&lt;p&gt;You can build trust with them by running events with them and referring people in your own network to them.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Accept them as part of the process
&lt;/h3&gt;

&lt;p&gt;Many believe we should live without recruitment agencies entirely. This is an understandable viewpoint. But this belief betrays a fundamental misunderstanding about business - stuff costs money. And if that “stuff” is hiring people, then it costs a lot. Why? Reviewing resumés/CVs, interviewing and technical skill tests all take time. Time from someone who is paid by the company. A recruitment agency’s main value offering lies in getting high-quality candidates from a large network and handling all the marketing associated with advertising a job.&lt;/p&gt;

&lt;p&gt;Just as many developers “outsource” their code by using third party libraries to save time, businesses do the same with recruiters. It saves time, and there is no point in reinventing the wheel. It allows them to unlock access to resources that would have taken a considerable amount of time to develop otherwise.&lt;/p&gt;

&lt;p&gt;For better or worse, recruiters are part of the process of getting a job and are here to stay.&lt;/p&gt;




&lt;p&gt;Hopefully, this post has helped soften your attitude toward recruiters. I’m not trying to win favour with recruiters by writing this. It’s a response to several snarky posts about the recruitment industry and tech. At the end of the day, these are people trying to do their jobs - like you and me. Sure, there are some bad apples, but what field doesn’t have them? Default to truth and follow the advice outlined above; it will work out in your best interests to do so.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Mistakes I made as a self-taught developer</title>
      <dc:creator>J</dc:creator>
      <pubDate>Tue, 15 Feb 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/joshghent/mistakes-i-made-as-a-self-taught-developer-2jf</link>
      <guid>https://dev.to/joshghent/mistakes-i-made-as-a-self-taught-developer-2jf</guid>
      <description>&lt;p&gt;Learning to become a software developer is not a trivial task. There is a plethora of guides, tutorials and courses to take. And of course, there is the question of self-teaching or going to university.&lt;/p&gt;

&lt;p&gt;But no matter which path you choose, you will always make mistakes.&lt;/p&gt;

&lt;p&gt;I made a tonne of mistakes. And I want to share them with people who are learning software development. Here are 5 mistakes I made whilst learning to code.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Analysis paralysis of resources
&lt;/h3&gt;

&lt;p&gt;I wasted a lot of time analysing the “best” place to learn to code. Early on, I often found myself reading articles about how “good” Codecademy was vs. a collection of “Head First” books. This time would have been better spent doing the work. It’s easy to enter a state of panic thinking you’re committed to a certain learning path. But the truth is, you aren’t. Mix and match, try different things and go with what works for you.&lt;/p&gt;

&lt;p&gt;I also did this same analysis when choosing a language to learn.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Is Python the best? What about Javascript? My favourite sites use Rails, maybe I should learn that?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That was the list of questions that plagued me. It was exhausting, and I shouldn’t have spent so much time overthinking it. My advice for new programmers is to learn Javascript and web technologies. Yes, you might want to make games, but Javascript is so widely used (&lt;a href="https://www.infoq.com/news/2020/06/javascript-spacex-dragon/"&gt;now even in space&lt;/a&gt;!) that it will allow you to go down any path you want later on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson&lt;/strong&gt; : Just learn Javascript. Mix and match resources that work for you to build a consistent practice of learning.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Not building early and often
&lt;/h3&gt;

&lt;p&gt;What do I mean by “building”? I mean creating libraries, APIs, demo sites and more. But to begin with, I thought the idea of “building” something was too complex to handle. Translating the code into something practical would have made me familiar with solving problems. When I did start building projects, I found it challenging to shift from syntax to finished products. To liken it to spoken languages, it’s the difference between knowing the word for “apple” and knowing how to say “Would you like an apple?”.&lt;/p&gt;

&lt;p&gt;This fixation on learning the syntax also had the downside that I didn’t have anything to show for it at the end. To a potential employer, I was just someone who said they had learnt to code and could do some simple problems. By having a portfolio of projects that I could show off, I would have proven to myself and others that I had the skills to work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson&lt;/strong&gt; : Apply syntax to real-world style projects after learning it.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Being tied to tutorials rather than problems
&lt;/h3&gt;

&lt;p&gt;In line with the above, I spent too much time on specific tutorials. I should have learnt to break a project down into small problems and seek solutions to each. This practice of breaking larger projects into small problems would have been valuable when I started working. It would have also helped me train my “google-fu” to search out error codes and problems I needed to solve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson&lt;/strong&gt; : Learn to break projects down into small solvable problems.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Sweating “interview” preparation
&lt;/h3&gt;

&lt;p&gt;Before I began to interview, I spent hours doing code “katas” - small challenges using no libraries in your language of choice. I did this because they are supposedly common interview questions. But I have never been asked to do specific coding challenges like this, on a whiteboard or otherwise (having interviewed for 7 jobs in total). What I have had is take-home projects and general technical questions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson&lt;/strong&gt; : If you are interviewing at FAANG companies, you are likely to get these questions. If you are not, then skip this and focus on projects instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. No SQL Exposure
&lt;/h3&gt;

&lt;p&gt;In self-taught land, starting a new project is quite simple. Install NodeJS, install React, launch Chrome - job done. But installing and using SQL? That was far too scary for me to tackle. There were ports to configure, connections to NodeJS to set up, and then a table structure to design. In part, MongoDB gained popularity because it’s so simple to set up. By not having exposure to SQL, I struggled when I got a job.&lt;/p&gt;

&lt;p&gt;It also meant that my coding style tended to lean on parsing data within the language. Over time, I’ve trained myself (for software performance reasons) to use the database to crunch and mould the data for me.&lt;/p&gt;
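
&lt;p&gt;As a rough sketch of the difference, assuming the &lt;code&gt;pg&lt;/code&gt; PostgreSQL client and a hypothetical &lt;code&gt;orders&lt;/code&gt; table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Pool } from 'pg';

// Connection settings come from the standard PG* environment variables
const pool = new Pool();

const compareTotals = async () =&amp;gt; {
  // Parsing in the language: pull every row over the wire, then crunch in JS
  const { rows } = await pool.query('SELECT amount FROM orders');
  const totalInJs = rows.reduce((sum, row) =&amp;gt; sum + Number(row.amount), 0);

  // Letting the database crunch it: a single row comes back
  const result = await pool.query('SELECT SUM(amount) AS total FROM orders');
  const totalInSql = Number(result.rows[0].total);

  console.log(totalInJs, totalInSql); // same number, very different work
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;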

&lt;p&gt;&lt;strong&gt;Lesson&lt;/strong&gt; : My advice on this point is to set up a free account with Render.com or Heroku and add a MySQL or PostgreSQL instance.&lt;/p&gt;

&lt;p&gt;If you’re learning to become a software engineer, don’t give up! It is difficult no matter which path you choose. I hope that by listing my failures you can avoid them. You will make your own mistakes on your journey, and I implore you to write about them and learn from them.&lt;/p&gt;

</description>
      <category>career</category>
      <category>beginners</category>
      <category>webdev</category>
      <category>codenewbie</category>
    </item>
    <item>
      <title>Building Collaboration with Remote Teams</title>
      <dc:creator>J</dc:creator>
      <pubDate>Tue, 08 Feb 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/joshghent/building-collaboration-with-remote-teams-576g</link>
      <guid>https://dev.to/joshghent/building-collaboration-with-remote-teams-576g</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt; : Provide the tools, empower people to use them and embrace remote work for what it is - remote.&lt;/p&gt;

&lt;p&gt;Steve Jobs designed Pixar’s headquarters with the bathrooms placed centrally, maximising the length of time it would take people to reach them. He did this to increase the collaboration that stems from running into others in the corridor.&lt;/p&gt;

&lt;p&gt;Since the pandemic, remote working has skyrocketed. &lt;a href="https://resources.owllabs.com/state-of-remote-work"&gt;Owl Labs recorded in 2021&lt;/a&gt; that almost 70% of full-time workers in the US were working from home. And based on the stats from job boards such as &lt;a href="https://remoteok.com/open"&gt;RemoteOk.com&lt;/a&gt;, you can see a massive uptick in remote job opportunities that is not dying down.&lt;/p&gt;

&lt;p&gt;But the question is, how can you “bump into” people in the corridor in the remote-first working world? Many say that weekly meetings to keep people aligned are the answer. But I disagree - meetings are viewed with disdain. The same study by Owl Labs found that 80% of remote workers wanted at least a day per week without any meetings at all.&lt;/p&gt;

&lt;p&gt;Instead, I’d suggest a different approach.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Make communication public by default.&lt;/strong&gt; The main objection to remote work is that communication is too “formalized”, recorded and preserved. People use that as an excuse to communicate only in direct messages. But putting messages in public channels allows everyone to be informed and to take part. Additionally, making individuals comfortable with public communication will open the doorway for asynchronous work. It will discourage people from calling meetings simply to gather a consensus or input.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don’t recreate the watercooler.&lt;/strong&gt; Commonly, I’ve seen teams create a #watercooler chat in Slack, an informal space to post memes and have a chit-chat. Although in ultra-large businesses I’ve seen these spaces improve individual relationships, I have not seen them successfully increase collaboration. These spaces are attempting to recreate an in-person space. Whilst well-intentioned, these spaces do not translate to the online realm. Embrace remote work for what it is, remote. It will naturally take time for an in-person organisation to change into the “remote” frame of mind.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Avoid hybrid working.&lt;/strong&gt; Where possible, avoid having some of the team in an office and the rest remote. I’ve been on both sides of the office divider for this one, and it doesn’t work well on either. On the office side, you can often be blocked by a remote worker who isn’t working the same hours and doesn’t have the tools to perform a task asynchronously. But on the remote side, you have constant FOMO about all the undocumented decisions being made. Pushing all this communication online and asynchronous alleviates these issues and keeps everyone up to date.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use the tools.&lt;/strong&gt; We are all using Zoom and Slack. But that’s not the end. There is a myriad of tools to help you unite a remote team. Use Google Drive for documents and presentations (share them without a meeting!), Tuple for pair programming and GitHub for tickets and code. Directing people toward these channels (asynchronous) and away from meetings (synchronous) will accelerate collaboration for your team by accomplishing meaningful objectives.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build a documentation culture.&lt;/strong&gt; In an office, decisions can be made and passed around in small conversations. In remote teams, that vanishes. So, make sure that every decision is documented in a consistent and precise way. Encourage people to write about problems they’ve solved and how they solved them. Help people to write up accurate guides on getting up and running with a system. And document various approaches to a particular ticket. Make sure you lead by example here. Don’t rely on Slack to store a decision. If you ever hear the phrase “I forget exactly what was said”, it is a chance to write it down. Further, address this at its source by making sure to hire individuals with good writing skills.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Not all these techniques will work with your team. I encourage you to experiment and see what works.&lt;/p&gt;

&lt;p&gt;Overall, embrace the work situation you are in and capitalize on its advantages. If your business can be in-person, capitalize on the fact that it will likely be more collaborative. If you have a remote business, embrace a diverse global workforce, lower costs and asynchronous work for increased productivity.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Facing the Legacy Code Monster</title>
      <dc:creator>J</dc:creator>
      <pubDate>Tue, 25 Jan 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/joshghent/facing-the-legacy-code-monster-1bn3</link>
      <guid>https://dev.to/joshghent/facing-the-legacy-code-monster-1bn3</guid>
      <description>&lt;p&gt;I start new jobs like a spelunking caver, exploring all the systems, code and pipelines. I time myself to see how long I can ask questions about a system before I get a dreaded response:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Oh, that’s a critical system that handles our core business. A developer wrote it years ago, who has since left. Now we reboot it when it breaks. You don’t want to mess with that.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I’m sure we’ve all heard words like that.&lt;/p&gt;

&lt;p&gt;It seems almost a law at this point that every mature software company has at least one system that “works”, but has a few horrendous bugs that mean it needs rebooting, and workarounds to its inflexibility that have built up over time.&lt;/p&gt;

&lt;p&gt;It’s software that has no tests. It runs on a very specifically configured server (likely in a closet in the office). Its documentation has been passed down as folklore. And the person who originally wrote it now resides on the moon, where they are unreachable.&lt;/p&gt;

&lt;p&gt;Oftentimes, this software is left to rot because it’s so critical that it’s safer to have the workarounds and reboots than to fix it for good.&lt;/p&gt;

&lt;p&gt;But you shouldn’t be deterred by this rhetoric from current employees. Why not?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No one deliberately writes broken or bad software&lt;/strong&gt;. I say no one - there are, of course, exceptions. But by and large, software is written with good intentions in mind. And as it is running in production, it means it solved the original problem. Adopting this ethos will help dissolve the image of an evil unknown developer spiting you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Current employees might be misinformed&lt;/strong&gt;. Depending on how long ago the software was implemented, current employees might have been misinformed. Swaths of developers may have come and gone and simply repeated what they were told on their first day. What started as a healthy respect for a critical system, soon becomes the subject of fear and anxiety.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fighting the tendency to buy into the anti-legacy cult will mean you can deal with legacy code. It will be dirty work, but gaining a deep understanding of these systems will make you a hero among your team and an invaluable employee.&lt;/p&gt;

&lt;p&gt;How can you gain insight into these systems?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Understand why it became legacy in the first place&lt;/strong&gt;. Finding the “why” of these systems is vital because it helps you prioritise your own journey. In some cases, it might be that the system was impossible to run in a sandbox. If so, that should be your first port of call. Focus on first principles and get to the core of the problem you’re trying to solve.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Get it running in a sandboxed environment&lt;/strong&gt;. Provide safety for yourself and others by setting the system up to run in a sandbox. This might need buy-in from someone on the DevOps team. It is a critical prerequisite to working with a legacy system, as without it you do not have the psychological safety to make changes. This process might take a long time and should be considered just as important as writing tests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Follow the code, and document it&lt;/strong&gt;. Target your reading of the codebase by following the various paths it takes. Review the entry points and build a map of the various “journeys”. Then, pretending you are one of those calls, follow the path that the code takes. It’s sometimes helpful to draw these paths on paper for further reference. After reading it through once, read it again, but document the functions as you go. In some places, you might not know “why” something has been written a certain way; mark these areas with a quick &lt;code&gt;TODO&lt;/code&gt; comment for later review.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Write integration tests&lt;/strong&gt;. Not unit, functional or anything else. Focus on integration at the moment. This maximises the test coverage whilst minimising the level of understanding you need. Early on in this investigation, you likely don’t know all the ins, outs and gotchas of the system. So, to avoid getting overwhelmed, it’s best to write a suite of integration tests that mirror the “journeys” you discovered in the previous step (a sketch follows this list).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Share the knowledge&lt;/strong&gt;. Don’t let yourself become another bus factor. Be quick to share the knowledge, even if there are gaps. This also encourages other developers to get involved, helping write tests and documentation.&lt;/li&gt;
&lt;/ol&gt;
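
&lt;p&gt;Here’s a minimal sketch of such a test, using Node’s built-in test runner and &lt;code&gt;fetch&lt;/code&gt; (Node 18+), and assuming the legacy system is reachable over HTTP in your sandbox. The endpoint and field names are hypothetical:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// integration/checkout-journey.test.ts
import test from 'node:test';
import assert from 'node:assert/strict';

// Point the suite at the sandboxed copy of the system, never at production
const BASE_URL = process.env.SANDBOX_URL ?? 'http://localhost:8080';

test('checkout journey returns an order confirmation', async () =&amp;gt; {
  const response = await fetch(`${BASE_URL}/checkout`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ basketId: 'test-basket' }),
  });

  // Pin down the observable behaviour of the journey, not the internals
  assert.equal(response.status, 200);
  const body = await response.json();
  assert.ok(body.orderId, 'expected an orderId in the response');
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;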

&lt;p&gt;This process is by no means perfect. But, be sure not to skip any of the steps, as they are all as critical as each other. It will likely not mean you are a complete expert in this system. Nor will it mean you can rewrite it in a more modern tech stack with the confidence of a solid test suite behind you. But, it does mean you have shed some light on an otherwise dark scary legacy code monster.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Ship Software Faster</title>
      <dc:creator>J</dc:creator>
      <pubDate>Tue, 18 Jan 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/joshghent/how-to-ship-software-faster-1bec</link>
      <guid>https://dev.to/joshghent/how-to-ship-software-faster-1bec</guid>
      <description>&lt;p&gt;Remember when software came on a physical medium like discs, USB sticks or &lt;a href="https://www.smithsonianmag.com/smithsonian-institution/margaret-hamilton-led-nasa-software-team-landed-astronauts-moon-180971575/"&gt;punch cards&lt;/a&gt;? Me either. Software release lifecycles used to be lengthy - years-long in most cases.&lt;/p&gt;

&lt;p&gt;As software flourished on the web, we grew accustomed to “moving fast and breaking things”. This approach has a lot of drawbacks, not least because some customer bases are more sensitive to problems than others.&lt;/p&gt;

&lt;p&gt;Teams still wanted to “move fast”, but not “break things”. The speed of the web, with the safety of physical releases.&lt;/p&gt;

&lt;p&gt;The solution that many teams came up with was to batch up lots of work and then release it every so often. This takes the bad of both release systems: the lack of safety of a “move fast” release and the slow speed of physical releases.&lt;/p&gt;

&lt;p&gt;It’s likely that you work in a place like this, or have worked at one in the past. Organisations where DevOps is a secondary concern to the application itself. Places where “continuous delivery” is considered voodoo reserved for the FAANGs.&lt;/p&gt;

&lt;p&gt;I have found myself at these organisations all my working life. I quickly noticed the following patterns:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Developers have little to no confidence that a new release will not break something.&lt;/li&gt;
&lt;li&gt;That low confidence means there is anxiety when it comes time to release.&lt;/li&gt;
&lt;li&gt;The time between releases means upstream work causes conflicts.&lt;/li&gt;
&lt;li&gt;Manual testing cycles have to be done to establish any confidence.&lt;/li&gt;
&lt;li&gt;Bugs upon release cause finger pointing, with &lt;a href="https://www.pageittothelimit.com/psy-safety-with-tom-geraghty/"&gt;psychological safety&lt;/a&gt; diminishing as a result.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What can you do if you notice these patterns?&lt;/p&gt;

&lt;h3&gt;
  
  
  The solution is to ship faster. How?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Set expectations of delivery time&lt;/strong&gt;. Start by opening a discussion with stakeholders about what the expected time to ship new versions will be. Establishing these rough boundaries governs the setup of processes used to ship software. Generally speaking, stakeholders will want features as soon as possible. But, if you are currently releasing once a month, you should aim to start releasing bi-weekly. Get a bit further along before promising to &lt;a href="https://instagram-engineering.com/continuous-deployment-at-instagram-1e18548f01d1"&gt;deploy 30-50 times a day like Instagram&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Make systems observable&lt;/strong&gt;. Low confidence in releases often originates from systems with little observability. This means that if something does go wrong it’s a nightmare to figure out why. Before starting to increase deployment frequency, you need a system you trust. Focus on the fundamentals - searchable logging, automatic monitoring of key website pages and API endpoints (using &lt;a href="https://uptimerobot.com"&gt;UptimeRobot&lt;/a&gt;) and automatic tests (integration and unit at least).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set small concise deliverables&lt;/strong&gt;. Doing manual releases requires an immense amount of cognitive overhead. Having a small number of tickets and clear deliverables in each release reduces this cognitive load. There is less to remember to test and check. And other areas of the system are less likely to be affected. If releases are simple to do, it’s more likely they’ll get done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invest in your DevOps&lt;/strong&gt;. This is the crucial technical step. There are many other articles about having top-quality development tools to aid deployment, so I won’t add to them. But principally, look at the areas that take the most time or have the least confidence, and automate them. For example, if a bash script written two years ago to bundle the app is unreliable, take the time to address it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Feature flags&lt;/strong&gt;. Often releases get delayed because stakeholders don’t want to reveal new features to customers. Using feature flags allows you to ship unfinished features without breaking things for everyone. A further selling point for stakeholders is that feedback can be gathered from select customers before a full rollout is done.&lt;/p&gt;
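
&lt;p&gt;At its simplest, a feature flag is just a conditional around the new code path. Here’s a minimal sketch using an environment variable as the flag store - the flag name is made up, and in practice you’d likely use a service such as LaunchDarkly or Unleash for per-customer rollouts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Ship the code dark: deployed for everyone, switched on for no one by default
const flags = {
  newCheckoutFlow: process.env.FLAG_NEW_CHECKOUT === 'true',
};

const isEnabled = (flag: keyof typeof flags): boolean =&amp;gt; flags[flag];

// At the call site, both paths ship in the same release
if (isEnabled('newCheckoutFlow')) {
  // render the new checkout
} else {
  // fall back to the old one
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;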

&lt;p&gt;&lt;strong&gt;Make mistakes a non-issue&lt;/strong&gt;. If the risk of a new release causing a bug is on par with starting a nuclear war, people will shy away from it. By making it easy for developers to roll back to the last known stable release (or better yet, automating it with a blue-green system), you will break down the fear around releasing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use checklists&lt;/strong&gt;. Anything that cannot be automated (or that you don’t have time to automate) should be made as programmatic as possible. Using checklists takes the guesswork out of manual tasks. It means reliable releases are simple to do.&lt;/p&gt;




&lt;p&gt;Shipping software faster is a mix of both the cultural and technical aspects of an organisation. Both are equally difficult. Work towards the “release nirvana” that awaits once these systems are set up. Your team will be rewarded with lower blood pressure and your business will be rewarded by gaining and retaining more customers.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Save $$$ by Caching Auth0 M2M Tokens</title>
      <dc:creator>J</dc:creator>
      <pubDate>Tue, 11 Jan 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/joshghent/save-by-caching-auth0-m2m-tokens-5ged</link>
      <guid>https://dev.to/joshghent/save-by-caching-auth0-m2m-tokens-5ged</guid>
      <description>&lt;p&gt;&lt;a href="https://auth0.com"&gt;Auth0&lt;/a&gt; is an easy to integrate service that handles all your applications authentication needs. But, if you’ve worked with it before, you’ll know it’s downfalls.&lt;/p&gt;

&lt;p&gt;One of them is Machine-to-Machine (M2M) tokens, which are used to authenticate between your services.&lt;/p&gt;

&lt;p&gt;But the limits are restrictive for serverless infrastructures. On the free plan, you only get 1,000 tokens per month. And even on a paid plan, it would be expensive to get the number of tokens you might need in a given month.&lt;/p&gt;

&lt;p&gt;The solution is to &lt;strong&gt;cache Machine-to-Machine tokens&lt;/strong&gt; so we don’t need to request new ones until they expire.&lt;/p&gt;

&lt;p&gt;In traditional infrastructure, this would be trivial. Save the token globally somewhere and done.&lt;/p&gt;
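
&lt;p&gt;As a sketch, in a long-lived process a module-scoped variable does the job (&lt;code&gt;fetchNewTokenFromAuth0&lt;/code&gt; is a hypothetical helper, and &lt;code&gt;hasTokenExpired&lt;/code&gt; is built later in this post):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Stand-ins for the real implementations
declare const fetchNewTokenFromAuth0: () =&amp;gt; Promise&amp;lt;string&amp;gt;;
declare const hasTokenExpired: (token: string) =&amp;gt; boolean;

// The process stays alive between requests, so this variable does too
let cachedToken: string | undefined;

const getToken = async (): Promise&amp;lt;string&amp;gt; =&amp;gt; {
  if (cachedToken &amp;amp;&amp;amp; !hasTokenExpired(cachedToken)) {
    return cachedToken;
  }
  cachedToken = await fetchNewTokenFromAuth0();
  return cachedToken;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;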

&lt;p&gt;Serverless architectures are tricky because there is no persistence between instances.&lt;/p&gt;

&lt;p&gt;Here’s how to handle caching Auth0 tokens for AWS Lambda microservices. But the same principles apply to other cloud providers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the DynamoDB Table
&lt;/h3&gt;

&lt;p&gt;(or equivalent serverless DB table in other cloud providers)&lt;/p&gt;

&lt;p&gt;Set your own name for the table, and set the partition key to &lt;code&gt;token&lt;/code&gt; as a &lt;em&gt;String&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/7cb82ba7de7ba67310c65847284f4e22/5b4a1/dynamodb-creation.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UjVsl7CW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://joshghent.com/static/7cb82ba7de7ba67310c65847284f4e22/fcda8/dynamodb-creation.png" alt="Screenshot 2022-01-11 at 15.44.50" title="Screenshot 2022-01-11 at 15.44.50" width="590" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the name of the table as an environment variable &lt;code&gt;TOKEN_CACHE_DB&lt;/code&gt; (the name the code below reads)&lt;/p&gt;

&lt;h3&gt;
  
  
  Retrieve and store tokens
&lt;/h3&gt;

&lt;p&gt;First let’s add a method to store new M2M tokens&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ===
// cacheToken.ts
// ===
import AWS from 'aws-sdk';

const storeNewToken = async (token: string) =&amp;gt; {
  const docClient = new AWS.DynamoDB.DocumentClient();
  const response = await docClient.put({ TableName: `${process.env.TOKEN_CACHE_DB}`, Item: { token } }).promise();
  return response;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code is simple enough and fairly self-explanatory.&lt;/p&gt;

&lt;p&gt;So, let’s move on and add a method that we can use in our Lambda handler to retrieve an M2M token.&lt;/p&gt;

&lt;p&gt;There are two paths for this method:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;There is an existing unexpired token in DynamoDB, so we use that.&lt;/li&gt;
&lt;li&gt;There is no token or only expired ones, so we generate a new one, store it in DynamoDB and use that.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We will design this system to only store one token at a time. This means we do not have to worry about old tokens and filtering them out on each initialization.&lt;/p&gt;

&lt;p&gt;So let’s write our method!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ===
// cacheToken.ts
// ===
import request from 'request-promise';

export const getAuthToken = async (): Promise&amp;lt;string&amp;gt; =&amp;gt; {
  const token = await getExistingToken();
  if (token !== '' &amp;amp;&amp;amp; hasTokenExpired(token) === false) {
    return token;
  }

  const params = {
    method: 'POST',
    url: `https://${process.env.AUTH0_NAME}.auth0.com/oauth/token`,
    headers: { 'content-type': 'application/json' },
    body: `{"client_id":"${process.env.AUTH0_CLIENT_ID}","client_secret":"${process.env.AUTH0_CLIENT_SECRET}","audience":"${process.env.AUTH0_AUDIENCE}","grant_type":"client_credentials"}`,
  };

  const result = JSON.parse(await request(params));
  if (!result["access_token"]) { throw new Error("No Access Token returned"); }

  await deletePreviousTokens(token);
  await storeNewToken(result['access_token']);

  return result["access_token"];
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break this down a little&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We first get the &lt;strong&gt;existing token in DynamoDB&lt;/strong&gt;. It returns the token or an empty string.&lt;/li&gt;
&lt;li&gt;If it returns a token, we check it’s not expired and then return that token.&lt;/li&gt;
&lt;li&gt;If it is expired, or there is no token, we go ahead and &lt;strong&gt;generate one from Auth0&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;We then &lt;strong&gt;delete the old token in DynamoDB, and store the new one&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Potentially, this flow (and the fact that DynamoDB is non-locking) could mean that multiple instances of your service save a token at the same time. But this cost will be minor compared to how much you save by caching in the first place.&lt;/p&gt;

&lt;p&gt;Let’s now create the methods we referenced in the &lt;code&gt;getAuthToken&lt;/code&gt; function, which handle token storage and validation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ===
// cacheToken.ts
// ===
import jwt_decode from 'jwt-decode';

const deletePreviousTokens = async (token: string) =&amp;gt; {
  const docClient = new AWS.DynamoDB.DocumentClient();
  const tokenRecords = await getAllTokens();

  // Clear down the table, awaiting each delete in turn
  // (forEach with an async callback would fire the deletes without awaiting them)
  if (tokenRecords.Items) {
    for (const row of tokenRecords.Items) {
      await docClient.delete({ TableName: `${process.env.TOKEN_CACHE_DB}`, Key: { "token": row.token } }).promise();
    }
  }
};

const hasTokenExpired = (token: string) =&amp;gt; {
  const decoded = jwt_decode(token) as { exp: number; iat: number; };
  if (decoded) {
    return decoded.exp &amp;lt; (new Date().getTime() / 1000);
  }

  return false;
};

const getAllTokens = async () =&amp;gt; {
  const docClient = new AWS.DynamoDB.DocumentClient();
  const response = await docClient.scan({
    TableName: `${process.env.TOKEN_CACHE_DB}`
  }).promise();

  return response;
};

const getExistingToken = async () =&amp;gt; {
  const response = await getAllTokens();

  if (response.Items &amp;amp;&amp;amp; response.Items.length &amp;gt; 0) {
    return response.Items[0]['token'];
  }

  return '';
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, let’s break this down&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In &lt;code&gt;deletePreviousTokens&lt;/code&gt; we grab all existing tokens and delete them one by one, by key. Deleting only the specific tokens we scanned avoids concurrency issues: we won’t delete a newer token that another instance has written in the meantime.&lt;/li&gt;
&lt;li&gt;In &lt;code&gt;hasTokenExpired&lt;/code&gt; we do a basic JWT validation to make sure that it is not expired. This could be improved by rejecting tokens that are about to expire (a sketch follows this list), but it has worked so far for me.&lt;/li&gt;
&lt;li&gt;In &lt;code&gt;getExistingToken&lt;/code&gt; we get all rows in the table and return the first token or an empty string if none is found.&lt;/li&gt;
&lt;/ul&gt;
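
&lt;p&gt;That improvement is a small change: treat a token as expired once it has less than some safety buffer left to live. Here’s a sketch of a drop-in replacement for &lt;code&gt;hasTokenExpired&lt;/code&gt; - the 60-second buffer is an arbitrary choice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const EXPIRY_BUFFER_SECONDS = 60;

const hasTokenExpired = (token: string) =&amp;gt; {
  const decoded = jwt_decode(token) as { exp: number; iat: number; };
  if (decoded) {
    const nowInSeconds = new Date().getTime() / 1000;
    // Expired, or close enough to expiry that it might die mid-request
    return decoded.exp &amp;lt; nowInSeconds + EXPIRY_BUFFER_SECONDS;
  }

  return false;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;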

&lt;h3&gt;
  
  
  Usage in the handler
&lt;/h3&gt;

&lt;p&gt;Now all that’s left to do is to add it to your Lambda function’s handler method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const handler = async (event: any, context: any) =&amp;gt; {
    const token = await getAuthToken();

  // Do something with the token
  await sendResultsToService(token, event.Results);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hopefully, you found this interesting and saved some money on your Auth0 bill!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>node</category>
      <category>auth0</category>
      <category>dynamodb</category>
    </item>
    <item>
      <title>How You Work</title>
      <dc:creator>J</dc:creator>
      <pubDate>Fri, 07 Jan 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/joshghent/how-you-work-5c34</link>
      <guid>https://dev.to/joshghent/how-you-work-5c34</guid>
      <description>&lt;p&gt;Learning how you work best is a superpower. Imagine, creating and seeking environments where you succeed best. Likely you remember times where you got into a flow state and produced magic, but you can’t pinpoint why.&lt;/p&gt;

&lt;p&gt;I was in this position and so took some time to figure out how I work best.&lt;/p&gt;

&lt;p&gt;Product user manuals have a section dedicated to “ideal working conditions” - how to get the best out of the machine. This covers maintenance, temperature, location and how it should be operated. In the same way, you can develop a “personal user manual” documenting how and where you work best.&lt;/p&gt;

&lt;p&gt;My user manual is broken down into five sections.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The kind of work I do best&lt;/li&gt;
&lt;li&gt;The environment for doing that work&lt;/li&gt;
&lt;li&gt;How I enjoy doing that work&lt;/li&gt;
&lt;li&gt;How I receive feedback&lt;/li&gt;
&lt;li&gt;How I get motivated&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For example, my user manual is included below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The kind of work I do best&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating things that I know will benefit people.&lt;/li&gt;
&lt;li&gt;Automation and saving time.&lt;/li&gt;
&lt;li&gt;Turning rough specs into tangible products.&lt;/li&gt;
&lt;li&gt;Thinking of edge cases and being able to dream of scenarios where bugs will occur at scale.&lt;/li&gt;
&lt;li&gt;Building clear API’s that are secure and scalable.&lt;/li&gt;
&lt;li&gt;Writing technical documentation.&lt;/li&gt;
&lt;li&gt;Architecting and building systems that can handle millions of customers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The environment for doing that work&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I’m informed of the bigger picture and the impact of my work. &lt;/li&gt;
&lt;li&gt;I like a small to-do list.&lt;/li&gt;
&lt;li&gt;Requirements stay reasonably consistent. But I have the autonomy to figure out how to meet those requirements.&lt;/li&gt;
&lt;li&gt;The benefit of my work is somewhat measurable.&lt;/li&gt;
&lt;li&gt;There is a culture of gathering and reviewing data to make decisions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How I enjoy doing that work&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I prefer to work asynchronously and reserve synchronous work for solving specific problems.&lt;/li&gt;
&lt;li&gt;In line with the above, I like to keep meetings to an absolute minimum. Including many of the sprint reviews, standups and retros that are commonplace in most software businesses.&lt;/li&gt;
&lt;li&gt;I like to have the flexibility to work inconsistent hours. Often, I find I solve problems better away from the computer rather than bashing my head against the wall.&lt;/li&gt;
&lt;li&gt;I believe in transparency and equality, I prefer to work with organisations that foster that same ethos.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How I receive feedback&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I love to improve my work and constantly ask myself how to do so. But, I need to understand the reason why something is better or is important.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Benefit
&lt;/h2&gt;

&lt;p&gt;By preparing a user manual, you can quickly establish a rapport with your co-workers and effectively communicate your “ideal working conditions” to your manager. Having led teams, I ended up mentally putting together a “user manual” for everyone on my team. A written document from each person would have proved invaluable.&lt;/p&gt;

&lt;p&gt;The advantage is that you and the people around you understand each other, and everyone can aim to keep work within those ideal parameters. Of course, this is not always possible. Tough, stupid things sometimes need doing. But a keen-eyed manager will always be aiming to balance these tasks with work you love.&lt;/p&gt;

&lt;p&gt;I’ve found my manual allows me to reaffirm my ideal work. Whenever I am thinking “I hate this work”, I review this manual and figure out &lt;em&gt;why&lt;/em&gt; I dislike it.&lt;/p&gt;

&lt;p&gt;I’d love to see your manuals! Send them to me on Twitter - &lt;a class="mentioned-user" href="https://dev.to/joshghent"&gt;@joshghent&lt;/a&gt; or via email &lt;a href="mailto:me@joshghent.com"&gt;at me@joshghent.com&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Continuous Delivery to ECS with Terraform</title>
      <dc:creator>J</dc:creator>
      <pubDate>Wed, 11 Aug 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/joshghent/continuous-delivery-to-ecs-with-terraform-5b7n</link>
      <guid>https://dev.to/joshghent/continuous-delivery-to-ecs-with-terraform-5b7n</guid>
      <description>&lt;p&gt;Continuous delivery is something that we’re all striving for. I was doing the same, but there was a snag:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My Terraform code and API code were in separate projects&lt;/li&gt;
&lt;li&gt;I wanted to make updates to the API code and have it build and update the ECS service&lt;/li&gt;
&lt;li&gt;I didn’t want to manage the container definition separately as it had too many dependent resources (Datadog sidecar etc.)&lt;/li&gt;
&lt;li&gt;I had multiple environments and didn’t want to use a separate git branch for the API code&lt;/li&gt;
&lt;li&gt;I had a branch for each environment of the infrastructure, each independently deployed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Maybe you have this problem too. Because Terraform uses a specific ECR image path to build the container definition, how do we ever update it automatically?&lt;/p&gt;

&lt;p&gt;There are several ways to solve this problem, many of which are discussed in &lt;a href="https://github.com/hashicorp/terraform-provider-aws/issues/632"&gt;this thread&lt;/a&gt;. But, today, I’m going to show you how I resolved this issue.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Set up your Docker build/deploy pipeline
&lt;/h3&gt;

&lt;p&gt;First things first, we need the API Docker image inside ECR. This will vary according to what CI system you use - in our case we use Azure DevOps.&lt;/p&gt;

&lt;p&gt;When the image is being built, we want to tag it uniquely. Ideally, you want something sequential. In our case, we chose to use Azure’s built-in “BuildId” parameter to tag the images.&lt;/p&gt;

&lt;p&gt;Below you can see the build steps we take in the CI pipeline. After the image is built, it creates a text file with the BuildId in it and ships that as an “Artifact”. This will become important later. But the main thing is you need to trigger a further pipeline for your environments based on that parameter changing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- task: Docker@2
  inputs:
    command: build
    DockerFile: "$(Build.SourcesDirectory)/Dockerfile"
    repository: ${{parameters.projectName}}
    tags: |
      $(Build.BuildId)

- task: ECRPushImage@1
  inputs:
    imageSource: "imagename"
    sourceImageName: ${{parameters.projectName}}
    sourceImageTag: "$(Build.BuildId)"
    repositoryName: ${{parameters.projectName}}
    pushTag: "$(Build.BuildId)"

- task: Bash@3
  displayName: "Upload Build Artifact of the Docker image Id"
  inputs:
    targetType: "inline"
    script: |
      # Add the build Id to a new file that will then be published as an artifact
      echo $(Build.BuildId) &amp;gt; .buildId
      cat .buildId

- task: CopyFiles@2
  displayName: "Copy BuildId file"
  inputs:
    Contents: ".buildId"
    TargetFolder: "$(Build.ArtifactStagingDirectory)"

- task: PublishBuildArtifacts@1
  displayName: "Publish Artifact"
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to run this pipeline now that you’ve created it.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Set up an SSM (Systems Manager) parameter
&lt;/h3&gt;

&lt;p&gt;SSM is an AWS service I had previously never really used. Its parameter store feature will allow us to store a variable that we can update later - in this case, the docker image tag.&lt;/p&gt;

&lt;p&gt;Create a new parameter by going to AWS Systems Manager &amp;gt; Application Management &amp;gt; Parameter Store. Name the parameter something like &lt;code&gt;/my-api/${env}/docker-image-tag&lt;/code&gt; (where &lt;code&gt;env&lt;/code&gt; is the environment; you’ll need to duplicate this parameter for each environment you have). It should be a “String” parameter, and its value should be the unique tag generated by your CI build pipeline - in my case, the BuildId.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Create your deployment pipeline
&lt;/h3&gt;

&lt;p&gt;Now, we need to define a way to update the image just in a certain environment (e.g., just &lt;code&gt;development&lt;/code&gt;). How can we do that? Because of our setup, the duplication effort is fairly minimal. We already have our image build (which is consistent across all environments). We just need to update that SSM parameter to use the unique tag (BuildId) that the build pipeline generated.&lt;/p&gt;

&lt;p&gt;In Azure, you can trigger a pipeline based on an artifact as we generated in step 1. I then configured 3 tasks based on this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get the BuildId from the file and add it to the runner’s environment&lt;/li&gt;
&lt;li&gt;Update the SSM parameter for that environment to the new BuildId (sketched below)&lt;/li&gt;
&lt;li&gt;Trigger the Infrastructure/Terraform pipeline for that environment - this is where the new SSM parameter value will get picked up and used in a container definition.&lt;/li&gt;
&lt;/ul&gt;
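
&lt;p&gt;As a sketch, the middle task boils down to a single AWS CLI call - shown here for the &lt;code&gt;development&lt;/code&gt; environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Read the BuildId that the build pipeline shipped as an artifact...
BUILD_ID=$(cat .buildId)

# ...and point this environment's parameter at the new image tag
aws ssm put-parameter \
  --name "/my-api/development/docker-image-tag" \
  --value "$BUILD_ID" \
  --type String \
  --overwrite
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;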

&lt;p&gt;&lt;a href="///static/801b5914c7263a82f8ee9df1386f3acd/84d4d/azure-update-ssm-pipeline.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EYLkxVIl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://joshghent.com/static/801b5914c7263a82f8ee9df1386f3acd/fcda8/azure-update-ssm-pipeline.png" alt="Sample of the update SSM job" title="Sample of the update SSM job"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Update Terraform to use the SSM parameter
&lt;/h3&gt;

&lt;p&gt;Now that the SSM parameter is updated each time there is a new build, we need to set up Terraform to use it as part of the image name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Import the SSM parameter
// This can be done on a module level because it depends on the environment
data "aws_ssm_parameter" "docker_image_id" {
  name = "/my-api/${var.environment}/docker-image-tag"
}

// Use it later on...
container_image = "&amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.&amp;lt;REGION&amp;gt;.amazonaws.com/my-api:${data.aws_ssm_parameter.docker_image_id.value}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. All done!
&lt;/h3&gt;

&lt;p&gt;Now you’ve set up a complete continuous deployment pipeline with Terraform and ECS. To review, here’s how the system works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The build pipeline creates a new ECR image tagged in a unique way&lt;/li&gt;
&lt;li&gt;The build pipeline notifies the release pipeline of this new image tag in some way (in Azure’s case, a build artifact)&lt;/li&gt;
&lt;li&gt;The release pipeline updates the SSM parameter based on the image tag and triggers the Terraform deployment pipeline for that environment.&lt;/li&gt;
&lt;li&gt;Terraform picks up the new SSM value and applies it&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are two obvious downsides to this approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Multiple updates to multiple APIs could cause locking issues where you have to re-run the Terraform pipeline manually.&lt;/li&gt;
&lt;li&gt;It’s a bit slower than other approaches, but it’s the best one I could find. Plus, if your Terraform repo deploys in less than 2 minutes like ours, it’s not a big problem.&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>How to Run Sequelize Migrations in Azure Pipelines</title>
      <dc:creator>J</dc:creator>
      <pubDate>Wed, 07 Apr 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/joshghent/how-to-run-sequelize-migrations-in-azure-pipelines-2h7f</link>
      <guid>https://dev.to/joshghent/how-to-run-sequelize-migrations-in-azure-pipelines-2h7f</guid>
      <description>&lt;p&gt;Database migrations are the concept of managing your database schema via reversible, version controlled files. A program is then used to run these “migrations” and keep track of which ones have been run on your database. Migrations are immutable, meaning if you want to change a column name, type or anything else then you have to create a new “migration”. Handling your database programmatically gives you many benefits. Namely, providing a consistent schema across all your environments and portability if something happens to your DB. Further, with these files being committed to source control, migrations can be reviewed by others on your team. If you’re not already using a tool to do this, I’d encourage you to do so.&lt;/p&gt;

&lt;p&gt;Throughout this article, I’ll refer to two terms&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Migrations - meaning files that change the schema of the database but not the underlying data&lt;/li&gt;
&lt;li&gt;Seeders - files that insert anonymous data into our staging environments for testing purposes&lt;/li&gt;
&lt;/ul&gt;
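
&lt;p&gt;For a feel of what a migration file looks like, here’s an illustrative Sequelize migration - the table and column names are made up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// migrations/20210407120000-add-last-login-to-users.js
module.exports = {
  up: async (queryInterface, Sequelize) =&amp;gt; {
    await queryInterface.addColumn('Users', 'lastLoginAt', {
      type: Sequelize.DATE,
      allowNull: true,
    });
  },

  // Every migration ships with its reversal
  down: async (queryInterface) =&amp;gt; {
    await queryInterface.removeColumn('Users', 'lastLoginAt');
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;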

&lt;p&gt;Normally, these migration and seed files would have to be run manually from a developer’s computer against the different databases. But, with my obsession with automation, this wouldn’t fly. I decided to create an Azure Pipeline runner to handle this for us. It will run automatically whenever new commits on our development or master branch are found. It also reduces stress for me, as I know that I will make mistakes, whereas a computer, configured correctly, won’t! 😅&lt;/p&gt;

&lt;p&gt;Although this article is designed around building Azure pipelines for &lt;a href="https://sequelize.org"&gt;Sequelize&lt;/a&gt; migrations, the process can be adapted to other ORMs such as &lt;a href="https://knexjs.org"&gt;Knex&lt;/a&gt; and &lt;a href="https://typeorm.io/#/"&gt;TypeORM&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Your Artifacts
&lt;/h2&gt;

&lt;p&gt;If you’re not familiar with Azure, it has a concept of &lt;a href="https://azure.microsoft.com/en-us/services/devops/artifacts/"&gt;“Artifacts”&lt;/a&gt;. These are a collection of files that can then be used by other pipelines. We need to create two artifacts, one for our migrations pipeline and the other for our seeding pipeline. In your source control create the following two files - you can copy-paste the code below!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# azure-migrate.yml
pool:
  name: azure-pipeline-runner
pr: none

steps:
  - task: CopyFiles@2
    displayName: "Copy migration scripts"
    inputs:
      contents: "$(Build.SourcesDirectory)/migrations/**"
      targetFolder: $(Build.ArtifactStagingDirectory)

  - task: PublishBuildArtifacts@1
    displayName: "Publish Artifact"
    inputs:
      pathToPublish: $(Build.ArtifactStagingDirectory)
      artifactName: migrate

# azure-seed.yml
pool:
  name: azure-pipeline-runner # the name of your azure pipeline runner
pr: none # Don't run this pipeline for pull requests

steps:
- task: CopyFiles@2
  displayName: 'Publish SequelizeRC'
  inputs:
    Contents: .sequelizerc
    FlattenFolders: true
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Seed'
  inputs:
    PathtoPublish: seeders
    TargetPath: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: seeders

- task: PublishBuildArtifacts@1
  displayName: 'Publish Sequelize Config Folder'
  inputs:
    PathtoPublish: config
    TargetPath: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above, the migration pipeline copies the &lt;code&gt;migrations&lt;/code&gt; folder and publishes it as a “build artifact”. The seed pipeline copies the &lt;code&gt;.sequelizerc&lt;/code&gt; file (which holds various configuration parameters for Sequelize), then publishes the &lt;code&gt;seeders&lt;/code&gt; and &lt;code&gt;config&lt;/code&gt; folders to the Azure artifacts library.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure Your Runner (Optional)
&lt;/h2&gt;

&lt;p&gt;This step is optional because it depends on your existing setup. All you need to make sure is that your Azure runner (whether self-hosted or not) can access the database.&lt;/p&gt;

&lt;p&gt;We use MySQL Aurora to host our database which sits in a VPC. Our &lt;code&gt;azure-pipeline-runner&lt;/code&gt; (defined in the “pool” parameter) is hosted inside the same VPC but a different security group. So, we needed to allow access from the runners’ security group to the RDS’ security group. This is called “ingress” in AWS. The port you need to allow access to may vary - in our case, it’s 3306 which is the default for MySQL.&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/b822f1e88ddc158ebea85eacc4df5d19/da5ba/runner-sg-config.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hucpi0s7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://joshghent.com/static/b822f1e88ddc158ebea85eacc4df5d19/fcda8/runner-sg-config.png" alt="Allowing ingress from one security group to another on Port 3306 - the default for MySQL" title="Allowing ingress from one security group to another on Port 3306 - the default for MySQL"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Getting this set up is a simple process. Check &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/working-with-security-groups.html#adding-security-group-rule"&gt;this guide for more info&lt;/a&gt;.&lt;/p&gt;
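
&lt;p&gt;If you prefer the AWS CLI to the console, the equivalent rule looks something like this - the security group IDs here are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add an inbound rule to the RDS security group allowing
# the runner's security group to connect on port 3306
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaaaaaaaaaaaaaaa \
    --protocol tcp \
    --port 3306 \
    --source-group sg-0bbbbbbbbbbbbbbbb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;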

&lt;h2&gt;
  
  
  Create Your Pipeline
&lt;/h2&gt;

&lt;p&gt;Now we’ve got our runner configured and our build artifacts published, we can move on to creating the actual pipeline. Go to &lt;strong&gt;Pipelines&lt;/strong&gt; &amp;gt; &lt;strong&gt;Releases&lt;/strong&gt;, click “+ New”, and select “Create Release Pipeline” from the dropdown.&lt;/p&gt;

&lt;p&gt;You’ll be prompted to select a template, but we can click “Empty Job”.&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/54001573d8a7fbb1dc1aeee2a0b6aa29/0cf26/runner-1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YhiCwmmr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://joshghent.com/static/54001573d8a7fbb1dc1aeee2a0b6aa29/fcda8/runner-1.png" alt="Empty Azure Pipeline job with an interface to select a template. We chose to start from scratch with an empty job." title="Empty Azure Pipeline job with an interface to select a template. We chose to start from scratch with an empty job."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, click the Artifacts box on the left and then find your artifact by searching for it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You’ll be able to find the name of the artifact under “Pipelines &amp;gt; Pipelines”. You should see your migration or seeding pipeline that you created earlier. Clicking into one of the runs of this job will reveal the artifact name that the job created.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, configure a stage. Click on the “Tasks” tab at the top of the page. This will take you to the list of “tasks” that will run for each stage.&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/165e41ade5a24fe519bf0b3006cc03bb/541fe/runner-2.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6oKrgcx1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://joshghent.com/static/165e41ade5a24fe519bf0b3006cc03bb/fcda8/runner-2.png" alt="Viewing the first default stage of the azure pipeline job" title="Viewing the first default stage of the azure pipeline job"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click to add a new task and search for “npm”. We first want to install the Sequelize CLI globally so that it can run the migrations or seeding process from the command line. Because we are using MySQL, we also need to install the &lt;code&gt;mysql2&lt;/code&gt; package.&lt;/p&gt;
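
&lt;p&gt;That npm task boils down to something like the following command - a sketch, which also installs &lt;code&gt;sequelize&lt;/code&gt; itself since the CLI expects it to be available:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install the Sequelize CLI and the MySQL driver globally on the runner
$ npm install -g sequelize sequelize-cli mysql2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;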

&lt;p&gt;The job should end up looking something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/232ccaab769fde32fc09bb3decc89aab/0486e/runner-3.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n1TRzOjB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://joshghent.com/static/232ccaab769fde32fc09bb3decc89aab/fcda8/runner-3.png" alt="An azure pipeline task with a configured job to install sequelize and other dependant packages required to run the migrations on the command line. The dependencies are installed globally with NPM." title="An azure pipeline task with a configured job to install sequelize and other dependant packages required to run the migrations on the command line. The dependencies are installed globally with NPM."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we need to add the task that runs the migrations or seeding process. Click the plus button again and select the “Command Line” task. This will allow us to run the Sequelize commands.&lt;/p&gt;

&lt;p&gt;The command we want to run for migrations is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sequelize-cli db:migrate --url ${DB_URL}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the seeding process it is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sequelize-cli db:seed:all --url ${DB_URL}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The documentation for these commands can be found &lt;a href="https://github.com/sequelize/cli#documentation"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We aren’t doing anything fancy aside from passing our database URL. Since this won’t be stored in our Git repo, we need to provide it here as an environment variable.&lt;/p&gt;
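
&lt;p&gt;For MySQL, the connection string takes the standard URL form - the values below are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql://username:password@your-db-host.example.com:3306/your_database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;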

&lt;blockquote&gt;
&lt;p&gt;If something goes wrong, check that the runner is executing from the correct directory. It should be the repository root, which contains a folder called “seeders” or “migrations” holding the seed and migration files.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Below is how our job ended up:&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/f54e4feb310bc1e1d73d5bc2d315bffc/b9bfc/runner-4.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hOavWbq0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://joshghent.com/static/f54e4feb310bc1e1d73d5bc2d315bffc/fcda8/runner-4.png" alt="An azure pipeline task with a configured job running the sequelize command to migrate the database. It shows the working directory and an environment variable of DB_URL." title="An azure pipeline task with a configured job running the sequelize command to migrate the database. It shows the working directory and an environment variable of DB\_URL."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you’ve configured one stage, you can clone it for the others! Go back to the Pipelines view and click the clone button beneath the stage card.&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/34ba91ff3a9db917ec7babef56d24d02/10e91/runner-5.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pNkLFo0C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://joshghent.com/static/34ba91ff3a9db917ec7babef56d24d02/10e91/runner-5.png" alt="The Azure pipeline stage card under the Pipelines view. It demonstrates how to click the clone button" title="The Azure pipeline stage card under the Pipelines view. It demonstrates how to click the clone button"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;I hope this helped you configure your database migrations! Here is a quick summary of what we have learnt.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Database migrations and seeding processes are important to codify for consistency, portability and review purposes.&lt;/li&gt;
&lt;li&gt;How to configure Azure runners to allow ingress to RDS.&lt;/li&gt;
&lt;li&gt;How to create build artifacts in Azure Pipelines.&lt;/li&gt;
&lt;li&gt;How to write basic Azure Pipelines jobs.&lt;/li&gt;
&lt;li&gt;How to migrate and seed your database using Azure Pipelines.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I’m glad to have this work sorted as it was a bit of a hassle to configure. But, we got there in the end and this is now a durable process that will scale with the team.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>sequelize</category>
      <category>mysql</category>
      <category>database</category>
    </item>
    <item>
      <title>Super Fast React/Node App Testing with GitHub Actions</title>
      <dc:creator>J</dc:creator>
      <pubDate>Thu, 25 Mar 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/joshghent/super-fast-react-node-app-testing-with-github-actions-5l7</link>
      <guid>https://dev.to/joshghent/super-fast-react-node-app-testing-with-github-actions-5l7</guid>
      <description>&lt;p&gt;A seldom thought of component of performance is that of continuous integration performance. Here at &lt;a href="https://york-e.com"&gt;York Press&lt;/a&gt;, we are big users of both Azure Pipelines and GitHub Actions. Due to us hosting our Azure pipeline runners, “job minute” restrictions were never a concern from a billing perspective. Although, the long running jobs did frustrate the team. Having moved some key processes over to GitHub Actions, I decided it was time that we looked at improving the performance of one of our core repositories. Not only would this mean developers had quicker feedback, but it would also mean we burned through our actions minutes a lot slower. Here’s how I did it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Initial Investigation
&lt;/h2&gt;

&lt;p&gt;GitHub Actions (and I promise this isn’t an ad) has a handy feature whereby you can see how many seconds each stage of a job took.&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/896199d0e6ccab284f16233857548040/fa88c/github-actions-timing.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gyjJ-R5I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://joshghent.com/static/896199d0e6ccab284f16233857548040/fcda8/github-actions-timing.png" alt="github actions timing" title="github actions timing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first thing I spotted was how long installing npm modules took - nearly 3 minutes! Because of this, I chose to combine the Test and Lint pipelines so that we would not need to duplicate the module installation.&lt;/p&gt;

&lt;p&gt;Secondly, I swapped out my normal &lt;code&gt;npm ci&lt;/code&gt; command for the &lt;code&gt;bahmutov/npm-install@v1&lt;/code&gt; action. This action handles the storage and cache invalidation of node modules across builds, saving install time. After those changes, here is what the timings looked like…&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/4e1a2fb9ad2398ce03d9d72fb087bdfd/72923/github-actions-timing-2.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hgX7LDLl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://joshghent.com/static/4e1a2fb9ad2398ce03d9d72fb087bdfd/fcda8/github-actions-timing-2.png" alt="github actions timing 2" title="github actions timing 2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Half the time gone! That’s a good start, but still not far enough. I found the modules were taking ages to install because of a Webpack plugin responsible for optimizing images - something we didn’t need in the CI process. I moved it into &lt;code&gt;optionalDependencies&lt;/code&gt; and then set the install command to &lt;code&gt;npm ci --no-optional&lt;/code&gt;.&lt;/p&gt;
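
&lt;p&gt;Put together, the install portion of the combined job looked roughly like this. It’s a sketch - the job name is an assumption, and &lt;code&gt;install-command&lt;/code&gt; is the action’s input for overriding the default install command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  test-and-lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Caches node_modules between builds and skips optionalDependencies,
      # so the image-optimization tooling never gets installed in CI
      - uses: bahmutov/npm-install@v1
        with:
          install-command: npm ci --no-optional
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;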

&lt;h2&gt;
  
  
  ESLint
&lt;/h2&gt;

&lt;p&gt;The other big fish to fry was ESLint - it took nearly 2 minutes 30 seconds to run. I debugged this locally using the environment variable &lt;code&gt;TIMING=1&lt;/code&gt;, which gives you a table view of how long each ESLint rule took to check.&lt;/p&gt;
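
&lt;p&gt;Running it locally looks like this - the &lt;code&gt;src&lt;/code&gt; path is an assumption about your project layout:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print a per-rule timing table after linting
$ TIMING=1 npx eslint src/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;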

&lt;p&gt;&lt;a href="///static/3dd5f2abfc77e21683513ff51817045d/3dd3e/eslint-timing.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oF-EN18O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://joshghent.com/static/3dd5f2abfc77e21683513ff51817045d/fcda8/eslint-timing.png" alt="eslint timing" title="eslint timing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Interestingly, it was the &lt;code&gt;import/&lt;/code&gt; rules that were taking the longest. After some google-fu, &lt;a href="https://github.com/benmosher/eslint-plugin-import/issues/1793"&gt;I discovered that this was due to having to build a dependency graph&lt;/a&gt; across the codebase. Our codebase is fairly large, so it was understandable that it took this long. I didn’t want to remove the rule entirely as it was useful, but surely there was a way around it…&lt;/p&gt;

&lt;p&gt;Actions to the rescue! Fortunately, a kind internet person has created a GitHub action that will run ESLint only on the files that have changed. I swapped it in like so&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- uses: tinovyatkin/action-eslint@v1
  with:
    repo-token: ${{secrets.GH_TOKEN}}
    check-name: eslint
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This completely eliminated the linting time when no changed files matched the scanning glob.&lt;/p&gt;

&lt;p&gt;From there, I spent more time than I care to admit trying to trim the time down. The main blockers were the dependency install (1m 20s average) and the Jest test suite (50s average). Although there are ways to &lt;a href="https://imhoff.blog/posts/parallelizing-jest-with-github-actions"&gt;run the Jest suite in parallel&lt;/a&gt;, it sort of seems redundant at this stage. The install is the big job, but the unfortunate battle is that we have the Webpack image-loader as a &lt;code&gt;devDependency&lt;/code&gt;. This installs a whole host of binary packages that get built from source - every. single. time. Anywho, I’m pleased with reducing it by 76%.&lt;/p&gt;

&lt;p&gt;Here are my main takeaways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Spend time speeding up your CI - fast developer feedback is important and saves you money (if you’re restricted on pipeline minutes)&lt;/li&gt;
&lt;li&gt;Use the pre-built actions - there is a huge marketplace of actions that solve a bunch of problems and have smart defaults. GitHub Actions is great (and I promise this isn’t an ad), in part, because it’s like code lego.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I hope this helps you with your journey in speeding up your CI.&lt;/p&gt;

</description>
      <category>actions</category>
      <category>github</category>
      <category>react</category>
      <category>ci</category>
    </item>
    <item>
      <title>SpellcheckCI</title>
      <dc:creator>J</dc:creator>
      <pubDate>Thu, 04 Mar 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/joshghent/spellcheckci-18c</link>
      <guid>https://dev.to/joshghent/spellcheckci-18c</guid>
      <description>&lt;p&gt;Making sure you have correct spelling on your blog posts is vital to keep readers attention. Unfortunately, it’s a laborious process and sometimes things fall through the cracks. Being the nerd I am, I decided I needed a shell script to solve this problem.&lt;/p&gt;

&lt;p&gt;Thankfully, someone has created an open-source, Node-based markdown spellcheck module - &lt;a href="https://github.com/lukeapage/node-markdown-spellcheck"&gt;mdspell&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Since I’m using Gatsby, my posts can be found under &lt;code&gt;content/blog/*/index.md&lt;/code&gt; - where &lt;code&gt;*&lt;/code&gt; is the name of the blog post. The command to run the spell check was then&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm i -g node-markdown-spellcheck &amp;amp;&amp;amp; mdspell -a -n "content/blog/**/*.md"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would go through each of my posts and validate the spelling. When it comes across an incorrect spelling, it notifies me and asks whether I want to correct it or add it to a local dictionary.&lt;/p&gt;

&lt;p&gt;But, because I often blog from my iPad, where I don’t have a terminal, I wanted this feedback to be visible on the CI for new blog posts. My workflow for creating new posts is: create a new git branch, create the file and write the post, then push to GitHub and open a pull request. You can find this exact blog post’s pull request &lt;a href="https://github.com/joshghent/blog/pull/165"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Time to Automate
&lt;/h2&gt;

&lt;p&gt;I’m a big user of GitHub Actions, so I went with that to set up this process.&lt;/p&gt;

&lt;p&gt;Initially, I went down the road of installing all the node dependencies, then installing mdspell, and then running the spellcheck. However, I found that it took over a minute to download all the node modules! It turns out I could have used &lt;code&gt;npx&lt;/code&gt; to run mdspell without installing the whole project.&lt;/p&gt;
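
&lt;p&gt;Because the package’s binary (&lt;code&gt;mdspell&lt;/code&gt;) doesn’t share its package name, npx needs to be told which package to fetch - a sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npx -p markdown-spellcheck mdspell -a -n -r "content/blog/**/*.md"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;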

&lt;p&gt;Here is the complete GitHub Actions workflow - which, across over 50 blog posts, takes around 10 seconds to run!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./.github/workflows/spellcheck.yml
name: Spellcheck

on: [pull_request]

jobs:
  spellcheck:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [14.x]

    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm i markdown-spellcheck -g
      - run: mdspell -a -n -r "content/blog/**/*.md"
        name: Spellcheck
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I hope this proves useful to you for your own blog. If you don’t have one already, I’d highly recommend creating one!&lt;/p&gt;

</description>
      <category>github</category>
      <category>actions</category>
      <category>ci</category>
      <category>blogging</category>
    </item>
    <item>
      <title>Setting up LightHouse CI for React in GitHub Actions</title>
      <dc:creator>J</dc:creator>
      <pubDate>Tue, 16 Feb 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/joshghent/setting-up-lighthouse-ci-for-react-in-github-actions-3mc3</link>
      <guid>https://dev.to/joshghent/setting-up-lighthouse-ci-for-react-in-github-actions-3mc3</guid>
      <description>&lt;p&gt;At &lt;a href="https://york-e.com"&gt;York Press&lt;/a&gt;, we noticed that our pages were gaining weight. In some cases, pages were loading over 1MB of resources before showing for the customer. This was unacceptable considering the modal broadband speed is around 1MB/s. So, we decided we needed stricter checks. This would ensure that pages are lighter than an ants leg made of clouds. And, faster load times would mean customers could get to studying faster - which I trust they yearn for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lighthouse to the Rescue!
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/GoogleChrome/lighthouse-ci"&gt;Lighthouse&lt;/a&gt; is a tool developed by Google. It analyses a page and gives it a score, out of 100, on SEO, Performance, Accessibility, PWA and Best Practises. Although these are arbitrary numbers, they give a rough guide to how your website is doing. These scores are also used to rank your page in Google search rankings. So they are vital to maintain for business reasons, not technical prowess.&lt;/p&gt;

&lt;p&gt;The challenge is getting this tool set up, as there are lots of outdated articles and guides. Furthermore, none of them seem to cover a regular use case: setting up Lighthouse for your React app.&lt;/p&gt;

&lt;p&gt;Here’s a definitive guide on how to set up LighthouseCI for your React app - and have it tracked in GitHub Actions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Lighthouse CI
&lt;/h2&gt;

&lt;p&gt;First, you will want to install LighthouseCI and http-server locally for testing purposes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm i -g @lhci/cli http-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The former is the LighthouseCI tool. The latter is a small module to run the React app after it has been built.&lt;/p&gt;

&lt;p&gt;Next, create a file called &lt;code&gt;lighthouserc.json&lt;/code&gt; with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "ci": {
    "collect": {
      "url": [
        "http://127.0.0.1:4000"
      ],
      "startServerCommand": "http-server ./build/client -p 4000 -g",
      "startServerReadyPattern": "Available on",
      "numberOfRuns": 1
    },
    "upload": {
      "target": "temporary-public-storage"
    },
    "assert": {
      "preset": "lighthouse:recommended",
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The section under “collect” is where the server that runs the React app is defined. The interesting properties are &lt;code&gt;startServerCommand&lt;/code&gt; and &lt;code&gt;startServerReadyPattern&lt;/code&gt;. The first tells Lighthouse how to start your application. The second tells Lighthouse what text to wait for before the test can begin. In this case, it starts the server via &lt;code&gt;http-server&lt;/code&gt; and then listens for the text &lt;code&gt;Available on&lt;/code&gt;. Run the command shown above for yourself and see what text it displays in your terminal. You may need to change &lt;code&gt;./build/client&lt;/code&gt; to the directory where your application gets built.&lt;/p&gt;
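
&lt;p&gt;The “assert” section, meanwhile, controls which failures break the build. The &lt;code&gt;lighthouse:recommended&lt;/code&gt; preset is strict, and individual audits can be relaxed with an &lt;code&gt;assertions&lt;/code&gt; block - a sketch with placeholder thresholds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "ci": {
    "assert": {
      "preset": "lighthouse:recommended",
      "assertions": {
        "categories:performance": ["warn", { "minScore": 0.9 }],
        "uses-responsive-images": "off"
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;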

&lt;p&gt;Now you can give your LighthouseCI a whirl! Build your application (if you used create-react-app then run &lt;code&gt;npm run build&lt;/code&gt;), then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm run build
$ lhci autorun
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should then see an output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;✅ .lighthouseci/ directory writable
✅ Configuration file found
✅ Chrome installation found
Healthcheck passed!

Started a web server with "http-server ./build/client -p 4000 -g"...
Running Lighthouse 1 time(s) on http://127.0.0.1:4000
Run #1...done.
Done running Lighthouse!

Checking assertions against 1 URL(s), 1 total run(s)

33 result(s) for http://127.0.0.1:4000/ :
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting up GitHub Actions CI
&lt;/h2&gt;

&lt;p&gt;Now, let’s automate that. The best way to enforce these sorts of checks is to make them part of your pull request workflow. This means preventing merge on requests that fail to meet these standards.&lt;/p&gt;

&lt;p&gt;All we need to do with GitHub Actions is imitate the commands we did in the setup process. Paste the following into a new file called &lt;code&gt;/.github/workflows/lighthouse.yml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./.github/workflows/lighthouse.yml
name: LighthouseCI

on:
  pull_request:

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Setup node
        uses: actions/setup-node@v1
        with:
          node-version: "14.x"

      - name: Install
        run: npm ci &amp;amp;&amp;amp; npm i -g http-server @lhci/cli

      - name: Build
        run: npm run build

      - name: LighthouseCI
        run: lhci autorun
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, push up your changes and create a new pull request. You should see your Action running at the bottom of the pull request.&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/5b38311beb56b9bd2a8515013e4b30b1/89a37/lighthouseci-pr.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_3ELPd-w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://joshghent.com/static/5b38311beb56b9bd2a8515013e4b30b1/89a37/lighthouseci-pr.png" alt="Pull Request Feedback for LighthouseCI Github Action" title="Pull Request Feedback for LighthouseCI Github Action"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that’s that! I hope that has saved you a lot of time if you were struggling to get your React app to play nice with GitHub Actions.&lt;/p&gt;

</description>
      <category>githubactions</category>
      <category>lighthouseci</category>
      <category>react</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
