<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: kerry convery</title>
    <description>The latest articles on DEV Community by kerry convery (@kerryconvery).</description>
    <link>https://dev.to/kerryconvery</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F768964%2F58cbd50b-ce4f-40f5-9eb9-9c75d30cf95f.png</url>
      <title>DEV Community: kerry convery</title>
      <link>https://dev.to/kerryconvery</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kerryconvery"/>
    <language>en</language>
    <item>
      <title>Your experience using AI to keep dependencies up to date.</title>
      <dc:creator>kerry convery</dc:creator>
      <pubDate>Mon, 02 Mar 2026 03:50:14 +0000</pubDate>
      <link>https://dev.to/kerryconvery/your-experience-using-ai-to-keep-dependencies-up-to-date-2p2h</link>
      <guid>https://dev.to/kerryconvery/your-experience-using-ai-to-keep-dependencies-up-to-date-2p2h</guid>
      <description>&lt;p&gt;I'm wondering is anyone has had experience using LLMs automatically keep dependeny packages up to date in their JS or Typescript projects.  &lt;/p&gt;

&lt;p&gt;My organisation currently uses Renovate to help keep dependencies up to date, but occasionally an update has a breaking change that needs human intervention to resolve.  I'm wondering whether an LLM could instead iterate in the background until the issue is resolved, and whether anyone has had experience with this approach?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>javascript</category>
      <category>llm</category>
    </item>
    <item>
      <title>How would you handle expiring presigned URLs on the frontend?</title>
      <dc:creator>kerry convery</dc:creator>
      <pubDate>Thu, 19 Feb 2026 11:39:32 +0000</pubDate>
      <link>https://dev.to/kerryconvery/how-would-you-handle-expirying-presigned-urls-on-the-frontend-2nak</link>
      <guid>https://dev.to/kerryconvery/how-would-you-handle-expirying-presigned-urls-on-the-frontend-2nak</guid>
      <description>&lt;p&gt;We have a case where we use presigned S3 urls as the src for image tags.  These urls are valid for 5 minutes.&lt;/p&gt;

&lt;p&gt;Our API returns a URL for each available image size.&lt;/p&gt;

&lt;p&gt;The frontend gets a list of user profiles which includes two profile photo URLs for each profile: a URL for a thumbnail and a URL for a large view of the photo.  All URLs are valid for 5 minutes.  The frontend caches the URLs and displays the list of profiles.  When a user clicks on a profile photo we display the larger version of the profile photo.&lt;/p&gt;

&lt;p&gt;The problem is that more than 5 minutes may pass before a user clicks on a thumbnail, so by the time the frontend displays the larger version of the profile photo, the URL for that image has already expired and we can't display it.&lt;/p&gt;

&lt;p&gt;How can we solve this problem?&lt;/p&gt;

&lt;p&gt;One option is to use a hidden image tag for the larger image and make it visible when the user clicks or hovers over it.  Even though it's hidden, the browser will still download it when the page is first rendered.  The problem is that the list of profiles can be 100 items long, which means that a lot of images will be downloaded and most will probably never be clicked on.&lt;/p&gt;

&lt;p&gt;Another option is to extend the expiry time of the URL to 12 hours.  While this works, it doesn't help if the user leaves their browser open for longer.&lt;/p&gt;

&lt;p&gt;Finally, we thought maybe we could fetch the image's presigned URL only when the user clicks on the thumbnail.&lt;/p&gt;
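&lt;p&gt;That last option can be sketched as a small client-side cache that refreshes lazily.  This is only a sketch: the &lt;code&gt;fetchFreshUrl&lt;/code&gt; helper, the TTL constant, and the safety margin are assumptions for illustration, not our actual API.&lt;/p&gt;

```javascript
// Sketch: cache presigned URLs with their expiry and refresh lazily on click.
// fetchFreshUrl and the 5-minute TTL are assumptions for illustration.
const TTL_MS = 5 * 60 * 1000;
const cache = new Map(); // profileId -> { url, expiresAt }

function isFresh(entry, now = Date.now()) {
  if (!entry) return false;
  return entry.expiresAt - now > 5000; // keep a 5-second safety margin
}

async function getLargePhotoUrl(profileId, fetchFreshUrl, now = Date.now()) {
  const entry = cache.get(profileId);
  if (isFresh(entry, now)) return entry.url;
  const url = await fetchFreshUrl(profileId); // e.g. ask the API for a fresh presigned URL
  cache.set(profileId, { url, expiresAt: now + TTL_MS });
  return url;
}
```

&lt;p&gt;On click, the frontend would await &lt;code&gt;getLargePhotoUrl&lt;/code&gt; and set the result as the image src, so the URL is always well inside its validity window, and a hidden profile simply surfaces as the refresh call failing.&lt;/p&gt;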

&lt;p&gt;Why do we set an expiry on the image URLs?  It is because a user can hide their profile, and when hidden their profile photo should not be available for viewing.  So we set the URL expiry to 5 minutes, and we do not serve a new URL if the profile has since been hidden.&lt;/p&gt;

&lt;p&gt;Any thoughts on other ways to solve this issue?  I feel that this must be a common problem and therefore there must be acceptable solutions that frontend devs use.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>aws</category>
      <category>frontend</category>
      <category>help</category>
    </item>
    <item>
      <title>How to create a simple DeepSeek Node.js server using a local model</title>
      <dc:creator>kerry convery</dc:creator>
      <pubDate>Sat, 01 Feb 2025 06:11:01 +0000</pubDate>
      <link>https://dev.to/kerryconvery/how-to-create-a-deekseek-nodejs-server-using-a-local-model-45a2</link>
      <guid>https://dev.to/kerryconvery/how-to-create-a-deekseek-nodejs-server-using-a-local-model-45a2</guid>
      <description>&lt;p&gt;This is a quick 5 minute overview of how to setup a nodeJS server that responds to prompts using a local Deepseek model, or any model supported by Ollama.&lt;/p&gt;

&lt;p&gt;This is based on the instructions found here:&lt;br&gt;
&lt;a href="https://github.com/sgomez/ollama-ai-provider" rel="noopener noreferrer"&gt;https://github.com/sgomez/ollama-ai-provider&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Download and install Ollama:&lt;br&gt;
&lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;https://ollama.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pull a DeepSeek model of your choice.  You can find more models &lt;a href="https://ollama.com/search" rel="noopener noreferrer"&gt;here&lt;/a&gt;:&lt;br&gt;
&lt;code&gt;ollama pull deepseek-r1:7b&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Initialise your project&lt;br&gt;
&lt;code&gt;pnpm init&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install the Vercel AI SDK&lt;br&gt;
&lt;code&gt;pnpm install ai&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install the ollama provider&lt;br&gt;
&lt;code&gt;pnpm install ollama-ai-provider&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install fastify&lt;br&gt;
&lt;code&gt;pnpm install fastify&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install Zod&lt;br&gt;
&lt;code&gt;pnpm install zod&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Create an index.ts file and paste in the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { generateText } from 'ai';
import { createOllama } from 'ollama-ai-provider';
import createFastify from 'fastify';
import { z } from 'zod';

const fastify = createFastify();

const promptSchema = z.object({
  prompt: z.string()
})

const ollama = createOllama({
  baseURL: 'http://localhost:11434/api',
});

fastify.post('/prompt', async (request, reply) =&amp;gt; {
  const promptResult = promptSchema.safeParse(request.body);

  if (!promptResult.success) {
    console.log(promptResult.error)
    return reply
      .code(400)
      .send({ error: 'Invalid request body: expected { prompt: string }' });
  }

  const result = await generateText({
    model: ollama('deepseek-r1:7b'),
    prompt: promptResult.data.prompt
  });

  return { answer: result.text }
})

await fastify.listen({ port: 3000 });
console.log('listening on port 3000');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure you have tsx installed and run&lt;br&gt;
&lt;code&gt;npx tsx index.ts&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can curl the endpoint to get a response&lt;br&gt;
&lt;code&gt;curl -X POST http://localhost:3000/prompt -H "Content-Type: application/json" -d "{\"prompt\": \"Tell me a story\"}"&lt;/code&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Question: How do you keep on top of the endless renovate PRs?</title>
      <dc:creator>kerry convery</dc:creator>
      <pubDate>Thu, 11 May 2023 08:47:43 +0000</pubDate>
      <link>https://dev.to/kerryconvery/question-how-do-you-keep-on-top-of-the-endless-renovate-prs-1l76</link>
      <guid>https://dev.to/kerryconvery/question-how-do-you-keep-on-top-of-the-endless-renovate-prs-1l76</guid>
      <description>&lt;p&gt;Hi Folks&lt;br&gt;
My team is currently battling a tsunami of Renovate PRs for npm package upgrades.  We are using Renovate to help automate package upgrades, ensuring that we are always on the latest versions, which in turn helps improve our security posture.&lt;/p&gt;

&lt;p&gt;However, over the past 6 to 12 months it has got to the point where we are struggling to keep up with fixing builds broken by a Renovate PR trying to upgrade a package.&lt;/p&gt;

&lt;p&gt;Some ideas we have thought of to try and combat this are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a culture of everyone spending an hour or two every morning working on Renovate PRs.&lt;/li&gt;
&lt;li&gt;Reduce the number of repos we have (we have quite a few) by merging some into mono-repos, thereby allowing us to share more packages between projects reducing the number of PRs.&lt;/li&gt;
&lt;li&gt;Focus on the package upgrades raised by Snyk instead of Renovate, as security is our highest priority.&lt;/li&gt;
&lt;li&gt;Create a team culture of: if you are working in a repo, make sure you resolve all Renovate PRs before releasing to prod.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each option has pros and cons.  For example, option 1 may not scale well.  Option 2 could mean that although a package upgrade works for one project in the mono-repo it may not work for another, delaying the upgrade for all projects in the repo, and it adds the complexity of being able to deploy projects in a mono-repo individually.  Option 3 means that some packages can fall far behind, making a future upgrade require a bigger jump between versions.  Option 4 means that seldom-touched repos don't get upgraded as frequently, resulting in a similar problem to option 3.  Therefore, a combination of these options, or other options, would be required.&lt;/p&gt;
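&lt;p&gt;Beyond these process options, tuning the Renovate configuration itself can reduce the flood.  The sketch below uses options from Renovate's configuration docs, with illustrative values: it groups non-major updates into a single PR and automerges them when CI passes.&lt;/p&gt;

```json
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "groupName": "non-major dependencies",
      "automerge": true
    }
  ],
  "prConcurrentLimit": 5
}
```

&lt;p&gt;This keeps human review for major bumps, where breaking changes actually concentrate, while routine patch noise merges itself.&lt;/p&gt;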

&lt;p&gt;I'm keen to learn what strategies others use to keep their package upgrades under control and ensure that their code remains secure.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How we used module federation to implement micro frontends</title>
      <dc:creator>kerry convery</dc:creator>
      <pubDate>Sun, 07 Aug 2022 00:43:00 +0000</pubDate>
      <link>https://dev.to/kerryconvery/module-federation-learnings-37oi</link>
      <guid>https://dev.to/kerryconvery/module-federation-learnings-37oi</guid>
      <description>&lt;p&gt;A while ago I posted a question to the community about problems I've encountered when implementing a micro frontend architecture and whether this is common or if there is a better way.&lt;/p&gt;

&lt;p&gt;Thank you to those who responded. &lt;a href="https://dev.to/kerryconvery/microfrontends-ke4"&gt;https://dev.to/kerryconvery/microfrontends-ke4&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not so long ago I was involved in a project as a tech lead, leading one of many teams building micro frontends using module federation.  My role was to work with the lead architect and other teams to guide my own team through the implementation, finding solutions to problems along the way.  It was arguably one of the best roles I've held so far.  I'd like to share how we went about it and our solutions to certain problems we faced.&lt;/p&gt;

&lt;p&gt;We were building several applications that spanned not only a large organization but also its partners and subsidiaries.  It was to be a giant web of interconnected micro frontends, so the solution had to scale very well in terms of rolling out new features and fixes quickly across all affected apps.  Module federation was seen as a way to enable this.&lt;/p&gt;

&lt;p&gt;Module federation is a feature of Webpack 5 which allows an application to consume code from another application at runtime using either static or dynamic imports.&lt;/p&gt;

&lt;p&gt;You can read more about module federation here &lt;a href="https://webpack.js.org/concepts/module-federation/" rel="noopener noreferrer"&gt;https://webpack.js.org/concepts/module-federation/&lt;/a&gt;&lt;/p&gt;
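&lt;p&gt;To make this concrete, a shell consuming a federated remote is wired up in the Webpack 5 config roughly like this; the remote name, URL, and shared packages below are invented for illustration:&lt;/p&gt;

```javascript
// webpack.config.js for the shell application (names and URLs illustrative).
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'shell',
      // catalog's remoteEntry.js is fetched at runtime, not at build time
      remotes: {
        catalog: 'catalog@https://assets.example.com/catalog/remoteEntry.js',
      },
      // packages that must exist only once in the stitched-together app
      shared: {
        react: { singleton: true },
        'react-dom': { singleton: true },
      },
    }),
  ],
};
```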

&lt;p&gt;Our micro frontend (MFE) applications consisted of a top-level shell application that consumed federated mini-apps, which we referred to as MFE components.  These MFE components can themselves consume other federated MFE components, creating a tree of nested MFE components.  The shell application acts as the entry point for end users and handles things like authentication, logging, and analytics, responsibilities that it could delegate to other MFE components.&lt;/p&gt;

&lt;p&gt;To help illustrate an MFE built using module federation, picture a shopping app that consumes a catalog MFE component and a shopping-cart MFE component.  The shopping-cart component also consumes a payment MFE component.  The shell application does not know that the shopping cart component consumes a payment component, also the catalog component does not know about the shopping-cart component. All of these components hosted inside the shell application form a user journey from searching to ordering to paying.&lt;/p&gt;

&lt;p&gt;The following image is an attempt to give a visual representation of how such an app could be stitched together.  Each colored dotted box represents a MFE component embedded in the shell application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbrs7l1u5pmr1ccofac0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbrs7l1u5pmr1ccofac0.png" alt="example of a micro frontend application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now imagine that these MFE components can be reused in many applications.  For example, the catalog MFE component could also be used in a back office application to list the products currently being sold, and could render edit and remove buttons for each product based on roles; more on that later.  If you later decide to change the look and feel of the catalog component or fix a vulnerability, then both the public facing application and the back office application are updated as soon as the new code is deployed, without needing to redeploy the shell applications.  This is why module federation was chosen: the ability to roll out changes to multiple applications at scale without having to rebuild and redeploy all of those applications.  This all sounds good on paper, but it requires discipline within the development teams to keep the architecture clean, as we know that even seemingly minor changes can have unforeseen consequences.  Only time will tell how well this strategy works in practice.&lt;/p&gt;

&lt;p&gt;There are many aspects to consider when building micro frontends. What follows are the areas that we probably spent the most time on trying to get right.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Communication&lt;/strong&gt;&lt;br&gt;
Communication between the shell and the MFE components is done via props and browser events.  Props allow a consumer to configure the MFE component or pass in callback functions. Browser events are published by MFE components and used for global/cross cutting notifications which would be difficult to implement cleanly using callbacks.  In the above example the catalog component would publish an ADD_ITEM event that the shopping cart listens for so that it knows the user wants to add an item to the cart.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contracts&lt;/strong&gt;&lt;br&gt;
We defined very specific contracts for each MFE component and events.  Typescript was chosen as the way to enforce those contracts at development time.  To share these contracts they were published in an npm package.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Versioning&lt;/strong&gt;&lt;br&gt;
We followed semantic versioning but we had a rule that there can be only 1 major live version at a time; maybe 2 in rare cases.  The reason for this rule was because only a major version is meant to contain breaking changes to a contract thereby forcing a consumer to pull the latest contracts, test and redeploy their app. In the case of minor and patch updates, as soon as a component is deployed into production the consuming app will immediately get the latest changes without needing to be redeployed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multiple domains and multiple environments&lt;/strong&gt;&lt;br&gt;
It is common for an application to be running in different environments, Dev, Staging, Prod but in a module federated world this presents a bit of a challenge.  How do you tell a consumer of an MFE component where it should consume it from?  These are the methods I've seen used in the past:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bundling a json containing the urls of the consumed MFE component for each environment&lt;/li&gt;
&lt;li&gt;Using the promise support in the module federation plugin to pull the configuration from a remote source at runtime.&lt;/li&gt;
&lt;li&gt;Using module federation dynamic loading to again pull the configuration from a remote source at runtime&lt;/li&gt;
&lt;li&gt;Baking the urls into the application at build time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While these methods work, and I have used the promise-based method for an earlier MFE, they do have their challenges, particularly when different domains are involved, as in our case: you run into trouble with CORS, and whitelisting the domains of all consumers of each MFE component isn't practical.  We solved this by utilizing the path-based routing capability of Akamai to route requests from the shell application to static assets or experience APIs.  This means that consumers don't need to know the location of the components they are consuming, and since all requests look like they come from the same domain, CORS also becomes a non-issue.&lt;/p&gt;

&lt;p&gt;This diagram illustrates how this works.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivm3lfvxjmm67ogrg8ko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivm3lfvxjmm67ogrg8ko.png" alt="Web request routing diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analytics and Logging&lt;/strong&gt;&lt;br&gt;
The MFE components and the shell all needed to report analytics and logs, but we wanted this handled at the shell level.  Therefore browser events were raised by each MFE component and captured by the shell.  The shell was then responsible for sending those messages to the analytics and logging platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Npm package dependencies&lt;/strong&gt;&lt;br&gt;
Module federation allows federated components to bring their own npm dependencies, and you can use the Medusa dashboard to see which dependencies are in use.  Some packages, such as React, can only exist once in an application, and module federation lets you mark a package as a singleton so that it is only downloaded once.  React versions 16 through 18 are mostly backwards compatible, as long as every component sticks to the features supported by the shell's React version.  If the shell wants to upgrade a singleton dependency in a way that is not backwards compatible, then all MFE components need to upgrade first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Routing&lt;/strong&gt;&lt;br&gt;
Each MFE component can introduce sub-routes, but the shell is not meant to know this, as it consumes MFE components without knowing how they work.  As such, the thinking was that an MFE component would bring its own router, and this is something we had trouble with.  We had standardized on React Router 6, and without resorting to hacks we couldn't keep the browser history across all of the routers in sync, so we settled on having just one router, in the shell.  Another issue was federating a single MFE component containing both the entry point view and the routes.  This couldn't be done because when the user navigates away, so that an MFE component is no longer being rendered, its routes stop working, since they are also not being rendered.  Therefore we decided that an MFE component should also federate a separate routes component, which is then nested into the shell's router.  Maybe other routers handle this better, but I would at least recommend standardizing on a specific router.  If an MFE component consumes another MFE component that has routes, then, since consumers are not meant to know about the second MFE component, I think the first MFE component could re-expose the second MFE component's routes as its own routes, and so on.&lt;/p&gt;

&lt;p&gt;If anyone has a better way to handle routing within a federated app then I would be interested to learn about it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role based access&lt;/strong&gt;&lt;br&gt;
One last thing was how we used role based access to control the visibility of UI elements within MFE components.  Basically, the shell applications consumed an authorization MFE component that was responsible for handling things like redirecting the user to the login page, refreshing the access token, caching basic user and role information, etc.  Each MFE component also consumed a Role MFE component, which had access to the cached role information through the React context API.  This works because the shell application plus the MFE components it consumes become a single app.&lt;/p&gt;

&lt;p&gt;The MFE components wrap UI elements that are controlled by role with the Role MFE component and pass in the required role(s) as a prop.  The Role MFE component decides whether or not to render its children based on whether the user has the required role(s).  This isn't really related to module federation and micro frontends, but I think it is a nice pattern.&lt;/p&gt;

&lt;p&gt;This post has become quite long and I've talked about the main areas I wanted to cover so I think I'll leave it there folks.  If you've made it this far then thank you for reading and I hope others embarking on micro frontends using module federation can take some food for thought from it.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>programming</category>
      <category>webdev</category>
      <category>react</category>
    </item>
    <item>
      <title>Micro frontends</title>
      <dc:creator>kerry convery</dc:creator>
      <pubDate>Sun, 22 May 2022 12:18:51 +0000</pubDate>
      <link>https://dev.to/kerryconvery/microfrontends-ke4</link>
      <guid>https://dev.to/kerryconvery/microfrontends-ke4</guid>
      <description>&lt;p&gt;I used to work for a company building modules for an application that used a micro frontend architecture. I did this for about 3 years and during that time the main problem I saw was the tight coupling between the a module and the host.&lt;/p&gt;

&lt;p&gt;Each module had to be on the same version of React, the same version of React Router, and the same version of the design system, as the host provided each of these.  The modules were also tightly coupled to the API provided by the host for things like analytics, feature switching, and logging.&lt;/p&gt;

&lt;p&gt;Upgrades to any of the common frameworks and libraries were coordinated across multiple teams and took a couple of months because each team had to upgrade, test, wait for fixes, rinse and repeat.&lt;/p&gt;

&lt;p&gt;The micro frontend framework was built in-house and there was only one application that used it.&lt;/p&gt;

&lt;p&gt;The organisation I am currently working for has recently embarked on building micro frontends, but on a much grander scale.  We are using Webpack module federation to build multiple applications with a micro frontend architecture, not only across the organisation but across multiple partners as well, where each application is composed of multiple smaller applications.  I am the tech lead for one of the teams involved in this effort.&lt;/p&gt;

&lt;p&gt;We are not really using module federation to share code between running applications.  Rather, we have a host and are using module federation to import components from a CDN into the host at runtime instead of at build time, as you would with npm packages, but the end result is the same.&lt;/p&gt;

&lt;p&gt;I bought into module federation as I believed that it would somehow not suffer from the tight coupling that my previous organisation experienced.  However, now that we are deep into building our first micro frontends, I am seeing the same problems begin to emerge.&lt;/p&gt;

&lt;p&gt;Micro frontend architecture was inspired by micro services architecture but there is a key difference in my view.  With micro services, each service remains independent and communication is done over an agreed protocol such as http.  You are not trying to build a monolith service by stitching the code of one micro service into another.  This allows each micro service to remain independent in terms of language, frameworks, cloud vendor, monitoring, etc.&lt;/p&gt;

&lt;p&gt;Contrast this with micro frontends, where you are actually building a monolith out of smaller parts: a kind of Frankenstein's monster with parts that mostly work together stitched onto it, plus a few hacks thrown in here and there.&lt;/p&gt;

&lt;p&gt;Before we went down the road of micro frontends we had built separate applications which when connected together through urls formed a user flow that took the user from a dashboard to ordering to payment.  The user would be hopped from one application to another and this worked, except for the need for each application to refetch data from backend systems instead of being able to share state within the browser.  Each application was built and maintained by a separate team.&lt;/p&gt;

&lt;p&gt;The reason as I understand it that the organisation decided to switch to module federation was so that code could be reused between applications plus you can more easily share state without taking a performance hit.&lt;/p&gt;

&lt;p&gt;However I'm beginning to wonder if it is worth it.  You can share common components using npm via your design system or some other ecosystem.  My previous company utilised atomic design principles for shared components which I think worked well.  For shared state, there is session storage or you could utilise a shared low latency cache.  Smart routing rules would allow each application to appear to be on the same domain and a consistent look and feel between applications could be achieved through a design system.&lt;/p&gt;

&lt;p&gt;I think that by having separate applications connected only by urls, each team gets more freedom and is less coupled to the others.  There are also fewer coordinated upgrade efforts, and each team can really move forward on its own without having to worry that it can't move to React 19 because it has some breaking changes relative to React 18 and it needs to wait until other teams have upgraded their apps; meanwhile it publishes a React 19 version of its app but still has to maintain the React 18 version and implement new features twice.&lt;/p&gt;

&lt;p&gt;This was a bit long but I would really like to hear the thoughts of the community, especially those who have more experience with micro frontends than I do.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>react</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Smart code vs dumb code.</title>
      <dc:creator>kerry convery</dc:creator>
      <pubDate>Sun, 22 May 2022 10:56:36 +0000</pubDate>
      <link>https://dev.to/kerryconvery/smart-code-vs-dump-code-3i3j</link>
      <guid>https://dev.to/kerryconvery/smart-code-vs-dump-code-3i3j</guid>
      <description>&lt;p&gt;About 6 months ago my team wrote an app that allows a user to change aspects of their order.&lt;/p&gt;

&lt;p&gt;The app works by first initiating a transaction and then guides the user through a series of steps until a "review changes" page is reached. On the review page the user can review their changes and either submit, repeat the flow and make further changes or cancel.&lt;/p&gt;

&lt;p&gt;Between each step in the flow an api call, updateOrder, is made to save progress.  However when moving from the last page in the flow to the review page a different api call, updateOrderWithPrice, needs to be made, which, in addition to saving their changes also recalculates the price of the order to be presented on the review page.&lt;/p&gt;

&lt;p&gt;The api call updateOrderWithPrice takes longer than updateOrder because in addition to saving the changes to the transaction it has to also calculate the new price.&lt;/p&gt;

&lt;p&gt;Everything works well, except that the decision of when to use updateOrder or updateOrderWithPrice is left up to the developer, who therefore has to be familiar with the API.  This is particularly a problem when onboarding new developers, and I have seen the wrong API call being used.&lt;/p&gt;

&lt;p&gt;We know that the updateOrderWithPrice API call is synonymous with moving from the last page in the flow to the review page.  Likewise, updateOrder is synonymous with moving between the earlier pages in the flow.&lt;/p&gt;

&lt;p&gt;Therefore, to reduce cognitive load and human error, we should remove the choice from the developer and instead have an underlying mechanism in the application that makes the decision for them.  The developer then only needs to specify which page to move to next, and the application decides whether it needs to call updateOrder or updateOrderWithPrice.  This removes the burden from the developer, and with it the chance for human error.&lt;/p&gt;

&lt;p&gt;There are many ways this could be done, and it would depend on the application.  This particular application uses React Redux, so we can take advantage of that by introducing a middleware that calls the appropriate API when the action to move to the next page is dispatched.&lt;/p&gt;

&lt;p&gt;To me this is the difference between smart code and dumb code.  Dumb code simply executes what it's told to and leaves all of the decisions up to the developer.  Smart code, by contrast, is goal focused and takes care of the underlying details of achieving that goal.  In the case above, the developer's goal is to move to the next page, and they only need to decide which page that is; the rest is left up to the application.&lt;/p&gt;

&lt;p&gt;This also simplifies testing: you would initially have a test proving that when moving between pages the right API is called.  Then, when creating a new flow, you would only need to test that the user is navigated to the right page, not which API is called, since that test already exists.&lt;/p&gt;

</description>
      <category>programming</category>
    </item>
  </channel>
</rss>
