DEV Community

Stefano Magni


Some things I learnt from working on big frontend codebases

Until now (May 2023), I have had two experiences working on very big front-end (React + TypeScript) codebases: WorkWave RouteManager and the Hasura Console. Each of them is ~250K LOC, and the two experiences were very different. In this article, I report the most important problems I saw while working on them: things that are usually not a big deal in smaller codebases, but that become a source of big friction when the app scales.



My direct experience

First of all, let me describe the main characteristics of the two projects:

  1. WorkWave RouteManager: the product is very complex due to some back-end limitations that force the front-end to take on a lot of complexity. Still, thanks to the strong presence of a great front-end architect (that's Matteo Ronchi, by the way), the codebase can be considered front-end perfection. The codebase is completely new (rewritten from scratch from 2020 to 2022), new tools are tried and adopted at a high cadence (for instance: we started using Recoil way sooner than the rest of the world, we migrated the codebase from Webpack to Vite in 2021, etc.), and the coding patterns are respected everywhere. Here I was the team leader of the front-end team.

  2. Hasura Console: the complexity of the project is not so high, but the startup's needs (pushing out new features as soon as possible) later resulted in some technical debt and antipatterns that are now big friction points for the developers working on it. Here, I joined as a senior front-end engineer and later became the tech lead of the platform team.

What follows is a non-exhaustive list of examples of the characteristics/activities/problems I saw, grouped by category.

Generic approaches

Managing more cases than needed

This innocent approach leads to big problems and a waste of time when you have to refactor a lot of code trying to maintain the existing features. Some examples are:

  1. Components/functions with optional props/parameters and fallback default values: when you need to refactor the components, you need to understand which are the indirect consumers of the default values... But what happens if the usage of the default values is driven by network responses? You need to understand and simulate all the edge cases! And what happens if you find out that the default values are not used at all? I once saw a colleague of mine waste four hours during a refactor because of an unused default value...

  2. Types declared as a generic string or a generic Record<string, any> when in reality the possible values are known in advance. The result is a lot of code that manages generic strings and objects, while managing the real, finite set of cases would be 10x easier. Again, when you need to refactor the code managing "generic" values, you are going to waste time.

I touched on these topics in my How I ease the next developer reading my code article.
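To make the first point concrete, here is a minimal sketch (the function and its names are hypothetical, not taken from the codebases mentioned above): a fallback default silently creates indirect consumers, while a required parameter keeps every consumer explicit.

```typescript
// ❌ Who relies on the 'EUR' fallback? You must audit every call site (and
// every network response that may leave `currency` undefined) before changing it.
function formatPrice(amount: number, currency: string = 'EUR'): string {
  return `${amount} ${currency}`;
}

// ✅ Every consumer states what it uses: refactoring the currency handling
// becomes a compile-time-guided task instead of an investigation.
function formatPriceExplicit(amount: number, currency: string): string {
  return `${amount} ${currency}`;
}
```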

Leaving dead code around

You refactor a module, you remove an import of an external module, and you are fine. But what happens if your module was the last consumer of the external one? The external module becomes dead code. It will not be embedded in the application (nice), but it will confuse everyone going around the codebase looking for solutions/utilities/patterns, and it will frustrate the future refactorer, who will blame whoever left the unused module there!

And obviously, it's a waterfall... the external module could import other unused modules, and those could depend on an external NPM dependency that could now be removed from package.json, etc.

Internal code dependencies and boundaries

Not enforcing strong boundaries among product features/libraries/utilities (through ESLint rules or through a proper monorepo structure) brings unexpected breakages as a result of innocent changes. Something like FeatureA importing from internal modules of FeatureB, which imports from internal modules of FeatureA and FeatureC, etc. leads you to break 50% of the product by changing a simple prop in a FeatureA component. And if you have a lot of JavaScript modules never converted to TypeScript, you will also have a hard time understanding the dependency tree among features...

I strongly suggest reading React project structure for scale: decomposition, layers and hierarchy.
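As a sketch of how such boundaries can be enforced, eslint-plugin-import's `no-restricted-paths` rule can forbid one feature from reaching into another feature's internals (the paths below are hypothetical):

```javascript
// .eslintrc.js (sketch, hypothetical paths)
module.exports = {
  plugins: ['import'],
  rules: {
    'import/no-restricted-paths': [
      'error',
      {
        zones: [
          {
            // Code inside featureA...
            target: './src/features/featureA',
            // ...cannot import featureB's internals...
            from: './src/features/featureB',
            // ...except through featureB's explicit public API.
            except: ['./index.ts'],
          },
        ],
      },
    ],
  },
};
```

Monorepo tooling (Nx-style module-boundary rules, separate packages, etc.) achieves the same result at the package level.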

Implicit dependencies

They are the hardest things to deal with. Some examples?

  • Global styles that impact your UI's look&feel in unexpected ways
  • A global listener on some HTML attributes that does things without the developer knowing about them
  • A generic MSW mock server that all the tests use, making it impossible to know which handlers are used by which tests

Again, pity the refactorer who will have to deal with those. Explicit imports, self-explanatory HTML attributes, inversion of control, etc., instead, allow you to easily recognize who consumes what.
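A tiny sketch of the inversion-of-control idea (hypothetical names): the second function can be understood, tested, and refactored without hunting for hidden global state.

```typescript
// ❌ Implicit dependency: who fills this module-level map? Nobody can tell
// from the call site.
const globalFeatureFlags: Record<string, boolean> = {};

function canExportImplicit(): boolean {
  return globalFeatureFlags['export'] ?? false;
}

// ✅ Explicit dependency: the flags are a parameter, so every consumer
// (and every test) states exactly what it depends on.
function canExportExplicit(flags: Record<string, boolean>): boolean {
  return flags['export'] ?? false;
}
```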

Big modules

This is another very subjective topic: I prefer a lot of small, single-purpose modules over long ones. I know that a lot of people prefer the opposite, so it's mostly a matter of respecting what is important for the team.

Code readability

I'm a fan of The Art of Readable Code book, and after spending 2.5 years working on a big and complex codebase with zero (!!!) tests, I can tell how important code readability is.

This also really depends on the number of developers working on a codebase, but I think it's worth investing in some shared coding patterns that must be enforced in PRs (or even better if they can be automated through Prettier or similar tools).

I publicly shared the ones we were using in WorkWave in this 7-article series: RouteManager UI coding patterns. The internal rule we had was that "patterns must be recognizable in the code, but not authors".

No silver bullets here, the important thing IMO is that readability and refactorability are kept in mind by everyone when writing code.

Uniformity is better than perfection

If you are about to refactor a module but do not have time to also refactor the two modules coupled with it... consider not refactoring it at all, to keep the three modules uniform (uniformity means predictability and less ambiguity).

Working flow

No PR description and big PRs

That's such an important topic that I wrote four articles about it. Start with the most important one: Support the Reviewers with detailed Pull Request descriptions

And if you are curious, you can dig into some real-life examples I documented here.

Suggesting big changes and approaches during code reviews

PRs are not the best place to suggest big changes or a completely different approach, because you are indirectly blocking the release of a feature or a fix. Sometimes it's crucial to do so, but the initial analysis and estimation steps, pair programming sessions, etc. usually work better to help shape the approach and the code.

When to fix the technical debt?

That's a great question, no silver bullet here... I can only share my experience so far:

  1. In WorkWave we were used to dealing with technical debt on a daily basis. Fixing tech debt is part of the everyday engineers' job. This can slow down the feature development in favour of having a deep knowledge of the context and keeping the codebase in a good shape. It's like knowing that you are slowing down today's development to keep tomorrow's development at the current pace.
  2. In Hasura, we could not deal with technical debt due to the need to deliver new features. This translated into a lot of front-end developers going slower than their potential, sometimes introducing bugs, and offering an imperfect UX to the customers. Obviously, this only became visible after years.

You can read more about a good example of Hasura's problems in my Frontend Platform use case - Enabling features and hiding the distribution problems article. Also, you could read what happened to our E2E tests here after all the tech debt problems we were facing.

No front-end oriented back-end APIs

By "no front-end oriented" I mean APIs not designed with the end customers' UX in mind, with a lot of complexity pushed to the front-end in order to keep the back-end development lean (e.g. embedding a lot of DB queries in the front-end to avoid exposing a new API from the back-end). This approach is natural during the initial evolution of a product, but it leads to more and more complex front-ends when the product needs to scale.

Never updating the NPM dependencies

Again, based on my own experiences:

  1. In WorkWave, I used to update the external dependencies on a weekly basis. Usually it took me 30 minutes, sometimes 4 hours.
  2. In Hasura, we used not to update them, and we found ourselves enabling legacy-peer-deps by default, leveraging NPM's overrides, and being unable to update any GraphQL-related dependency. Not to mention the many PRs that completely broke the build because of a new dependency.

And since maintaining dependencies has a cost, you should carefully consider whether you really need an external dependency or not. Is it maintained? Does it solve a complex problem I prefer to delegate to an external party?

TypeScript

Bad practice: Generic TypeScript types and optional properties

It is very common to find types like this:

type Order = {
  status: string
  name: string
  description?: string
  at?: Location
  expectedDelivery?: Date
  deliveredOn?: Date
}

that should be represented with a discriminated union like this:

type Order = {
  name: string
  description?: string
  at: Location
} & ({
  status: 'ready'
} | {
  status: 'inProgress'
  expectedDelivery: Date
} | {
  status: 'complete'
  expectedDelivery: Date
  deliveredOn: Date
})

that is more verbose but acts as pure domain documentation, removes tons of ambiguity, and allows writing better and clearer code.
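Here is a sketch of how the union pays off on the consuming side (describeOrder is a hypothetical function): TypeScript narrows the type on each status branch, so no optional chaining or defensive checks are needed.

```typescript
type Location = { lat: number; lng: number };

type Order = {
  name: string;
  description?: string;
  at: Location;
} & (
  | { status: 'ready' }
  | { status: 'inProgress'; expectedDelivery: Date }
  | { status: 'complete'; expectedDelivery: Date; deliveredOn: Date }
);

function describeOrder(order: Order): string {
  switch (order.status) {
    case 'ready':
      // order.expectedDelivery does not even exist here: reading it is a type error
      return `${order.name} is ready`;
    case 'inProgress':
      // expectedDelivery is guaranteed to exist: no `?.` and no runtime fallback
      return `${order.name} expected on ${order.expectedDelivery.toISOString()}`;
    case 'complete':
      return `${order.name} delivered on ${order.deliveredOn.toISOString()}`;
  }
}
```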

The topic is so important and has so many great advantages that I wrote a dedicated article to the topic: How I ease the next developer reading my code.

Type assertions (as)

Type assertions are a way to tell TypeScript "shut up, I know what I'm doing", but the reality is that you rarely know what you are doing, especially when it comes to the consequences of what you are doing...

This happens very frequently in tests, where big objects are "typed" with type assertions... resulting in the object becoming outdated compared to the original type... But you only realize it when the tests fail, and by then you have left room for a lot of doubts about the test failures...

The solution: type everything correctly and, if needed, prefer @ts-expect-error with an explanation of the error you expect.

Read Why You Should Avoid Type Assertions in TypeScript to know more about the topic (and keep in mind that the JSON.parse example shown there can also be typed by using Zod parsers).
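For tests specifically, a small builder function is a sketch of the alternative (the Order type and its defaults are hypothetical): unlike `{ name: 'Pizza' } as Order`, it stops compiling as soon as the type gains a new required property, surfacing the drift immediately.

```typescript
type Order = { name: string; status: string; quantity: number };

// ❌ const order = {} as Order  → compiles today, silently drifts tomorrow

// ✅ A typed factory: valid defaults, overridable per test, checked by the compiler
function buildOrder(overrides: Partial<Order> = {}): Order {
  return { name: 'Pizza', status: 'ready', quantity: 1, ...overrides };
}
```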

@ts-ignore instead of @ts-expect-error and broad scope

@ts-expect-error directives report an error themselves once the issue they suppress is fixed, so they could even be auto-fixed in the future, compared to @ts-ignore (which is just another way to shut TypeScript up, forever).

Moreover, @ts-expect-error should be applied to the smallest possible scope, to avoid TypeScript accepting unintended errors.

// ❌ don't
// @ts-expect-error TS 4.5.2 does not infer correctly the type of typedChildren.
return React.cloneElement(typedChildren, htmlAttributes); // <-- the whole line is impacted by @ts-expect-error

// ✅ do
return React.cloneElement(
  // @ts-expect-error TS 4.5.2 does not infer correctly the type of typedChildren.
  typedChildren, // <-- only typedChildren is impacted by @ts-expect-error
  htmlAttributes
);

any instead of unknown

TypeScript's any gives you the freedom (that's generally bad) to do everything you want with a variable, while unknown forces you to strictly check the runtime value before consuming it. any is like turning off TypeScript, while unknown is like turning on all the possible TypeScript alerts.
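A minimal sketch of the difference (hypothetical functions): the any version compiles happily and misbehaves at runtime, while the unknown version cannot even be written without a runtime check.

```typescript
// ❌ Compiles happily... and returns undefined (or throws) at runtime
// whenever the value has no .length
function lengthOfAny(value: any): number {
  return value.length;
}

// ✅ unknown forces a type guard before any property access
function lengthOfUnknown(value: unknown): number {
  if (typeof value === 'string' || Array.isArray(value)) {
    return value.length;
  }
  return 0;
}
```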

ESLint rules kept as warnings

ESLint warnings are useless: they only add a lot of background noise and get completely ignored. Rules should be on or off, but never warnings.
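As a sketch, the rule configuration stays binary (and a CI step such as `eslint --max-warnings 0` can catch any warning that slips in from third-party presets):

```javascript
// .eslintrc.js (sketch): every enabled rule blocks the build
module.exports = {
  rules: {
    'no-unused-vars': 'error', // not 'warn': either it matters or it is 'off'
    'no-console': 'error',
  },
};
```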

Validating the external data

In the software world, the rule of "never trust what the frontend sends to the backend" is crucial, but I'd say that in a front-end application armed with TypeScript types, you should not trust any kind of external data. Server responses, query strings, local storage, JSON.parse, etc. are potential sources of runtime problems if not validated through type guards (read my Keeping TypeScript Type Guards safe and up to date article) or, even better, Zod parsers.
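As a minimal sketch without bringing in Zod (which would replace this boilerplate with a one-line schema), a hand-written type guard makes the trust boundary explicit (the User type and the parsing function are hypothetical):

```typescript
type User = { id: number; name: string };

// Runtime check that the unknown value really has the User shape
function isUser(value: unknown): value is User {
  if (typeof value !== 'object' || value === null) return false;
  const candidate = value as Record<string, unknown>;
  return typeof candidate.id === 'number' && typeof candidate.name === 'string';
}

// JSON.parse returns `any`: immediately downgrade it to `unknown` and validate
function parseUserResponse(json: string): User {
  const data: unknown = JSON.parse(json);
  if (!isUser(data)) {
    throw new Error('Unexpected server response shape');
  }
  return data;
}
```

Note that the `as Record<string, unknown>` above is a contained, local widening inside the guard, not an assertion about the external data itself.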

React

HTML templating instead of clear JSX

JSX that includes a lot of conditions, loops, ternaries, etc. is hard to read and sometimes unpredictable. I call it "HTML templating". Instead, smaller components with a clear separation of concerns are a better way to write clear and predictable JSX.

Again, I touched on this topic in my How I ease the next developer reading my code article.
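A sketch of the idea (hypothetical component and names): instead of nesting ternaries inside the JSX, the decision moves into a well-named pure function, and the markup reads linearly.

```typescript
type OrderStatus = 'ready' | 'inProgress' | 'complete';

// ❌ In JSX: {status === 'complete' ? 'Delivered' : status === 'inProgress' ? 'On its way' : 'Ready to ship'}

// ✅ The decision lives in one named, testable place
function orderBadgeLabel(status: OrderStatus): string {
  switch (status) {
    case 'ready':
      return 'Ready to ship';
    case 'inProgress':
      return 'On its way';
    case 'complete':
      return 'Delivered';
  }
}

// The JSX then becomes: <Badge>{orderBadgeLabel(order.status)}</Badge>
```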

Lot of React hooks and logic in the component's code

I'm a great fan of hiding a React component's logic in custom hooks whose names clearly indicate their scope, and then consuming them inside the component. The reason is always the same: long code before the JSX makes the JSX harder to read.

Tests

Bad tests

As a test lover and instructor (I teach front-end testing at private companies and conferences), I can say that bad tests are the result of a lack of experience with the topic, and the only solution is help, mentoring, help, mentoring, help, mentoring, etc.

Anyway, the false confidence that tests can offer is a big problem in every codebase.

I suggest reading two of my articles:

E2E tests everywhere

E2E tests do not scale well because of the need for real data, a real back-end, etc.

Also, in this case, I suggest reading some of my articles:

Developer Experience

Deprecated APIs

When code is marked as @deprecated, the IDE shows it struck through and presents the documentation, helping developers realize that they should not use it.

An example:


/**
 * @deprecated Please use the new toast API /new-components/Toasts/hasuraToast.tsx
 */
export const showNotification = () => { /* ... */ }


Care about the browser logs

Console warnings (coming from ESLint, from TypeScript, from React, from Storybook, etc.) add a lot of background noise that mixes with the important logs you want to trace. Care about them and remove them, so that developers do not ignore your own important alerts because of the high noise.

Developer alerts for unexpected things

Runtime data (e.g. server responses) might not be aligned with the front-end types. If you do not want to break the user flow by throwing an error, at least track the problem through something that can alert you about it (like Sentry, or any other similar tool), so that little time passes between the error appearing and you fixing it.

React-only APIs

If you are creating an internal library, prefer to expose React-only APIs. The big advantage is that you can count on React's reactivity system, and managing dynamic/reactive cases in the future will be easier because you are sure the consumers of your React APIs re-render for free and always deal with fresh data.

Credit where credit is due

Thank you so much to M. Ronchi and N. Beaussart for teaching me so many important things in the last few years ❤️ a lot of content included in this article comes from working with them on a daily basis ❤️

Top comments (37)

Jeremy Smith

Great article!! Agree to it all!

Stefano Magni

Thank you, Jeremy!! 😊

stefanonepa

Lots of wise advices!
Thanks

Stefano Magni

Thanks to you for leaving your appreciation here 😊

Hassan Suhaib

This is gold! Thanks for sharing Stefano. Learned a ton!

Dipanjan Ghosal

This is a great read! Saving this so as to go through all the links later.

Stefano Magni

Sure, I know reading all of them takes some time 😅

Omri Lavi

Amazing article, thank you! I plan to read most of the linked contents very soon.
I have a question - what do you do when your opinions about tests (or readable code) are different than your team's opinions? For example, if you find yourself working with a team that doesn't find the value of tests.
Thanks again :)

Stefano Magni

what do you do when your opinions about tests (or readable code) are different than your team's opinions?

That's a great question 😊

I never dealt with such a situation for a long time. Time makes the difference here, because when we speak about the short term, honestly there is no difference. If we speak about the medium and long term, instead, tests and code readability make a huge difference.

Anyway, the approach is always the same: I focus on the most important parts to improve (for instance, in a distributed company with a lot of devs, tests are more important than the readability of the code itself, TypeScript discriminated unions are more important than code indentation, etc., especially if you consider the advent of Copilot and similar tools) and:

  1. I act as a model: most people do not have strong opinions, and when they see "the quality" of how more seasoned devs work they tend to emulate.
  2. I propose things: I get in touch with the authors of the code, I propose improvements, I jump on a sync call to discuss them, I listen to the proposals of the others, and I also show/demonstrate the added value of my approaches.
  3. I do some refactors all on my own and I jump on a call to discuss them with the author. Please note that, in this case, it's important to leave the original code as is, even if it leaves room for improvement and even if I refactored it. From the authors' perspective, you are respecting their work if you do not change it. The next time, there is a chance they will follow your suggestions because they know you respect them.
  4. I keep track of real-life examples that show that I'm right, and I keep them in a separate txt of mine. Then, when needed, I can recall them and point people there. It's hard to deny the evidence 😊

All the above means accepting that 90% of your suggestions (especially if you are a nitpicker) will not be considered at all... But the remaining 10%, the most important ones, maybe yes. And it's a great exercise for me too! Because, as a perfectionist, I always need to learn more and more that not everything has the same importance.

What do you think? Do you have different direct experiences? 😊

Omri Lavi

Thank you for the detailed reply!

in a distributed company with a lot of devs, tests are more important than readability of the code itself

I never thought about it, but I totally agree. In larger companies, each team usually owns its own codebase. They usually know who to approach when needing clarifications about the code. What they usually don't know is which part will break when something changes - that's where the importance of tests really shines.

I really like the 90%-10% approach, it makes a lot of sense to me. In some way, it can be parallelized to an important skill of a good developer: understanding what's important, and compromising where needed (e.g. on a quality vs. speed consideration).

I also think of myself as a perfectionist, but I try taking a pragmatic approach. Usually when I review a PR, I tend to be very pedant, and leave comments about "smaller" things as well. However, I make sure to emphasize what's important and what's not, and on the summary note I explicitly say what needs to change to get an approval from my end. I make sure not to become a burden, otherwise people will avoid approaching me.
When taking part on "live" sessions (e.g. design reviews), I try being more "nice", considering what's really important to me, and start by raising only these issues. In some ways, it's a bit harder than doing it "offline", since you need to analyze the details quickly. On the other hand, since it's usually face-to-face, people tend to be more receptive to the feedback.

What's your experience on this regard?

P.S.
I really like the content you write, both the subjects and the style. Keep up the amazing work!

Stefano Magni • Edited

Sorry for the delay, I was on vacation 😊

I really like the 90%-10% approach, it makes a lot of sense to me. In some way, it can be parallelized to an important skill of a good developer: understanding what's important, and compromising where needed (e.g. on a quality vs. speed consideration).

Could you tell me how you all use it in your company? 😊

However, I make sure to emphasize what's important and what's not, and on the summary note I explicitly say what needs to change to get an approval from my end.

That's an interesting approach I did not think about. Here, I always use Conventional Comment to express the importance of every single comment (since most of them are nitpicks).

When taking part on "live" sessions (e.g. design reviews), I try being more "nice", considering what's really important to me, and start by raising only these issues.

I do the same 😊

In some ways, it's a bit harder than doing it "offline", since you need to analyze the details quickly.

I have the same problem; usually my mind needs a bit more time to analyze things, and a lot of times I get back to the other dev in the next hour with more thoughts 😊

On the other hand, since it's usually face-to-face, people tend to be more receptive to the feedback.

Also: it requires less time to convince people face to face than async, IMO 😊

I really like the content you write, both the subjects and the style. Keep up the amazing work!

Thank you so much, it means a lot to me 🤗

Omri Lavi

Hey Stefano, Thank you for the reply! I hope you had a good vacation 😊

Could you tell me how you all use it in your company?

I'm not sure if it's used by everyone in the company. Personally, I keep a mindset similar to what you described: I understand people have a lot on their plates, and not all my suggestions can be applied. Since I understand that 90% of my suggestions may not be implemented, I try hard to find the 10% that are crucial and should not be ignored (as I see it). It's a matter of prioritization and compromise 😊

... I always use Conventional Comment ...

Wow, that's a great convention, I'd never heard of it! I think that using it depends heavily on the company's culture and size. In a smaller company, I believe it's easier to get a broader agreement about such a convention. In a medium or large company, with different groups and sites, it's very likely that not everyone will agree about its benefits. This may add friction in a worse way than PR comments with no convention 😆
I wonder how larger companies integrate with such conventions in a way that is accepted by most of the workers... Do you happen to know about such processes?

... lot of times I get back to the other dev later in the next hour with more thoughts

I need to start doing this more 😆 I find myself many times trying to provide a solution quickly, just so I won't have to deal with another thing on my plate. (And perhaps since I want to be seen as "the guy with the answers" 😋). I should follow your example more often, and take the time offline to think of an answer.

Stefano Magni

I wonder how larger companies integrate with such conventions in a way that is accepted by most of the workers... Do you happen to know about such processes?

I can tell you that, in my experience, it simply happened organically. Those who leave 0/1 comments do not use it. Those who leave a lot of comments start using it when they see you using it. I think that's the best approach, instead of pushing it to everyone 😊

I should follow you as example more often, and take the time offline to think of an answer.

FYI: this takes time, obviously 😊 and I need to balance when to do it and when not otherwise it could ruin my days in a while 😊

Webber Takken

Excellent article. Thank you for sharing!

A quick note about your remark on ESLint warnings:

ESLint warnings are useless, they only add a lot of background noise and they are completely ignored.

Note that ESLint warnings can show different squiggly lines (yellow instead of red) in your IDE, meaning you can keep coding and fix them later, as they're less distracting. They're only useless if you don't enforce them being fixed eventually.

I would recommend using `eslint src --ext ts,tsx --max-warnings 0` as a script in package.json and invoking that from a CI workflow. You could also add a precommit hook for faster feedback on staged files (explained) to improve developer experience.

Stefano Magni • Edited

Thank you, Webber! Did the "warnings during dev and errors in CI and on pre-push" approach work well for your team/teams? I would prefer to set a strong alert from the beginning, so the developers know that they are hiding the dust and need to fix the problems in the short term, compared to giving soft warnings and then errors... But I'm very curious about the pros and cons you found with your approach!! 😊

Webber Takken

The yellow ones are clearly less daunting and can be helpful abstracting over the exact syntax of your implementation momentarily, which reduces cognitive complexity and allows focusing on the feature at hand.

Errors for no-console, react-hooks/exhaustive-deps and no-unused, to name a few, can be distracting during development, especially if you can not differentiate them from more structural problems.

Blocking them at pre-commit just means you have to fix them before committing. Therefore it does the same thing as you're describing, just with a bit more nuance: multiple colours, but all need to be fixed before committing.

Stefano Magni

It makes sense, thank you 😊

Craig McNicholas

Nice article, you pretty much encapsulate all my experiences.

To expand on your typed unions example I think this goes further into correct data modelling techniques.

Too often do I find hacky procedural scripting-like behaviour when people model front end data models but you should be taking as much care as your db, API etc. If application state is correctly modelled it makes the decision of what to guard against or implement in the resulting component so much simpler and unambiguous. Too many front end Devs don't appreciate good OO in this case and it hurts as the apps grow/scale.

Stefano Magni

I 100% agree, thanks for sharing it 👏👏👏

I only have front-end experience, so I can't say if it's something specific to front-end devs or simply a lack of a "great" mindset, independently of front-end or back-end

Bobby Connolly

I really like your approach and learned some things from your article like the discriminated union.

Personally, I often tell typescript to shut up and turn off noImplicitAny and strictNullChecks. However, I work alone and understand my "loose TS shrinkwrap." I really love how TS infers the return types and I don't mind typing my parameters for the most part.

Stefano Magni

However, I work alone

This changes a lot of things. My approach was the same as yours when I was almost working alone on the front-end; I changed my mind when I needed to ensure everything was stable and secure for a lot of devs other than me 😊

Matías Herranz • Edited

Great article! I agree on most points, and wanted to emphasize how crucial code style and uniformity of approach is imo. You should ideally never get to discuss these concerns on a PR level, but enforce them before, with automated tools (prettier, eslint, etc).

Stefano Magni

I agree, one of the next things I want to study is AST for leveraging ESLint for more advanced cases than the ones provided by the various (and great) plugins 😊

Chun Ting Liu

This immediately becomes one of my favorites and a reference. 👏 Awesome!

Stefano Magni

Thank you, I'm glad you appreciated it 😍

Stephen Dicks • Edited

Was there a reason why you spent 2.5 years in a massive codebase with no tests? Did no-one think about writing some?

Stefano Magni

2.5 years because the project was huge; no tests because... we had no time to also invest in tests. By "no time" I mean that we had to carefully choose what to invest in and what not. The team was only partially ready for tests, the whole project was a big and long R&D process, and tests do not play a good role in an R&D phase.
We heavily invested in TypeScript and shared patterns instead, and even if they are very different from tests, we reached the end goal of always guaranteeing solidity and almost never introducing bugs.

From a testing-oriented one like me, working with such complexity without tests has been a formative experience 😊

Stefano Magni

More: yes, we thought a lot about writing tests and we started writing them, but the R&D nature and the complexity of the project (web workers, web sockets) forced us to pause the investment there, because we needed more time than we had

Stephen Dicks

I'm sure you know this, but if you have an R&D project, you try and spend a week or two spiking and experimenting, and then get serious writing tests (and abstractions, all the normal good practices). Your 2.5 year project would probably have got more done sooner. Tests aren't an 'investment' apart from the smallest most trivial throwaway project - and even those have a habit of becoming projects

Stefano Magni

Consider that 50% of the project was an R&D that could be validated only after six months of work (with big initial research, it's true, but in the end, the full project was an R&D one). As a testing fan I would agree with you but... let me elaborate on the kind of tests that we could have added:

  1. E2E: prohibitive, they do not scale well in general, but especially on this project. Moreover, the QA team was already covering them, and overlapping did not make sense

  2. Full-front-end tests without the back-end (Cypress/Playwright tests with a mocked back-end): 100% of the communication happened via websocket, and there are no great (there are good, but not great) plugins out there to simulate this scenario. Moreover, mocking the server and all its peculiarities (even small chunks of the server, based on the test's scope) would have been really hard. Last but not least, the 15 MB bundle does not play very well with browser tests (I also wrote something about optimizing Vite for browser tests here).

  3. Integration tests for the UI part: not very useful since the UI was really dumb

  4. Integration tests for the "server data", the application running in the web worker that is a sort of BFF: this would have made sense, but again simulating/mocking the server and also having readable tests was hard

  5. Integration/unit tests on part of the "server data" application: it would have made a lot of sense! I 100% agree!

But let's take a step back for a moment and let's think about why you accept the complexity of tests (more dependencies, more patterns, something hard for a lot of devs, more "rigidity" in the codebase, etc.): you add tests to

  1. Prevent regressions
  2. Allow refactoring
  3. Document what the code does

Did we have regressions? Almost never!

The (small number of) regressions we encountered would have been prevented by tests. Partially.

Did we refactor big chunks of code and logic? At least on a monthly basis, without regressions.

Did we think about writing tests? A lot of times, and we did some spikes, but it was really hard because of the generator-based (Redux Saga) nature of the "server data" application.

Did we have other urgencies? Yes, if you consider that I did the migration from Webpack to Vite and from Recoil to Valtio during my weekends.

Why did the company not push for having tests? Mostly because the original plan for the optimizations was 6 months... and it turned into a two-year full rewrite.

Please note that I'm not saying tests were useless! But, for this particular use case, the complexity and the context made the answer to "Should we write tests?" less obvious. And I learned a lot: I can now refactor big applications without any tests while almost never introducing bugs 😊

Signed by: a testing fan and instructor 😊

Anneta Wamono

Lots of great advice in this article. Though I haven't worked on a large frontend codebase before, I felt like I could take some of your points into the smaller projects I work on.

Stefano Magni

Sure, that was the goal! So when the project scales, you will hopefully have fewer problems or, at least, you'll know some of them before they happen 😊

React Hunter

Thank you

Charlie Schliesser

Thank you for sharing, there’s a lot of gold to glean from this.

Stefano Magni

You're welcome, thank you for the nice feedback 🤗

Cameron Sherry

Ok I was honestly blown away by the typed unions. I’ve never seen anything like that but in hindsight it makes perfect sense. Going to be using that everywhere now!

Stefano Magni

I see your excitement! They are so solid, so expressive, so ambiguity-free that their wider counterparts (just strings and optional properties) feel like... Not using TypeScript at all... 😊