
DadCod

Originally published at dadcod.es

From Greenfield to Minefield and back



The dream

Imagine starting on a brand-new project. It's a blank canvas, no legacy code, no one to blame.
It's up to you and your team to build a state-of-the-art application.
As months go by and new colleagues join, they open the project and can't help but stop every few minutes, not to troubleshoot,
but to admire the code you've crafted.


The result

Unfortunately, this ideal scenario is rare, but perhaps the insights I share here can help make it a reality for you.
Let's walk through our project's timeline together and identify potential pitfalls.


DISCLAIMER

This isn't a step-by-step guide on how to build and scale a project from scratch.
Instead, I'll share our journey, the key decisions we made, the problems we encountered, and our solutions.
The focus will be on technical aspects, and while touching upon architectural topics,
I'll avoid definitive statements, offering a more philosophical perspective instead.


Project timeline

Using the project timeline as a framework, I'll highlight key decisions, milestones, team size, and challenges we faced.


Choosing a framework

This is one of those controversial topics I won't go into in much detail, but I will share my experience and the reasons we chose Angular. For context, the project is a web application used by doctors and patients: a medical platform designed to help doctors manage their patients' health.

The project began in late 2019. The initial members of the team were mostly backend (BE) oriented, with experience in Java and Spring. Angular was a logical choice for us because its mental model is similar to Spring's (class-based components, services, dependency injection), and TypeScript feels familiar coming from Java. As a small team, the assistance of BE developers was invaluable, making Angular a sound choice.

Another important point is that we don't have dedicated architects. We are all equally responsible for the architecture of the project. When making a technical decision, we follow an RFC (Request for Comments) process. Typically, the person with the most experience and interest in the topic writes the RFC, and then we discuss it as a team to make a decision. This approach has its pros and cons. On the plus side, everyone is involved in the decision-making process and takes responsibility for the outcomes. However, it can be time-consuming, challenging to please everyone, and sometimes there's no clear right answer.

In this case, Angular also made sense because it was quite mature and opinionated at the time. For instance, if you need a router, Angular provides a built-in one; for HTTP requests, there's a built-in service; and for testing, it's all built-in as well. Had we chosen React, we would have had to spend more time on the initial setup and make more decisions about the project's architecture.
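To make "opinionated" concrete: routing and HTTP ship with the framework itself, so a component can use them with no extra dependencies. A minimal sketch (the component and endpoint are made up, not from our codebase):

/* patients.component.ts — illustrative sketch */
import { Component } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Router } from '@angular/router';

@Component({ selector: 'app-patients', template: '<!-- ... -->' })
export class PatientsComponent {
  // Both services come with Angular — no third-party router or
  // HTTP client to evaluate, version, and wire up.
  constructor(private http: HttpClient, private router: Router) {}

  openPatient(id: string) {
    this.http.get(`/api/patients/${id}`).subscribe(() => {
      this.router.navigate(['patients', id]);
    });
  }
}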

Moreover, Angular had been quite stable for a couple of years, with new updates being backward compatible (a lesson they likely learned from AngularJS). In contrast, React was constantly evolving (e.g., class components then functional components, hooks, server components, etc.), with many breaking changes.
A great article on that topic is this one.

The other options at that time didn't seem as reliable.

About frameworks

“…web dev is a pop culture with no regard for history, dooming each successive generation to repeat the blunders of the old, in a cycle of garbage software, wrapped in ever-escalating useless animations, transitions, and framework rewrites.”
— a quote from a favourite blog post

The framework is just a tool that helps you get the job done. Think about what best suits the business case and helps you build the features you need. It's easier to learn a new framework than to make the wrong one fit your case.


Choosing a testing framework

As I mentioned earlier, Angular comes with a built-in testing setup that uses Karma.
Karma is a test runner that executes tests in a real browser.
Spinning up and driving browser instances adds overhead, so test runs tend to be slow.
Therefore, we decided to switch to Jest, a test runner that runs in Node.js from the command line.
Jest is generally easier to integrate with CI/CD pipelines and is reputed to be faster. It also offers a lot of features right out of the box.
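For reference, a minimal Jest setup for an Angular workspace might look roughly like this — a sketch assuming the community jest-preset-angular package, not our exact configuration:

// jest.config.js — minimal sketch
module.exports = {
  preset: 'jest-preset-angular',
  // Runs before each test file; wires up the Angular testing environment
  setupFilesAfterEnv: ['<rootDir>/setup-jest.ts'],
  testPathIgnorePatterns: ['/node_modules/', '/dist/'],
};

Here setup-jest.ts does nothing but import the preset's environment setup (the exact import path has changed between preset versions).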

To be honest, I'm not entirely sure if we made the right choice. We found ourselves spending a considerable amount of time optimizing the tests to run faster.


In my experience, testing is often overlooked, but it's crucial, especially in large, evolving projects.
Setting up a solid testing framework from the start makes writing tests easier and more effective.

Choosing repository structure

The company grew, and I joined 🎉. We split into two teams and faced the decision of choosing an approach for our repository.
We opted for a monorepo, bringing together the backend (BE) and frontend (FE) code.
This strategy allowed us to move swiftly, enabling us to create merge requests (MRs) that encompassed changes in both areas.
It facilitated easy code sharing, as we had a shared library for common code.


We chose nx as our tool to manage the monorepo.
It offers several beneficial features, such as enforcing module boundaries, creating a dependency graph, and building and testing only the projects affected by changes.
The configuration process is straightforward: you simply add your project to the nx.json file, and you're all set.
Additionally, nx provides linting and formatting right out of the box.

/* nx.json */
{
  // ...
  "implicitDependencies": {
    "angular.json": "*",
    // ...
    ".eslintrc.json ": "*"
  },
  "projects": {
    "patient-portal": { "tags": [] },
    "staff-portal": { "tags": [] },
    "logging": { "tags": [] },
    "refmd-portal": { "tags": [] }
  }
  // ...
}


// .eslintrc.json
"@nrwl/nx/enforce-module-boundaries": [
  "error",
  {
    "enforceBuildableLibDependency": true,
    "allow": [],
    "depConstraints": [
      {
        "sourceTag": "*",
        "onlyDependOnLibsWithTags": ["*"]
      }
    ]
  }
]

Testing

As our team grew, we began developing a new part of the product and needed to add another project.
To make it easier, we created a shared library for common code, so we could quickly reuse components, services, etc.
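Concretely, the shared library revolved around a single catch-all Angular module, roughly like this (a simplified sketch; the declarations are placeholders):

/* shared.module.ts — simplified sketch */
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

@NgModule({
  imports: [CommonModule],
  // Every reusable component, directive, and pipe ended up declared here...
  declarations: [/* ButtonComponent, DatePickerComponent, ... */],
  // ...and re-exported, so any project could import SharedModule and get it all.
  exports: [CommonModule /* , ButtonComponent, DatePickerComponent, ... */],
})
export class SharedModule {}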


But this method had its limits. It was fine at first, but as we grew, it started to cause some problems, especially with testing.
In Angular, you can write two kinds of tests. One is pure unit tests, where you test a component in isolation and mock the parts it depends on.
The other is integration tests, where you test the component together with its connected parts, mocking only the boundaries such as HTTP requests.
We often found integration tests more useful because they let us see how components worked together in the app.

The trouble was, for integration tests, we had to bring in all the modules a component used.
This got tough when a component used a lot of modules.
The easiest way seemed to be using the shared module, so we could get all dependencies in one go.
But this meant we were bringing in a bunch of stuff we didn't really need, and that problem started to become pretty obvious...


The main lesson for us was to keep a close eye on our tests, especially how long they take to run.
If the test time starts increasing, we need to understand why.
We found ourselves having to redo our tests, making sure we only used the modules we really needed.
We also got stricter with reviewing merge requests for unit tests, to avoid including modules we didn't need.
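In a typical spec, the fix looked like this: import only the modules the component under test actually needs, instead of the catch-all shared module (a sketch; the component under test is hypothetical):

/* patient-card.component.spec.ts — sketch */
import { TestBed } from '@angular/core/testing';
import { HttpClientTestingModule } from '@angular/common/http/testing';

beforeEach(() => {
  TestBed.configureTestingModule({
    // Before: imports: [SharedModule] — dragged in every shared dependency.
    // After: just what this spec exercises.
    imports: [HttpClientTestingModule],
    declarations: [PatientCardComponent], // hypothetical component under test
  });
});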


Library strategy

With our monorepo setup, we had to be extra careful about how projects depended on each other.
Our shared library turned into a kind of catch-all place where everything ended up.

Anyhow, this big-brain library strategy started to "shine" when we had to add yet another project.


Now, the shared library started to become a mess. There was stuff that Project1 and Project2 needed but Project3 didn't, and vice versa.


We needed to divide the shared library into two parts: one for sharing resources across all projects, and the other for sharing between specific projects.


This approach wouldn't scale well. Although it was easy at first to keep up the pace, it presented some challenges.
We were finally forced to rethink our strategy.

To sum up the takeaways:
Having one large shared library is definitely not the way to go if you want to scale. You end up not only importing things you don't need but also making it hard to find stuff.


So what is the solution? As with everything in software development, it depends, and there are tradeoffs.
The engineers on the team had all worked with different approaches, and we had to find common ground.
The first and most common one that we all had experience with was one library per component.


My personal experience with that approach was quite extreme: one library per component, each separated into a different repo and exposed as an npm package in an internal registry.

It was a nightmare to develop, test, and maintain. I recall npm link often not working as expected. Versioning wasn't clear, and the release process was cumbersome.
In the end, it typically turned out that every consumer needed the latest version anyway.
I believe others had similar experiences, leading us to decide on a different approach.


We evaluated the DDD (Domain-Driven Design) approach, as we were already using it in some areas of the backend.
The advantage of DDD is its ease in separating domain logic, but this is effective only with clearly defined context boundaries.
In our case, we were concerned about the potential for excessive overhead.
We feared frequent refactoring and moving code around just to align with the DDD principles.
Therefore, we decided to explore the nx approach.


We would organize libraries by context. More specifically, we would have feature, UI, data access, and utility libraries.
This way, if we have some clearly defined bounded contexts, we can encapsulate them into fully functional feature libraries.
Otherwise, we can distribute reusable blocks across UI and data access libraries.
This approach seemed like a good middle ground.
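A nice property of this layout is that nx can enforce it mechanically: tag each library by type and constrain dependencies in the lint rule we were already using. A sketch (the tag names are illustrative, not our exact setup):

// .eslintrc.json — boundary rules for type-based libraries (sketch)
"@nrwl/nx/enforce-module-boundaries": [
  "error",
  {
    "depConstraints": [
      { "sourceTag": "type:feature",
        "onlyDependOnLibsWithTags": ["type:ui", "type:data-access", "type:util"] },
      { "sourceTag": "type:ui",
        "onlyDependOnLibsWithTags": ["type:ui", "type:util"] },
      { "sourceTag": "type:data-access",
        "onlyDependOnLibsWithTags": ["type:data-access", "type:util"] },
      { "sourceTag": "type:util",
        "onlyDependOnLibsWithTags": ["type:util"] }
    ]
  }
]

With rules like these in place, a UI library importing a feature library fails the lint step instead of quietly slipping into the codebase.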

The key takeaway here is: once you need to share code between projects, it's crucial to think about structure.
If you don't, you're merely postponing the problem.


Real-World Testing

We got back on track and continued developing features, resulting in a functional part of the product.
It was then time to test it with real users. This step was crucial to verify our assumptions and guide us in prioritizing upcoming features.


We didn't have to wait long before we got the first unexpected feedback.
It became apparent that internet reliability in the medical centers wasn't as high as we had anticipated.
Consequently, we needed to enhance the app's performance, ensuring it functioned smoothly even with poor internet connections.
Specifically, we focused on improving load times and managing request timeouts.


Improving the FCP (First Contentful Paint) was a top priority, and a logical starting point was the bundle size.
Our investigation into the bundle size revealed several surprises.
Notably, about 20% of the bundle size came from moment.js, as our import method wasn't tree-shakeable.
This meant we were including all the locales in our bundles.
Additionally, for some of the npm packages like rxjs and lodash, not using es6 imports meant we had to include the entire package.
These accounted for another approximately 7% of the bundle size.
Therefore, by switching to es6 imports and properly tree-shaking moment.js, we could potentially reduce our bundle size by around 27%.
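The fixes were mostly mechanical: replacing namespace-style imports with tree-shakeable ES module imports. A before/after sketch:

// Before: defeats tree-shaking — the whole package lands in the bundle
import * as _ from 'lodash';
import * as moment from 'moment'; // and moment drags in every locale

// After: import only what's actually used, from ES module builds
import debounce from 'lodash-es/debounce';
import { map, filter } from 'rxjs/operators';

Moment itself can't be fully tree-shaken; the usual options are stripping the locale files at build time (e.g., with webpack's IgnorePlugin) or migrating to a tree-shakeable alternative such as date-fns.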


Another method to enhance UX is by achieving a faster FP (First Paint).
One option is using SSR (Server Side Rendering), but it introduces considerable complexity.
It requires a node server to render the app, necessitates working around browser APIs, and calls for cautious handling of URLs, among other challenges.
Alternatively, we could manually inline the critical CSS and JS. We opted for this second approach, as it was simpler to implement and maintain.


Another area for improvement involved request timeouts.
Initially, if a request timed out, we displayed a generic error, leaving the user unaware of the specific problem.
To address this, we decided to implement a linear backoff strategy. This approach allowed us to retry the request a few times before presenting the error to the user.


intercept(request: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
  // Look up how many times this request has already failed
  const id = request.headers.get(RequestIdHeader);
  const requestRetries = this.failedRequestsService.getFailedRequest(id)?.retries || 1;
  // Linear backoff: grant each retry a proportionally longer timeout
  return next.handle(request).pipe(timeout(requestRetries * backoffInterval));
}
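The snippet above only stretches the timeout; the retry half lives elsewhere. Conceptually it looks something like this — a hedged RxJS sketch, with maxRetries and backoffInterval as illustrative constants:

/* linear-backoff.ts — conceptual sketch, not our exact code */
import { Observable, throwError, timer } from 'rxjs';
import { mergeMap, retryWhen } from 'rxjs/operators';

const maxRetries = 3;         // illustrative
const backoffInterval = 2000; // ms, illustrative

// Usage idea: withLinearBackoff(next.handle(request))
function withLinearBackoff<T>(source$: Observable<T>): Observable<T> {
  return source$.pipe(
    retryWhen(errors =>
      errors.pipe(
        // attempt is 0-based: wait 2s, then 4s, then 6s, then give up
        mergeMap((error, attempt) =>
          attempt < maxRetries ? timer((attempt + 1) * backoffInterval) : throwError(error)
        )
      )
    )
  );
}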

The Takeaways:

  • Review the bundle size every time you add a new npm package.
  • Stay vigilant about core web vitals.

And to share another favorite quote of mine (from the same blog post I first linked to):
"Tight feedback loops are magic: live reloading is magical. Hot module replacement less so. With live reload, your development browser automatically reloads your dev page when your code changes. If you suck and your page loads slowly, then you suffer. Hot module replacement hides the pain and lets you pass the suffering onto the user."


State management

The project was expanding rapidly. We were adding numerous features, components, and services.
Eventually, we encountered our first complex state management issue.

Image description

Up until then, we had used a basic approach: employing a service that maintained the state and exposed it as an observable.

import { BehaviorSubject } from 'rxjs';

export class StateService {
  // Single source of truth, exposed to consumers as an observable
  private state = new BehaviorSubject<any>({});
  state$ = this.state.asObservable();

  setState(state: any) {
    this.state.next(state);
  }
}

However, we then faced a feature for enriching PDF documents, which involved linking and categorizing pages, adding notes, creating entities for each page, and more.
This task turned our simple approach into a tangle of complex RxJS statements.
We lacked agreed-upon rules for things like keeping the state immutable and handling side effects.
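To illustrate the missing rules with a contrived example: nothing prevented a consumer from mutating the shared state object in place, so other subscribers saw changes with no emission to react to.

import { BehaviorSubject } from 'rxjs';

interface AppState { pages: string[] } // simplified, illustrative shape
const state = new BehaviorSubject<AppState>({ pages: [] });

// Without agreed rules, this compiles and "works":
state.value.pages.push('page-1'); // in-place mutation — no new emission

// The discipline every engineer had to remember unaided — replace, never mutate:
state.next({ ...state.value, pages: [...state.value.pages, 'page-1'] });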


We had to rethink our approach to find a solution suitable for such complicated scenarios.
All of us had previous experience with Redux in the form of NgRx, but we weren't fans of it.
There was too much boilerplate, too much ceremony, and an excessive amount of code for simple tasks.
It required every engineer to be well-versed in its use, and even then it was easy to create a chaotic mess.
So, we decided to search for an alternative.


We discovered Akita, which was lightweight, required minimal boilerplate, and integrated well with Angular's services.
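To give a feel for the boilerplate level, here is a minimal Akita store and query based on its documented API (the state shape is illustrative, not our real document model):

/* documents.store.ts — sketch */
import { Injectable } from '@angular/core';
import { Query, Store, StoreConfig } from '@datorama/akita';

export interface DocumentsState {
  selectedPageId: string | null; // illustrative
}

@Injectable({ providedIn: 'root' })
@StoreConfig({ name: 'documents' })
export class DocumentsStore extends Store<DocumentsState> {
  constructor() {
    super({ selectedPageId: null }); // initial state
  }
}

@Injectable({ providedIn: 'root' })
export class DocumentsQuery extends Query<DocumentsState> {
  selectedPageId$ = this.select(s => s.selectedPageId);
  constructor(protected store: DocumentsStore) {
    super(store);
  }
}

Updates are then one-liners like this.store.update({ selectedPageId: id }), with immutability handled by the library — a far cry from NgRx's actions, reducers, and effects.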


Overall, I'm glad we managed to survive for so long without a dedicated state management library.
Navigating through the complexities and challenges of state management without this tool taught us invaluable lessons.
We experienced firsthand the pitfalls to watch out for and gained a deeper understanding of the essential features and functionalities we truly needed in a library.
This journey wasn't just about coping with the immediate difficulties; it was a learning curve that equipped us to make more informed decisions when selecting tools in the future.


Along the way, we encountered numerous other decisions and challenges.
These included questions like how to handle styling, logging, and observability, or whether to use MFE (Micro Frontends).
We even tackled implementing our own design system (which I'll discuss in another article) and delved into improving UX and exploring different rendering strategies (also covered in a separate article).
The journey was filled with such considerations.


I hope the insights shared in this article will help you sidestep some of the mistakes we made.
Ideally, your greenfield project will remain flourishing for a longer period.
And even if it does turn into a minefield, now you'll have a better idea of how to navigate through it.
