Building a Modern Full-Stack MonoRepo Application: A Journey with GraphQL, NextJS, Bun, and AWS

Welcome to my exploration of building a modern full-stack application using a monorepo approach. With over 20 years of experience in web development, including roles at NBA.com, Adult Swim, and Goodr, I've had the opportunity to delve deeply into these various technologies. I just never took the time to write them up. And since I worked for companies with closed-source code and NDAs, it made sense to build something from the ground up!

"Ricks Associated with their Morties" is a NextJS app running on CloudFront by way of Amplify, backed by a GraphQL Apollo Server running on Lambda@Edge, both living in a monorepo that at one point used Bun and then switched back to Yarn.

This blog post is a journey through a project leveraging GraphQL, Apollo Server, AWS Amplify, Lambda @ Edge, monorepos, and Progressive Web Apps (PWAs), demonstrating practical applications and insights gained along the way.

Project Overview


This project was conceived as a coding challenge and a demonstration of integrating modern web technologies into a cohesive application. The core of this project involves leveraging GraphQL to interact with the open-source Rick and Morty API, a task I often set for potential hires.

This task was not just a technical exercise but also an opportunity to showcase my expertise in federating data via GraphQL, developing PWAs with NextJS, and experimenting with Bun, especially in relation to Express, Docker, and Serverless technologies. And as I write this blog entry, I am expanding my GraphQL resolver to include the Morties from the PocketMorties game.

One of the unique aspects of this project was creating associations between Ricks and their corresponding Morties within the API—a relationship that doesn't exist in the original data sources. This challenge was an exciting way to demonstrate the power of GraphQL and my experience in crafting complex data relationships.

In addition to GraphQL, the project also focused on exploring the capabilities of Bun, particularly its speed in compilation and Docker compatibility. However, practical challenges led to a pivot back to Yarn, highlighting the importance of choosing the right tool for the job and balancing efficiency with stability.

Deep Dives into Key Technologies

1. Bun: Performance and Design Philosophy

Bun emerged as a compelling choice for this project because it differentiates itself significantly from traditional package managers like npm, pnpm, and Yarn. It's built in Zig, a language known for its performance and safety, allowing for rapid task execution. Key features include:

  • Speed: Bun's core feature is its speed across operations, from installing packages to running scripts, significantly outpacing traditional package managers.
  • Concurrency Model: Unlike npm and Yarn, Bun adopts a concurrency model similar to languages like Go, enabling efficient resource utilization and faster I/O-bound tasks.
  • Package Installation: Bun's approach to package installation, involving a global cache and concurrent downloading, contrasts with the more linear methods of npm and Yarn.

Despite these advantages, practical challenges, such as a GraphQL integration issue, led me back to Yarn, emphasizing the need for stability in web development.

2. AWS Amplify: Simplifying Cloud Integration

The Role of AWS Amplify in the Project

AWS Amplify played a critical role in this project, streamlining the deployment and management of cloud services, and offering an integrated approach for both backend and frontend development. Amplify's suite of tools and services provided a cohesive platform that significantly simplified complex cloud operations.

Streamlined Workflow and Integration

  • Streamlined Workflow: Amplify's ability to abstract the complexities of cloud infrastructure provided a more user-friendly approach compared to managing custom Docker scripts. Its automated CI/CD pipelines facilitated continuous integration and delivery, ensuring a smooth deployment process.
  • Backend and Frontend Integration: Amplify's integration with backend AWS services and frontend optimizations, like server-side rendering and edge deployment, significantly enhanced the application's performance and user experience.

Cloud Storage, Delivery, and Security

  • Cloud Storage and Delivery: Amplify utilized AWS S3 for storing front-end assets, ensuring high durability and availability. The integration with Amazon CloudFront, AWS's CDN, allowed for efficient content delivery, reducing latency and improving load times globally.
  • SSL and Custom Domains: The provision of SSL encryption and support for custom domains enhanced the security and brand identity of the application, making it more professional and trustworthy.
Amplify's Developer-Friendly Nature

Amplify's developer-friendly interface, backed by extensive community support, made it a superior choice over custom Docker builds. This approach not only optimized the development process but also aligned with best practices for cloud-based applications.

Insights Gained from CloudFormation Designer Template

The project's use of AWS CloudFormation, visualized in the CloudFormation Designer template, represents the infrastructure as code (IaC) aspect of our deployment. This template is a visual representation of the serverless architecture, encompassing various AWS services and configurations:

  • S3 Bucket Configuration: It includes settings for an S3 bucket, essential for storing deployment packages.
  • Lambda Functions and IAM Roles: The template details Lambda functions and their associated IAM roles, outlining the permissions and policies necessary for secure and efficient function execution.
  • API Gateway: It illustrates the setup of the API Gateway, which acts as the entry point for the application's backend, handling requests and routing them to the appropriate Lambda functions.
  • Logging and Monitoring: The inclusion of log groups and CloudWatch roles highlights the focus on monitoring and logging, crucial for maintaining application health and performance.

The CloudFormation Designer template is a testament to the robustness and scalability of the AWS infrastructure utilized in the project. It showcases the intricate setup of serverless components, emphasizing the project's commitment to leveraging AWS services for optimal performance and security.
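For readers who prefer code to diagrams, here is a rough sketch of the same serverless components expressed with the AWS CDK in TypeScript. The project itself used a generated CloudFormation template rather than CDK, so treat the construct names below as illustrative assumptions:

```typescript
// Illustrative AWS CDK (TypeScript) sketch of the components described
// above -- not the project's actual CloudFormation template.
import { Stack, StackProps, RemovalPolicy, Duration } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import * as logs from 'aws-cdk-lib/aws-logs';

export class GraphqlServerlessStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // S3 bucket for deployment packages.
    new s3.Bucket(this, 'DeploymentBucket', {
      removalPolicy: RemovalPolicy.RETAIN,
    });

    // Lambda function running the GraphQL handler; CDK generates the
    // IAM execution role and policies automatically.
    const apiFn = new lambda.Function(this, 'GraphqlHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'server-lambda.handler',
      code: lambda.Code.fromAsset('dist'),
      timeout: Duration.seconds(30),
      logRetention: logs.RetentionDays.TWO_WEEKS, // CloudWatch log group
    });

    // API Gateway as the backend's entry point, routing requests to Lambda.
    new apigateway.LambdaRestApi(this, 'GraphqlApi', { handler: apiFn });
  }
}
```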

3. GraphQL: Enhancing API Interactions

The heart of this application lies in its use of GraphQL. The custom rickAndMortyAssociations function within the GraphQL schema highlights the ability to layer new relationships onto existing APIs. This implementation demonstrates GraphQL's power in crafting flexible and efficient data queries, offering significant enhancements over traditional REST APIs.

Case Study - The GraphQL Implementation

Crafting the GraphQL Queries

In this project, the GraphQL queries were not just a tool for data retrieval; they were the linchpin for transforming and enriching data from diverse sources. The schema and resolvers were designed not only to fetch data from the Rick and Morty API but also to incorporate data from the Pocket Morties game, demonstrating GraphQL's ability to federate disparate data sources.

A Refresher On How GraphQL Works

At its core, GraphQL is more than just a query language; it's a powerful tool for API design and data federation. Unlike REST APIs, which require multiple requests to fetch different types of data, GraphQL allows for fetching all necessary data in a single request. This capability is particularly beneficial in situations like this project, where data from fundamentally different sources needs to be federated and presented in a unified format. This approach significantly improves performance, especially on slow mobile network connections, by reducing the number of required network requests.
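To make that concrete, a single example query against a schema like this project's could fetch a Rick together with his associated Morties in one round trip, even though the data originates from two different upstream sources. The field names below are assumptions for illustration, not the exact schema:

```typescript
import { gql } from 'graphql-tag';

// One round trip fetches a Rick plus his associated Morties, even though
// the underlying data comes from two different upstream APIs.
// Field names here are illustrative assumptions.
const GET_RICK_WITH_MORTIES = gql`
  query RickWithMorties($rickId: ID!) {
    rickAndMortyAssociations(rickId: $rickId) {
      rick {
        id
        name
      }
      morties {
        id
        name
      }
    }
  }
`;
```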

rickAndMortyAssociations Functionality

The rickAndMortyAssociations function in the GraphQL schema was a custom implementation addressing the absence of direct associations between Ricks and Morties in the original API. This function showcases GraphQL's flexibility in data manipulation and presentation:

  1. Schema Definition: The schema defines the rickAndMortyAssociations type, specifying the structure and types of the data that can be queried.
  2. Resolvers: The resolver logic processes these queries, combining data from the Rick and Morty API with the Pocket Morties game. This involves fetching relevant data and then applying custom logic to create meaningful associations between Ricks and their corresponding Morties.
  3. Query Execution: When a query for rickAndMortyAssociations is made, the GraphQL server executes the resolver, returning a combined set of data that enhances the original API's capabilities with additional context and relationships.

The implementation of the rickAndMortyAssociations function is a prime example of GraphQL's power in federating and enriching data from multiple sources, creating a more comprehensive and nuanced data set that goes beyond the limitations of traditional APIs.
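Below is a minimal sketch of how such a schema and resolver could be wired up with Apollo Server. It is not the project's actual code: the type shapes, the fetchPocketMorties helper, and the pairing logic are stand-ins for illustration.

```typescript
import { ApolloServer } from '@apollo/server';

// Illustrative schema -- type and field names are assumptions.
const typeDefs = `#graphql
  type Rick { id: ID! name: String! }
  type Morty { id: ID! name: String! }
  type RickAndMortyAssociation { rick: Rick! morties: [Morty!]! }

  type Query {
    rickAndMortyAssociations(rickId: ID!): RickAndMortyAssociation
  }
`;

// Hypothetical stand-in for the Pocket Morties data source.
async function fetchPocketMorties(): Promise<Array<{ id: string; name: string }>> {
  return []; // the real project fetches and normalizes game data here
}

const resolvers = {
  Query: {
    rickAndMortyAssociations: async (
      _: unknown,
      { rickId }: { rickId: string }
    ) => {
      // Fetch the Rick from the public Rick and Morty REST API.
      const rick = await fetch(
        `https://rickandmortyapi.com/api/character/${rickId}`
      ).then((res) => res.json());

      // Pull in Morties from the second data source.
      const morties = await fetchPocketMorties();

      // Custom association logic lives here; the real project derives
      // meaningful pairings, while this sketch simply takes the first few.
      return { rick, morties: morties.slice(0, 3) };
    },
  },
};

export const server = new ApolloServer({ typeDefs, resolvers });
```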

Navigating IaC Challenges: Terraform to Lambda Shift

Journey from Fargate via Terraform to Lambda via Serverless

Challenges with Terraform in AWS Fargate and ElastiCache Setup

The initial approach to infrastructure involved using Terraform for configuring AWS Fargate and ElastiCache. While I've been using Terraform since its initial alpha days, I still encounter specific challenges as AWS and HashiCorp add new features. In this case:

  1. Health Checks in ECS and ELB: Configuring health checks within Elastic Container Service (ECS) and Elastic Load Balancer (ELB), which are crucial for ensuring service reliability, proved complex.
  2. Service Discovery Issues: Ensuring effective service discovery of ElastiCache within the same Virtual Private Cloud (VPC) as Fargate was intricate and required precise configuration.

Pivot to AWS Lambda

Given these challenges, a strategic pivot was made to AWS Lambda for its simplicity and speed; at this stage, having something to show interviewers and prospective employers mattered more than the particular technology I leveraged.

This shift was influenced by:

  1. Simplicity and Efficiency: Lambda's streamlined approach was better suited for our project's time constraints and requirements.
  2. Statelessness and Resource Optimization: The stateless nature of Lambda aligned with our needs, offering cost-effectiveness and efficient resource utilization.
  3. Project Progression Focus: The pivot to Lambda was a pragmatic decision to keep the project moving forward, balancing the complexities of Terraform with the functionalities required.
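For context on what that pivot looked like in practice, the Serverless Framework keeps the configuration compact. Here is a minimal serverless.ts sketch; the service name, region, and handler path are assumptions rather than the project's real values:

```typescript
// serverless.ts -- a minimal Serverless Framework config sketch for the
// Lambda pivot; names and paths are illustrative assumptions.
import type { AWS } from '@serverless/typescript';

const serverlessConfiguration: AWS = {
  service: 'rick-and-morty-graphql',
  frameworkVersion: '3',
  provider: {
    name: 'aws',
    runtime: 'nodejs18.x',
    region: 'us-east-1',
  },
  functions: {
    graphql: {
      handler: 'src/server-lambda.handler',
      events: [
        { http: { path: 'graphql', method: 'post' } },
        // GET enables introspection tooling and hosted sandboxes.
        { http: { path: 'graphql', method: 'get' } },
      ],
    },
  },
};

module.exports = serverlessConfiguration;
```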

Future Plans with Terraform

The journey with Terraform will be revisited in future updates, providing an opportunity to explore and document overcoming its initial challenges. This effort underscores a commitment to mastering complex IaC solutions and sharing these experiences.

Server vs. Server-Lambda: Dockerfile, Redis, and Lambda Enhancements

Distinct Server Configurations: Local and Lambda

In this project, I utilized two distinct server files to cater to different environments: server.ts for local development and server-lambda.ts for AWS Lambda deployment.

  1. server.ts for Local Development: This file configures a traditional server setup, optimized for local development and testing. It provides a streamlined and efficient process, free from the complexities of a serverless environment.

  2. server-lambda.ts for AWS Lambda: Tailored for deployment in a serverless architecture, this file includes specific adjustments and integrations, like Redis for efficient data caching, ensuring optimal performance and scalability in AWS Lambda (see the sketch after this list).

  3. Dockerfile Solution: A Dockerfile was used to containerize the application, ensuring consistent environments and simplifying deployment. This approach aids in managing dependencies and environmental configurations across development and production stages.
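Here is that sketch of the Lambda entry point, assuming Apollo Server 4 with the official AWS Lambda integration and ioredis for the Redis layer; the actual server-lambda.ts may differ in packages and wiring:

```typescript
// server-lambda.ts -- a rough sketch, assuming Apollo Server 4 with the
// official AWS Lambda integration and ioredis; the project's actual file
// may use different packages or wiring.
import { ApolloServer } from '@apollo/server';
import {
  startServerAndCreateLambdaHandler,
  handlers,
} from '@as-integrations/aws-lambda';
import Redis from 'ioredis';
import { typeDefs, resolvers } from './schema'; // hypothetical shared module

// Created outside the handler so warm Lambda invocations reuse the connection.
const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

const server = new ApolloServer({ typeDefs, resolvers });

export const handler = startServerAndCreateLambdaHandler(
  server,
  handlers.createAPIGatewayProxyEventRequestHandler(),
  {
    // Expose the Redis cache to resolvers through the context.
    context: async () => ({ redis }),
  }
);
```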

The dual server file strategy reflects the project's adaptability, ensuring each environment's unique demands are met efficiently. This approach demonstrates the importance of tailoring the architecture to suit different deployment scenarios.

Conclusion and Reflections

Embracing Challenges and Learning

This journey through building a modern full-stack monorepo application has been as enlightening as it has been challenging. It reaffirmed my passion for digital architecture and the pursuit of innovative solutions in the web development realm.

Key takeaways from this project include:

  1. Adaptability in Technology Choices: The need to pivot from Bun to Yarn and from Terraform to AWS Lambda highlighted the importance of flexibility in technology choices. It showed that while cutting-edge tools can offer significant advantages, sometimes established technologies provide the necessary stability and reliability.

  2. Enhancing API Capabilities with GraphQL: The use of GraphQL to enrich the Rick and Morty API showcased the power of this query language in creating efficient, flexible data interactions, going beyond the limitations of traditional REST APIs.

  3. Balancing Innovation with Practicality: The project underlined the balance between embracing new technologies and ensuring practical, stable solutions, especially in a professional setting where reliability and maintainability are paramount.

  4. Infrastructure as Code (IaC) Learning Curve: The challenges faced with Terraform, and the subsequent switch to AWS Lambda, provided valuable insights into cloud infrastructure management, emphasizing the need for continuous learning and adaptability.

  5. Server Configuration for Different Environments: The use of separate server configurations for local development and AWS Lambda deployment highlighted the importance of environment-specific optimizations in software development.

Looking Ahead

As I continue to explore and master various technologies, I plan to revisit some of the initial challenges, like those encountered with Terraform, and document these experiences. This ongoing journey not only contributes to my professional growth but also serves as a resource for others navigating similar paths.

In summary, this project was a testament to the dynamic nature of web development and cloud infrastructure, where continuous learning, adaptability, and a pragmatic approach are key to success.
