Introduction
Starting a new Node.js project can be a daunting task, especially when it comes to setting up the right configuration and choosing the essential packages. In this article, we'll explore a powerful starter configuration that will help you kickstart your Node.js projects with confidence. We'll dive into the `package.json` file and the other configuration files to understand the purpose and benefits of each package included.
Directory Structure
Before we dive into the configuration details, let's examine the final directory structure our Node.js project will have:
```
├── .czrc
├── .github
│   └── workflows
│       └── node.js.yml
├── .gitignore
├── .husky
│   ├── _
│   ├── pre-commit
│   └── prepare-commit-msg
├── LICENSE
├── README.md
├── ava.config.js
├── package-lock.json
├── package.json
├── src
│   ├── index.js
│   └── sum.js
├── test
│   └── starter.spec.js
├── tsconfig.build.types.json
├── tsconfig.json
└── types
    ├── index.d.ts
    └── sum.d.ts
```
Exploring the `package.json`
Let's begin our exploration by analyzing the final `package.json` file, which acts as the foundation of our Node.js project.
```json
{
  "name": "my-node-project",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "type": "module",
  "scripts": {
    "coverage": "c8 --reporter=lcov ava",
    "coverage:view": "c8 --reporter=html --reporter=text ava",
    "lint": "npm run lint:code && npm run lint:types",
    "lint:code": "standard",
    "lint:fix": "standard --fix",
    "lint:types": "tsc",
    "prepare": "husky install",
    "test": "ava",
    "test:watch": "ava --watch",
    "build:types": "tsc -p tsconfig.build.types.json"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/your-username/my-node-project.git"
  },
  "keywords": [
    "generator"
  ],
  "contributors": [
    {
      "name": "Álvaro José Agámez Licha",
      "email": "alvaroagamez@outlook.com"
    }
  ],
  "license": "MIT",
  "devDependencies": {
    "ava": "^5.3.1",
    "c8": "^8.0.0",
    "commitizen": "^4.3.0",
    "cz-conventional-changelog": "^3.3.0",
    "husky": "^8.0.3",
    "sinon": "^15.2.0",
    "standard": "^17.1.0",
    "typescript": "^5.1.6"
  },
  "config": {
    "commitizen": {
      "path": "./node_modules/cz-conventional-changelog"
    }
  }
}
```
- **ava:**
  - Ava is a robust and developer-friendly test runner for Node.js.
  - With concise syntax and fast execution, Ava simplifies the process of writing and executing tests.
  - It allows you to create tests in the `test/**/*.spec.js` files specified in the `ava.config.js` file.
- **c8:**
  - C8 is a code coverage tool for Node.js projects.
  - By generating code coverage reports, C8 helps you understand how well your tests cover your codebase.
  - The `coverage` and `coverage:view` scripts in the `package.json` file utilize C8 to generate detailed code coverage reports.
- **commitizen and cz-conventional-changelog:**
  - Commitizen and cz-conventional-changelog work together to enforce standardized commit messages in your project.
  - Commitizen provides a command-line tool that guides you through creating consistent commit messages.
  - cz-conventional-changelog serves as the adapter for Commitizen, enforcing the conventional changelog style.
  - By adhering to a standardized commit message format, you can generate meaningful changelogs automatically.
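For example, a commit created through the Commitizen prompt ends up in the Conventional Commits shape shown below (the type, scope, and description here are purely illustrative):

```text
feat(sum): add a sum helper with JSDoc annotations

Optional body explaining the motivation for the change.
```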
- **husky:**
  - Husky simplifies the process of managing Git hooks in your project.
  - Git hooks enable you to run custom scripts at specific points during your Git workflow.
  - Husky's `prepare` script installs the necessary hooks, ensuring consistent behavior across different environments.
- **sinon:**
  - Sinon is a powerful library for creating test spies, stubs, and mocks in JavaScript.
  - Although not explicitly used in this configuration, Sinon can be utilized in your test files to manage dependencies and create test doubles.
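To make Sinon's role concrete, here is a hand-rolled version of what `sinon.spy()` gives you: a fake function that records its calls so a test can assert on them. The `greetAndNotify` function is a hypothetical example, not part of the starter, and the sketch uses plain JavaScript so it runs without Sinon installed:

```javascript
// Hypothetical function whose collaborator we want to observe in a test.
function greetAndNotify (name, notify) {
  const message = `Hello, ${name}!`
  notify(message)
  return message
}

// A minimal hand-rolled "spy": a function that records every call it receives.
// With Sinon this boilerplate collapses to: const notify = sinon.spy()
function makeSpy () {
  const calls = []
  const spy = (...args) => { calls.push(args) }
  spy.calls = calls
  return spy
}

const notify = makeSpy()
const result = greetAndNotify('Ava', notify)

console.log(result)              // Hello, Ava!
console.log(notify.calls.length) // 1
```

In a real spec you would use `sinon.spy()` (or `sinon.stub()` when you also need to control return values) and assert with helpers like `spy.calledOnce`.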
- **standard:**
  - Standard is a popular JavaScript code style and formatting tool.
  - It enforces a set of predefined rules to maintain code consistency and readability.
  - The `lint:code` and `lint:fix` scripts in the `package.json` file use Standard to check and automatically fix linting issues, respectively.
- **typescript:**
  - Static Typing: TypeScript introduces static typing to JavaScript, enabling you to define and enforce types for variables, parameters, return values, and more. This helps catch type-related errors early, provides code clarity, and improves overall code quality.
  - Advanced Tooling: TypeScript integrates seamlessly with popular code editors and IDEs, offering advanced tooling features such as autocompletion, type checking, and refactoring support. This enhances developer productivity and provides real-time feedback on potential issues.

Note: this project doesn't use TypeScript to write the code; it uses the TypeScript compiler to check and generate the type definitions written as JSDoc comments.
Let's Start
Step 1: Project Initialization
To kick off our Node.js project, you need to initialize a new directory. Open your terminal and navigate to the desired location for your project. Now, execute the following command:
```sh
npm init
```
This command triggers the project initialization process and prompts you to provide essential information about your project, such as its name, version, description, entry point, and more. Feel free to answer these questions and let npm generate a shiny new `package.json` file for you.
Activating the ES Module System
I like to set the `"type": "module"` entry in the `package.json` file to specify that our project uses ECMAScript modules (ES modules) rather than the CommonJS module system, which is the default in Node.js.

By setting `"type": "module"`, you inform Node.js that your project's code will be written using ECMAScript module syntax, which includes using `import` and `export` statements to share code between modules.

Here's a breakdown of the implications and benefits of setting `"type": "module"`:
- **ES Modules Syntax:** When `"type": "module"` is specified, you can use the ES module syntax for importing and exporting code within your project. Instead of using `require()` and `module.exports`, you use `import` and `export` statements.
- **Static Analysis:** With ES modules, the import and export statements are statically analyzable. This allows tools like bundlers, linters, and transpilers to better understand your code's dependencies and perform optimizations.
- **Improved Compatibility:** Setting `"type": "module"` makes your code more compatible with modern JavaScript tooling and environments that support ES modules. It aligns your project with the direction of the JavaScript ecosystem and enables you to take advantage of the latest features and tools.
It's important to note a few considerations when using `"type": "module"`:
- **File Extension:** When using ES modules, it's recommended to use the `.mjs` file extension for your modules. However, Node.js allows using `.js` for ES modules as well, as long as `"type": "module"` is specified.
- **Node.js Support:** While many modern versions of Node.js support ES modules, older versions may have limited or no support. When using `"type": "module"`, make sure you are using a Node.js version that includes support for ECMAScript modules.
- **Interoperability:** Mixing ES modules and CommonJS modules (using `require` and `module.exports`) within the same project can lead to compatibility issues. Ensure that all your project's modules are written using the same module system to avoid conflicts.
By specifying `"type": "module"` in your `package.json`, you signal that your project is designed to use ES modules, allowing you to leverage the benefits of the modern module system and aligning your project with the evolving JavaScript ecosystem.
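To make the contrast concrete, here is a sketch of a hypothetical `sum` module under both systems; the comments show the two syntaxes side by side, and only the plain function body actually runs here:

```javascript
// CommonJS (the Node.js default):
//   module.exports = { sum }           // in sum.js
//   const { sum } = require('./sum')   // in index.js
//
// ES modules (with "type": "module"):
//   export function sum (a, b) { ... } // in sum.js
//   import { sum } from './sum.js'     // in index.js (note the explicit extension)

// The function itself is identical under both module systems:
function sum (a, b) {
  return a + b
}

console.log(sum(2, 3)) // 5
```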
Step 2: Package Installation
Now that your project has been initialized, it's time to install the necessary packages to empower your Node.js application. npm simplifies package installation through a straightforward command. Run the following in your terminal to install every development dependency this starter uses:

```sh
npm i -D ava c8 commitizen cz-conventional-changelog husky sinon standard typescript
```

Sit back and relax as npm swiftly downloads and installs all the required packages. By specifying the package names along with the `-D` flag, npm adds them to your `package.json` file as `devDependencies`.
Step 3: Commitizen Configuration
In your `package.json` file, the `"config"` section is used to define custom configuration options for various tools or libraries used within your project. In this case, the `"config"` section specifies a configuration option for the Commitizen tool.
The specific configuration option being set for Commitizen is `"path"`. The `"path"` option specifies the location of the Commitizen adapter or module that should be used for generating commit messages in the desired format. In this case, the `"path"` is set to `"./node_modules/cz-conventional-changelog"`.
This configuration allows Commitizen to understand and enforce the conventional commit message format defined by the "cz-conventional-changelog" module. The conventional commit message format typically includes a standardized structure for commit messages, including a type, scope, and description, which is useful for generating changelogs and tracking changes in a standardized way.
By defining this configuration option, you ensure that Commitizen uses the specified adapter for generating commit messages according to the conventional commit message format within your project.
Add the following to your `package.json`:
"config": {
"commitizen": {
"path": "./node_modules/cz-conventional-changelog"
}
}
npm Scripts
To create the scripts, I use the `npm pkg set` command:
```sh
npm pkg set scripts.coverage="c8 --reporter=lcov ava"
npm pkg set scripts.coverage:view="c8 --reporter=html --reporter=text ava"
npm pkg set scripts.lint="npm run lint:code && npm run lint:types"
npm pkg set scripts.lint:code="standard"
npm pkg set scripts.lint:fix="standard --fix"
npm pkg set scripts.lint:types="tsc"
npm pkg set scripts.prepare="husky install"
npm pkg set scripts.test="ava"
npm pkg set scripts.test:watch="ava --watch"
npm pkg set scripts.build:types="tsc -p tsconfig.build.types.json"
```
The `npm pkg set` command is used to set a value in our `package.json` file. The syntax for the command is:
```sh
npm pkg set <field>=<value>
```
Additional Configuration Files
In addition to the `package.json` file, there are a few additional configuration files that complement the setup:
- **.czrc:**
  - This file configures the Commitizen adapter, specifying the path to `cz-conventional-changelog`.
  - By referencing the correct adapter, Commitizen ensures that your commit messages follow the conventional changelog style.
  - To generate the `.czrc` file, execute the following command:

    ```sh
    echo '{ "path": "cz-conventional-changelog" }' > .czrc
    ```
- **ava.config.js:**
  - This file exports a configuration object for Ava, the test runner.
  - The specified `files` array determines which test files are included during test execution.
  - In this starter configuration, all files matching the pattern `test/**/*.spec.js` will be considered for testing.
  - Content of `ava.config.js`:

    ```javascript
    export default {
      files: [
        'test/**/*.spec.js'
      ]
    }
    ```
- **.husky:**
  - The `pre-commit` hook script is used to execute actions before committing changes, such as running tests or linting the code.
  - The `prepare-commit-msg` hook runs the Commitizen CLI in interactive mode, allowing you to create standardized commit messages effortlessly.
Husky Configuration
To configure Husky, the first thing is to have a Git repository for our project, so if you don't have one right now, run the next command in your terminal (inside your project directory):
```sh
git init
```
This command will create a new Git repository for our project. Now we can initialize Husky using the following command:
```sh
npm run prepare
```
After this command is executed, a new folder called `.husky` will be created inside our project directory. The purpose of the `.husky` directory and its files is to define and configure the Git hooks for our project. Next, create the hooks themselves:
```sh
npx husky add .husky/prepare-commit-msg 'exec < /dev/tty && node_modules/.bin/cz --hook || true'
npx husky add .husky/pre-commit "npm test"
npx husky add .husky/pre-commit "npm run lint"
```
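After running these commands, the generated `.husky/pre-commit` file should look roughly like the following (Husky v8 adds the shebang and bootstrap line, and the second `husky add` appends to the existing hook; `prepare-commit-msg` gets the same preamble plus the Commitizen command):

```sh
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npm test
npm run lint
```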
Creating a Generic Unit Test with Ava
To ensure that your unit tests are functioning correctly, it's essential to create a generic unit test case that can serve as a starting point. In this section, we'll demonstrate how to create a simple unit test using Ava, utilizing the provided code example.
To begin, follow these steps:
1. Navigate to our project's `test` directory.
2. Create a new file named `starter.spec.js`.
3. Add the following code to the `starter.spec.js` file:
```javascript
import test from 'ava'

test('should pass the first starter test', t => {
  t.pass()
})
```
Now, to execute the unit test, you can use the `test` script defined in the `package.json` file. The `test` script is set to run Ava, the test runner, and execute all the tests located in the `test` directory.
To run the test, execute the following command in your terminal:
```sh
npm run test
```
Understanding the Provided Test Scripts
Let's take a closer look at the test-related scripts defined in the `package.json` file:
"scripts": {
"test": "ava",
"test:watch": "ava --watch"
}
"test"
: This script runs Ava and executes all the tests located in thetest
directory. When you runnpm run test
, Ava starts executing the test files and displays the results in the terminal."test:watch"
: This script also runs Ava, but with the--watch
flag. This flag enables the watch mode, which means that Ava continuously monitors the test files for changes. When a file is modified, Ava automatically reruns the corresponding tests, providing instant feedback during development. To use this script, executenpm run test:watch
in your terminal.
Ava will execute the `starter.spec.js` test, and if everything is set up correctly, you should see the test result, indicating that it passed successfully.
Linting Your Code with Standard
Linting is a crucial part of the development process that helps maintain code quality and enforce consistent code style. In this section, we'll discuss how the `standard` package is used for linting our code, along with the two scripts that drive it: `lint:code` and `lint:fix`.
Now, let's explore the purpose and functionality of the two linting scripts:

Script: `"lint:code"`

```json
"lint:code": "standard"
```

The `lint:code` script invokes the `standard` command, provided by the Standard package. When executed, it analyzes your project's source code files and reports any linting errors or warnings. This script allows you to quickly identify code quality issues and enforce consistent style across your codebase.

To run the linting process, execute the following command in your project's directory (or `npm run lint`, which runs `lint:code` followed by the `lint:types` type check):

```sh
npm run lint:code
```

Upon execution, Standard will scan your source code files and display any linting errors or warnings in the console. These messages will provide details about the specific issues found and their locations in your code.
Script: "lint:fix"
"lint:fix": "standard --fix"
The lint:fix
script performs the same linting analysis as the lint
script but with an additional capability to automatically fix certain issues. By appending the --fix
flag to the standard
command, the script attempts to resolve linting errors or apply suggested fixes wherever possible. However, be cautious when using automatic fixes, as they can introduce unintended changes to your code. It's always recommended to review the changes made by the lint:fix
script manually.
To run the linting process with automatic fixes, execute the following command in your project's directory:
npm run lint:fix
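As a quick illustration, here is a hypothetical snippet before and after `standard --fix`; auto-fixable issues such as quote style, semicolons, and spacing are rewritten, while problems like unused variables are only reported:

```javascript
// Before `standard --fix`:
//   const greeting = "hello";
//   console.log( greeting );

// After `standard --fix` (single quotes, no semicolons, normalized spacing):
const greeting = 'hello'
console.log(greeting)
```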
Generating TypeScript Types using JSDoc and the Benefits of Type Checking
JSDoc can be a powerful tool not only for generating TypeScript types but also for enabling type checking in your JavaScript code. By providing type annotations through JSDoc comments, you can improve code clarity, maintainability, and catch potential errors early on. In this section, we'll explore how to use JSDoc to generate TypeScript types and discuss the benefits of incorporating type checking into your development process.
The Benefits of Type Checking
Type checking offers several key benefits for your JavaScript codebase:
- **Early Error Detection:** By incorporating type checking, you can catch errors early in the development process. The TypeScript compiler examines your code and provides immediate feedback on potential type-related issues, such as passing incorrect arguments to functions or assigning incompatible values to variables. This helps identify and resolve bugs before they manifest at runtime.
- **Enhanced Code Clarity:** Type annotations offer improved code documentation and self-documenting code. When reviewing the codebase, developers can quickly understand the expected types of variables, parameters, and return values, making it easier to reason about the code's behavior. The type information serves as a form of documentation, reducing ambiguity and enhancing collaboration.
- **Refactoring Confidence:** With type checking, refactoring becomes less error-prone. When making changes to your codebase, the TypeScript compiler highlights type mismatches and inconsistencies, enabling you to confidently update code without the fear of introducing subtle bugs. The compiler flags any potential issues, allowing you to address them proactively.
- **Tooling and IDE Support:** TypeScript's static typing enables advanced tooling features in Integrated Development Environments (IDEs) and text editors. Autocompletion, intelligent code suggestions, and real-time error highlighting are some of the benefits that come with type checking. These features enhance developer productivity, making it easier to navigate and write code.
- **Maintainable and Scalable Codebase:** As projects grow larger and more complex, maintaining code quality becomes increasingly challenging. Type checking helps ensure the integrity and maintainability of the codebase. It enables better collaboration among team members, facilitates code reviews, and reduces the likelihood of introducing regression bugs.
To generate TypeScript types using JSDoc, follow these steps:
1. Ensure that your JavaScript code files contain JSDoc comments with appropriate type annotations. For example, consider the following JSDoc comment:

   ```javascript
   /**
    * Calculate the sum of two numbers.
    *
    * @param {number} a - The first number.
    * @param {number} b - The second number.
    * @returns {number} The sum of the two numbers.
    */
   function sum (a, b) {
     return a + b
   }
   ```

   In this example, the JSDoc comment provides type annotations for the `a` and `b` parameters and the return value of the `sum` function.

2. Configure your TypeScript compiler options appropriately to generate TypeScript types from JSDoc comments. We'll examine the provided `tsconfig.json` and `tsconfig.build.types.json` files to understand the necessary settings.
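To see the check in action, here is the annotated `sum` again together with a call that `npm run lint:types` would reject (the exact diagnostic wording may vary by compiler version):

```javascript
/**
 * Calculate the sum of two numbers.
 *
 * @param {number} a - The first number.
 * @param {number} b - The second number.
 * @returns {number} The sum of the two numbers.
 */
function sum (a, b) {
  return a + b
}

console.log(sum(1, 2)) // 3

// With checkJs enabled, the next line would fail the type check with something like:
//   error TS2345: Argument of type 'string' is not assignable to parameter of type 'number'.
// sum('1', 2)
```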
Understanding the tsconfig Files
Now, let's explore the provided `tsconfig` files:
tsconfig.json:
```json
{
  "compilerOptions": {
    "target": "ESNext",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "lib": [
      "ESNext"
    ],
    "allowJs": true,
    "alwaysStrict": true,
    "checkJs": true,
    "esModuleInterop": true,
    "noEmit": true,
    "noImplicitAny": true,
    "noImplicitThis": true,
    "strict": true
  },
  "include": [
    "test",
    "src"
  ],
  "exclude": [
    "node_modules"
  ]
}
```
- **Compiler Options:** The `compilerOptions` field specifies the TypeScript compiler options. These options define how TypeScript should compile your code and generate the appropriate JavaScript and TypeScript files.
- **JSDoc Support:** In this configuration, key options related to JSDoc support include:
  - `allowJs: true`: This option enables TypeScript to process JavaScript files in your project.
  - `checkJs: true`: This option instructs TypeScript to check the validity of JSDoc comments and type annotations in JavaScript files.
- **Include and Exclude:** The `include` field specifies the directories or files that should be included in the compilation process. In this case, both the `test` and `src` directories are included. The `exclude` field specifies the directories or files that should be excluded from the compilation, such as the `node_modules` directory.
tsconfig.build.types.json:
```json
{
  "extends": "./tsconfig",
  "compilerOptions": {
    "declarationDir": "./types",
    "noEmit": false,
    "declaration": true,
    "emitDeclarationOnly": true
  },
  "include": [
    "src"
  ]
}
```
- **Extending Base Configuration:** The `extends` field is used to extend the base `tsconfig.json` configuration, inheriting its settings and overriding or adding specific options for the `build:types` script.
- **Declaration Files:** The options within `compilerOptions` are specific to generating TypeScript declaration files (`.d.ts`). The key options are:
  - `declarationDir`: This option specifies the output directory where the declaration files will be generated. In this example, they will be output to the `types` directory.
  - `noEmit`: This option is set to `false` to enable TypeScript to emit output files, including the declaration files.
  - `declaration`: This option is set to `true` to generate the declaration files.
  - `emitDeclarationOnly`: This option ensures that only the declaration files are emitted, excluding the compiled JavaScript files.
- **Include:** The `include` field specifies the directories or files that should be included in the compilation process for generating the declaration files. In this case, only the `src` directory is included.
Explaining the npm Scripts
Let's examine the two npm scripts related to TypeScript type generation and their functionalities:
"scripts": {
"lint:types": "tsc",
"build:types": "tsc -p tsconfig.build.types.json"
}
"lint:types"
: This script uses the TypeScript compiler (tsc
) to perform a type check on your TypeScript types. It ensures that your JSDoc comments provide valid and accurate type information. When you runnpm run lint:types
, the TypeScript compiler checks the types and reports any errors or inconsistencies."build:types"
: This script uses the TypeScript compiler with a customtsconfig.build.types.json
configuration file. It generates TypeScript declaration files (.d.ts
) based on your JSDoc comments. The generated declaration files provide type information for TypeScript-aware tools and libraries. Runningnpm run build:types
triggers the TypeScript compiler with the specifiedtsconfig.build.types.json
configuration, which is explained in the next section.
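For the annotated `sum` function shown earlier, `npm run build:types` emits a declaration file into `types/sum.d.ts` along these lines (the exact output depends on the TypeScript version):

```typescript
/**
 * Calculate the sum of two numbers.
 *
 * @param {number} a - The first number.
 * @param {number} b - The second number.
 * @returns {number} The sum of the two numbers.
 */
export function sum(a: number, b: number): number;
```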
Code Coverage
Ensuring adequate test coverage is crucial for maintaining code quality and identifying potential areas of improvement. Ava provides support for code coverage, allowing you to measure how much of your code is exercised by your unit tests. Let's explore the coverage-related scripts provided in the `package.json` file and understand their functionalities.
Understanding the Provided Coverage Scripts
The `package.json` file includes the following coverage-related scripts:
"scripts": {
"coverage": "c8 --reporter=lcov ava",
"coverage:view": "c8 --reporter=html --reporter=text ava"
}
- `"coverage"`: This script measures the code coverage by running Ava with the `c8` code coverage tool and generating an LCOV (Line Coverage) report.
  - The `--reporter=lcov` option specifies the LCOV reporter, which produces a detailed coverage report in a standardized format.
  - When executing `npm run coverage`, Ava runs the tests, and `c8` tracks the code coverage during the test execution. After the tests finish, it generates the LCOV report.
- `"coverage:view"`: This script generates an HTML coverage report and also includes a text-based summary.
  - `c8` is again used for code coverage analysis, but this time with different reporters.
  - The `--reporter=html` option instructs `c8` to generate an HTML coverage report.
  - The `--reporter=text` option ensures that a text-based summary of the coverage results is also displayed.
  - When executing `npm run coverage:view`, Ava runs the tests, and `c8` collects the code coverage data. It then generates an HTML report and displays a summary in the terminal.
Running the Coverage Scripts
To measure code coverage and generate reports, you can execute the coverage-related scripts defined in the `package.json` file. Follow the instructions below:
1. Open your terminal or command prompt.
2. To generate an LCOV coverage report, execute the following command:

   ```sh
   npm run coverage
   ```

   Ava will run the tests while `c8` tracks code coverage. Once the tests complete, the LCOV report will be generated in `coverage/lcov.info`, along with an HTML version at `coverage/lcov-report/index.html`.

3. To generate an HTML coverage report and view a summary, execute the following command:

   ```sh
   npm run coverage:view
   ```

   Ava will execute the tests, and `c8` will generate an HTML coverage report. The report will be saved in the `coverage` directory, and a summary will be displayed in the terminal.
By using these coverage scripts, you can gain insights into the effectiveness of your tests and identify areas that may require additional testing or refactoring. Understanding the coverage metrics allows you to make informed decisions and improve the quality and reliability of your codebase.
Feel free to integrate the coverage scripts into your development workflow, executing them alongside your regular testing process or as part of your CI/CD pipeline.
Automating Node.js Continuous Integration with GitHub Actions
Continuous Integration (CI) plays a crucial role in modern software development, allowing developers to automate build, test, and deployment processes. In this section, we'll explore how to leverage GitHub Actions to set up a CI workflow for a Node.js project. Let's dive into the details of the provided GitHub Actions workflow and understand each part:
```yaml
name: Node CI

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
```
- **Name:** The `name` field sets the name of the workflow. In this case, it's set to "Node CI," but you can customize it to reflect the purpose of your CI workflow.
- **Triggers:** The `on` field specifies the events that trigger the workflow. In this example, the workflow is triggered on push events to the `master` branch and pull requests targeting the `master` branch. You can modify the branch names to match your project's branch configuration.
Next, let's dive into the `jobs` section of the workflow:
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [16.x, 18.x, 20.x]
```
- **Job:** The `jobs` section defines one or more jobs that make up the workflow. In this case, there is a single job named `build`.
- **Runs-on:** The `runs-on` field specifies the type of machine to execute the job. In this example, the job runs on the latest version of the Ubuntu operating system.
- **Strategy:** The `strategy` field allows you to define a matrix of configurations for the job. Here, the `node-version` matrix defines different versions of Node.js to be tested. The workflow will run the job for each version specified.
Now, let's explore the steps defined within the `build` job:
```yaml
steps:
  - name: 'Checkout Project'
    uses: 'actions/checkout@v3'
    with:
      fetch-depth: 0
  - name: Use Node.js ${{ matrix.node-version }}
    uses: 'actions/setup-node@v3'
    with:
      node-version: ${{ matrix.node-version }}
  - name: 'Install Dependencies'
    run: 'npm ci'
  - name: Lint Files
    run: 'npm run lint'
  - name: 'Run Tests and Coverage'
    env:
      CI: true
    run: 'npm run coverage'
  - name: 'Coveralls Parallel'
    uses: 'coverallsapp/github-action@v2'
    with:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      parallel: true
      path-to-lcov: './coverage/lcov.info'
  - name: 'Coveralls Finished'
    uses: 'coverallsapp/github-action@v2'
    with:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      parallel-finished: true
```
- **Checkout Project:** The first step checks out the project's source code using the `actions/checkout` GitHub Action. It fetches the full commit history by setting `fetch-depth: 0`, ensuring access to all previous commits.
- **Use Node.js:** The `actions/setup-node` GitHub Action is used to set up the desired Node.js version specified in the matrix. It ensures the correct version is available for subsequent steps.
- **Install Dependencies:** This step executes the `npm ci` command to install the project dependencies using the `npm` package manager. The `ci` command installs dependencies based on the `package-lock.json` or `npm-shrinkwrap.json` files, ensuring reproducibility and consistency.
- **Lint Files:** The `npm run lint` command is executed to run the linter and perform static code analysis. This step helps identify and enforce code quality and style standards.
- **Run Tests and Coverage:** The `npm run coverage` command is executed with the `CI=true` environment variable. This runs the tests using the test runner (Ava, in this case) and generates code coverage reports.
- **Coveralls Parallel:** The `coverallsapp/github-action` GitHub Action is used to upload the code coverage information to a service like Coveralls. This step runs in parallel with other jobs and utilizes the specified `path-to-lcov` file to provide coverage details.
- **Coveralls Finished:** The second `coverallsapp/github-action` step signifies the end of parallel coverage uploads. It ensures that all parallel jobs have completed and aggregates the coverage information before finishing the workflow.
By using this GitHub Actions workflow, you can automate the build, test, and coverage processes for your Node.js project. It provides a robust CI pipeline that runs on different Node.js versions, installs dependencies, performs linting, executes tests, and uploads coverage reports to a service like Coveralls. Tailor the workflow to fit your specific project requirements and enhance it with additional steps, such as deployment processes, as needed.
Keep in mind that you can customize the workflow further based on your project's specific needs, integrating additional testing frameworks, static analyzers, or other tools to suit your development process.
Coveralls Integration
To activate the Coveralls integration for your GitHub repository, follow these steps:
Sign in to your Coveralls account at coveralls.io. If you don't have an account, you can sign up for free.
Once you're signed in, click on the "Add Repos" button in the top navigation bar.
On the "Add Repos" page, you will see a list of your GitHub repositories. Find the repository you want to enable Coveralls for and click the "Add" button next to it.
After the GitHub Action runs, you can open your repository on Coveralls to see the coverage report, which shows the overall coverage percentage and a per-file breakdown.
Conclusion
In this article, we've explored a powerful Node.js starter configuration that incorporates essential packages to enhance your development workflow. By leveraging the packages mentioned above, you can streamline your testing, ensure code quality, enforce commit message standards, and simplify Git workflows.
With this solid foundation, you can focus on building amazing Node.js applications without worrying about the initial setup. Feel free to adapt and customize this starter configuration to match your specific project requirements.
If you want to add Docker and Docker Compose to this project, you can read my previous article.
The full code for this Node.js Starter is available on GitHub.
Happy coding!
If you have any questions or need further assistance, please let me know in the comments section below or message me at Twitter or LinkedIn.