
Testing your Amplify Application with Jest and Cypress

Rakan Nimer ・ 16 min read

In this post we will be writing static, unit, integration and end-to-end (e2e) tests for a web app deployed with Amplify Console and that uses an Amplify-generated AppSync GraphQL API to query, mutate and subscribe to data.

The app we're testing can be found here and the final code with tests here.

Introduction

Before we continue, if you're not sure what the differences are between the different types of tests, or what each type means then read this post by @kentcdodds (Honestly, you should read it even if you do).

Static tests are done not by executing the code, but by reading it, parsing it and trying to find problems in it. We will be using TypeScript, ESLint and Prettier for our static testing.

Unit tests make sure that individual units of code (functions, components, classes...) produce the right output (and effects) for a given input. We will be unit testing the app's React reducer, a pure function (deterministic and no side-effects).

Integration tests give us confidence that different units of code work together as we expect them to. We will test our routes component with React Testing Library.

And finally, e2e tests interact with our app as our end users would.
We'll build our code then interact with it and run assertions on it with cypress and Cypress Testing Library.

Static tests

Typescript

The app we're testing uses Next.js. Starting from version 9, Next.js has TypeScript support out of the box with no configuration required (More info).

If you're starting from scratch run : npx create-next-app --example with-typescript

So we'll just write our code in TypeScript and run the TypeScript compiler to verify there are no errors before every push.

To do that we will need to add a git hook that runs the TypeScript compiler before every push and prevents us from pushing if the code compiles with errors.

Husky makes adding and configuring git hooks easy.

We start by adding husky as a development dependency :

npm i -D husky # Or yarn add -D husky

And then in package.json, add a husky section with git hooks configured

{
  "husky": {
    "hooks": {
      "pre-push": "tsc"
    }
  }
}

And that's it for TypeScript. From now on, any time we try to push code that doesn't compile, husky will abort the push.

ESLint

As of 2019, ESLint has full TypeScript support. TSLint will soon be deprecated in favor of ESLint, so it might be wiser to use ESLint in new projects.

To do that we will start by setting up ESLint with JavaScript then add TypeScript support

Start by installing eslint, the eslint react plugin and the typescript parser

yarn add -D eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin eslint-plugin-react # npm i -D eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin eslint-plugin-react

And then initialize eslint with the config file .eslintrc.js in the root dir of the project:

module.exports = {
  extends: [
    "eslint:recommended",
    "plugin:@typescript-eslint/recommended",
    "plugin:react/recommended"
  ],
  parserOptions: {
    ecmaFeatures: {
      jsx: true,
      modules: true
    },
    ecmaVersion: 2018,
    sourceType: "module"
  },
  parser: "@typescript-eslint/parser",
  plugins: ["react"],
  rules: {
    // I usually turn these rules off out of personal preference, feel free to delete the rules section in your project
    "@typescript-eslint/explicit-function-return-type": "off",
    "react/prop-types": "off"
  }
};

To lint your code, run :

# Lint all ts or tsx files in src/ and src/{any}/
yarn eslint src/**/*.ts*  src/*.ts* # or $(npm bin)/eslint src/**/*.ts* src/*.ts*

or add a script in package.json to run the command :

{
  "scripts": {
    "lint": "eslint src/**/*.ts*  src/*.ts*"
  }
}

Since the project uses Amplify Codegen we'll need to tell eslint to ignore the generated code emitted by the cli using a .eslintignore file.

As made evident by the name, it behaves like .gitignore but for eslint.

# Path to code generated by Amplify
src/graphql/
src/API.ts

And finally, download and install an ESLint plugin for your editor to see warnings and errors as you type. Here's a link to the plugin if you're using VSCode.

Prettier

Using prettier is a no-brainer, and it also counts as a form of static testing: it parses the code and throws when it's unable to.

yarn add -D prettier # npm i -D prettier

Then add prettier to your code editor and never think about formatting again.

The final git hooks in package.json becomes :

{
  "husky": {
    "hooks": {
      "pre-commit": "prettier --write \"src/*.ts\" \"src/**/*.ts*\"",
      "pre-push": "tsc && yarn lint"
    }
  }
}

Note that this lints and runs prettier on your whole codebase. If you're working on a large codebase, it might be a good idea to use lint-staged to check changed files only.
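If you do go the lint-staged route, a minimal setup could look like the following (a sketch, not this project's exact configuration):

```json
{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "lint-staged": {
    "*.{ts,tsx}": ["prettier --write", "eslint"]
  }
}
```

With this in place, the pre-commit hook only formats and lints the files staged for the current commit.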

Setting up Jest with TypeScript and ESLint

There are two ways to set up Jest with TypeScript: you can either use Babel to strip the types before running the code (no type checks) or use the TypeScript compiler to compile the code before running it. The official docs seem to point the user towards Babel, and Jest is much faster with Babel than with ts-jest and tsc. So we'll go with Babel and keep type checks in the pre-push hook.

1. Setup Jest with Babel

Run

yarn add -D jest @types/jest babel-jest @babel/core @babel/preset-env @babel/preset-react

Create a babel.config.js file in the root directory and in it, add :

module.exports = {
  presets: [
    [
      "@babel/preset-env",
      {
        targets: {
          node: "current"
        }
      }
    ],
    "@babel/preset-react"
  ]
};

2. Add TypeScript support to Babel

yarn add -D @babel/preset-typescript

and in babel.config.js :

- "@babel/preset-react"
+ "@babel/preset-react",
+ "@babel/preset-typescript"

3. Configure ESLint with Jest

Install eslint-plugin-jest

yarn add -D eslint-plugin-jest # npm i -D eslint-plugin-jest

And in the .eslintrc.js file, add the jest plugin and jest globals (describe, test, expect...) :

module.exports = {
  env: {
    browser: true,
-    es6: true
+    es6: true,
+    "jest/globals": true
  },
-  plugins: ["@typescript-eslint", "react"],
+  plugins: ["@typescript-eslint", "react", "jest"],
}

At this point, Jest should be setup correctly with ESLint and TypeScript.

Running a test consists of adding a TS file to the __tests__ directory and executing :


yarn jest # $(npm bin)/jest # npx jest

Unit tests

Unit tests make sure that functions behave as expected given some input.

Pure functions lend themselves well to unit testing.

The React reducer we use contains the main logic of the app and is a pure function. For every given combination of state and action the function returns a new state.
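To make the tests below concrete, here is a simplified sketch of such a reducer. The types and shapes are inferred from the tests in this post; the app's real reducer handles more action types:

```typescript
// Simplified sketch of the app's reducer - the real one handles more
// action types. Shapes are inferred from the tests in this post.
type Channel = { id: string };
type State = { channels: { items: Channel[]; nextToken: string | null } };
type Action = {
  type: "append-channels";
  payload: { items: Channel[]; nextToken: string | null };
};

const getInitialState = (): State => ({
  channels: { items: [], nextToken: "" }
});

// Pure function: the same (state, action) always yields the same new state
const reducer = (state: State, action: Action): State => {
  switch (action.type) {
    case "append-channels":
      return {
        channels: {
          items: [...state.channels.items, ...action.payload.items],
          nextToken: action.payload.nextToken
        }
      };
    default:
      return state;
  }
};
```

Because it's deterministic and side-effect free, testing it needs no mocks at all.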

Jest is a testing framework that focuses on simplicity; we will use it for both the unit and integration tests.

Testing the Reducer

Testing the reducer function consists of calling the reducer with different actions and states and running assertions on the output.

We define every test to be of the following type :

type ReducerTest = {
  state: State;
  action: Action;
  assertions: (newState: State, state: State, action: Action) => void;
};

For example, a simple test to make sure that adding a channel works, would look like the following :

import cases from "jest-in-case";

const reducerTest = {
  name: "Can append channel to empty state",
  state: getInitialState(),
  action: {
    type: "append-channels",
    payload: { items: [createChannel()], nextToken: null }
  },
  assertions: (newState, state, action) => {
    expect(newState.channels.items.length).toEqual(1);
  }
};

const tests = [reducerTest];

const runTest = reducerTest => {
  const newState = reducer(reducerTest.state, reducerTest.action);
  reducerTest.assertions(newState, reducerTest.state, reducerTest.action);
};

cases("works", runTest, tests);

and adding tests consists of adding items to your tests array.

jest-in-case is a "Jest utility for creating variations of the same test"
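Conceptually, jest-in-case just registers one test per item in your array. A framework-free sketch of that idea (not the library's actual implementation, which hooks into Jest's describe/test):

```typescript
// Framework-free sketch of jest-in-case's core idea: run a tester
// against many named cases. The real library registers Jest tests.
type NamedCase = { name: string };

const runCases = <C extends NamedCase>(
  tester: (testCase: C) => void,
  testCases: C[]
): { passed: string[]; failed: string[] } => {
  const passed: string[] = [];
  const failed: string[] = [];
  for (const testCase of testCases) {
    try {
      tester(testCase); // a tester throws to signal failure, like an assertion
      passed.push(testCase.name);
    } catch {
      failed.push(testCase.name);
    }
  }
  return { passed, failed };
};
```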

More tests can be found here.

Integration Tests

These will give us confidence that our components work as expected together. We will be testing and running assertions on route components.

But before doing that we need to setup mocking.

Choosing what to mock

Mocking consists of replacing a unit of code with another that has the same API but not the same effects.
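Here's the idea in miniature, with hypothetical names: the real function and its mock share a signature, but only the real one has effects.

```typescript
// Toy example with made-up names: both implementations satisfy GetUser,
// but only the mock is free of side effects.
type GetUser = (id: string) => Promise<{ id: string; name: string }>;

// "Real" unit: pretend this performs a network request.
const getUser: GetUser = id =>
  new Promise(resolve =>
    setTimeout(() => resolve({ id, name: "Remote User" }), 100)
  );

// Mock: same API, deterministic output, no timers or network.
const getUserMock: GetUser = async id => ({ id, name: "Test User" });
```

Code under test that only depends on the GetUser signature can't tell the two apart.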

For example, suppose we wanted to mock the API object from @aws-amplify/api.

The app only uses the graphql method of API and the graphqlOperation helper, so mocking those two is sufficient.

@aws-amplify/api is an npm module, in order to mock it, we need to add a __mocks__ folder to the root directory and inside it, create a folder @aws-amplify and file called api.ts.

__mocks__/@aws-amplify/api.ts would look like this :

// Sketch: isSubscription is a placeholder here; the real mock would
// return an observable for subscriptions and a promise otherwise.
const API = {
  graphql: operation => {
    if (isSubscription(operation)) return Observable;
    else return Promise;
  }
};
export const graphqlOperation = (query, variables) => ({ query, variables });
export default API;

But mocking at this low level makes it harder to test the right behavior.

For example, suppose that on mount a component calls API.graphql 3 times: once for a mutation, once for a query and once for a subscription.

To test it we'd need to make the API.graphql mock relatively complex (it would need to parse the query on each call and return the appropriate type of data), so we'll go one level higher.

Instead of mocking the @aws-amplify/api module, we will mock our models.

Models in this app are the only interfaces available to the UI to interact with the remote API. Components are not allowed to use @aws-amplify/api, they use models that talk with the API, massage the data when needed and return it back to the caller using an Observable or a Promise.

For example to get a promise that lists all channels we'd write something like this :

In App.tsx

import * as React from "react";
import { models } from "./models/ModelsContext";

const App = () => {
  const [channels, setChannels] = React.useState({ items: [], nextToken: "" });
  React.useEffect(() => {
    models.Channels.getChannels().then(chans => {
      setChannels(c => ({
        items: [...c.items, ...chans.items],
        nextToken: chans.nextToken
      }));
    });
  }, []);
  const loadMore = () => {
    models.Channels.getChannels(channels.nextToken).then(chans => {
      setChannels(c => ({
        items: [...c.items, ...chans.items],
        nextToken: chans.nextToken
      }));
    });
  };
  return (
    <Some>
      <ReactTree
        onEndReached={() => {
          loadMore();
        }}
      >
        {channels.items.map(chan => (
          <ChannelCard channel={chan} />
        ))}
      </ReactTree>
    </Some>
  );
};

And in models/Channels.tsx :

import API, { graphqlOperation } from "@aws-amplify/api";
import { queryToGetChannels } from "path/to/generated/graphql/queries";

const EMPTY_CHANNELS = { items: [], nextToken: "" }

export const getChannels = async () => {
  try {
    const channels = await API.graphql(graphqlOperation(queryToGetChannels));
    if (isValidChannelsData(channels)) {
      return channels;
    }
    return EMPTY_CHANNELS;
  } catch (err) {
    return EMPTY_CHANNELS;
  }
};

If you're not sure what the deal is with nextToken and how to use it, please refer to my
earlier blog post on pagination and sorting with AWS Amplify.

Mocking models will give us confidence that the app works IF the Amplify API works as expected and that should be enough for the integration tests.

In addition to the models, dependencies that rely on browser features unavailable in JSDOM should also be mocked. The only such dependencies here are react-intersection-observer, which relies on the IntersectionObserver API, and next/router, which returns a null router in the JSDOM environment. Mocking the former is straightforward since it's a React hook, and the latter even simpler since it's just a useContext call.

Mocking useRouter from next/router

If you look at the code of useRouter, it's only a React.useContext call to the router context :

import { RouterContext } from "next-server/dist/lib/router-context";
export function useRouter() {
  return React.useContext(RouterContext);
}

So we won't need to mock the useRouter with Jest, we just have to wrap our tests in a new RouterContext.Provider and the children components will get a custom router injected per test.

import { RouterContext } from "next-server/dist/lib/router-context";

render(
  <RouterContext.Provider
    value={{
      pathname: "/",
      push: jest.fn()
      //...
    }}
  >
    <App />
  </RouterContext.Provider>
);

And now the App will get access to the context provided object above when calling useRouter().

Make sure to read the React docs on Context if you haven't worked with it before.

Mocking react-intersection-observer

Mocking npm dependencies with Jest is very straight-forward :

  1. Create a folder called __mocks__ in the root directory.
  2. Add a file called react-intersection-observer.ts.
  3. Inside it mock the behavior of the module.

In __mocks__/react-intersection-observer.ts.

import * as React from "react";

export const useInView = jest.fn().mockImplementation(() => {
  return [React.useRef(), true];
});

export default {
  useInView
};

jest.fn() is a nice Jest utility function to create customizable, overridable and inspectable mock functions.
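To demystify it, here's a hand-rolled sketch of the idea behind jest.fn() (this is not Jest's actual implementation): a function that records its calls and whose behavior can be swapped after the fact.

```typescript
type AnyFn = (...args: unknown[]) => unknown;

// Not Jest's real code - just an illustration of what a mock function does.
const mockFn = (impl: AnyFn = () => undefined) => {
  const calls: unknown[][] = [];
  let current = impl;
  const fn = (...args: unknown[]) => {
    calls.push(args); // every call is recorded for later inspection
    return current(...args);
  };
  return Object.assign(fn, {
    mock: { calls },
    // Swap the behavior, like jest.fn().mockImplementation(...)
    mockImplementation(next: AnyFn) {
      current = next;
    }
  });
};
```

Jest's version adds much more (call ordering, return-value tracking, matchers), but the mechanics are the same.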

An example test for a component using useInView would look like this :

The component :

import * as React from "react";
// When running this code in our tests, the import will be replaced with the code from __mocks__/react-intersection-observer
import { useInView } from "react-intersection-observer";

export const Comp = () => {
  const [ref, inView] = useInView();
  return <div ref={ref}>{inView ? "Visible" : "Hidden"}</div>;
};

The test :

import * as React from "react";
import { render } from "@testing-library/react";

import { useInView } from "../__mocks__/react-intersection-observer";
import { Comp } from "../components/Comp";

describe("Comp with use-in-view", () => {
  test("is displayed when inView true", () => {
    useInView.mockImplementation(() => {
      return [React.useRef(), true];
    });
    const { getByText } = render(<Comp />);
    getByText("Visible");
  });
  test("is hidden when inView false", () => {
    useInView.mockImplementation(() => {
      return [React.useRef(), false];
    });
    const { getByText } = render(<Comp />);
    getByText("Hidden");
  });
});

Read more about @testing-library/react here.

Testing the App with Mocked Models

Mocking user modules with Jest is similar to mocking node modules :

  1. Create a folder called __mocks__ in the same directory of the file or directory you want to mock.
  2. Inside __mocks__ add a file with the same name as the file you want to mock.
  3. If the test code uses the mock too, then set it up before running the test by calling jest.mock('./path/to/module')

Models that interact with the Amplify API will return either a Promise (for queries and mutations) or an Observable (for subscriptions).

Once the promise resolves or the observable emits a value we'll update the state to reflect the changes. For example, when getChannels resolves, the app code will trigger a state update to show the new data.

The UI of an app will tend to look different before and after these promises/observables resolve or emit. It would be nice to be able to run assertions both before and after that happens.

const { queryAllByLabelText } = render(<Component />);

// before getChannels resolves
expect(queryAllByLabelText("channel").length).toEqual(0);
// Do something here 👇 to resolve getChannels
// ...
// after getChannels resolves
expect(queryAllByLabelText("channel").length).toEqual(4);

To do that we'll need to provide custom mocks per test or test suite to those promises and observables.

Promise Returning Methods

The models' mocks are simple jest mock functions. It is left to the test suite to provide the right implementation and data.

For example, the getChannels mock is a one-liner in src/models/__mocks__/Channels.ts :

export const getChannels = jest.fn();

In __tests__/channels.test.tsx we'll provide the right behavior for this mock before rendering our component :

import * as React from "react";
import { act } from "react-dom/test-utils";
import { render } from "@testing-library/react";
import { getChannels } from "../src/models/__mocks__/Channels";

// Tell Jest to use the mocks instead of the real models
jest.mock("../src/models/Channels");

const dataBank = {
  channels: () => [
    {
      id: "channel-1"
      //,...otherFields
    }
  ]
};
type TestUtils = ReturnType<typeof render>;

const selectors = {
  channelList: (testUtils: TestUtils) => testUtils.getAllByTestId("Channel Card")
};

describe("channels", () => {
  let resolveGetChannels;
  getChannels.mockImplementation(() => {
    return new Promise(resolve => {
      // Now a test can resolve getChannels whenever and with any data
      resolveGetChannels = resolve;
    });
  });
  test("works", async () => {
    const testUtils = render(<Channels />);

    // Expect getChannels to be called ( it's called on mount )
    expect(getChannels).toBeCalled();

    // And getChannels hasn't resolved yet because we haven't called resolveGetChannels
    expect(() => {
      selectors.channelList(testUtils);
    }).toThrow();

    // Wait for promise to resolve and ui to update
    await act(async () => {
      resolveGetChannels(dataBank.channels());
    });

    // Make sure that channels are visible
    expect(selectors.channelList(testUtils).length).toEqual(1);
  });
});

If you're not sure what act is, or what it's doing then read this excellent explainer by @threepointone

Observable Returning Methods

Like the promise-returning models, we start by defining the method as :

export const onCreateChannel = jest.fn();

And we'll define the right implementation in the test suite.

For GraphQL subscriptions, the AWS Amplify API library returns an Observable. The library uses zen-observable under the hood, but that's just an implementation detail: we can use RxJS or any other Observable implementation to mock the return type.

If you haven't worked with RxJS or Observables, you just need to think of an Observable as a Promise that

  1. Can resolve more than once.
  2. Can be listened to using subscribe instead of then.

This is a simplification based on what we need for this post only.
For a more in-depth explanation, check out Andre Staltz's The introduction to Reactive Programming you've been missing.

// Creating a promise that resolves after {ms}ms
const delay = ms => {
  return new Promise(resolve => {
    setTimeout(resolve, ms);
  });
};
// Creating an observable that emits every {ms}ms
const interval = ms => {
  return new Observable(observer => {
    setInterval(() => observer.next(), ms);
  });
};

// Getting the resolved value from a promise
// Fires only once
delay(10).then(value => {});

// Getting the emitted values from an observable
// Fires indefinitely
interval(1000).subscribe(value => {});

In our tests we're going to want to hijack the observer.next method and give it to an individual test to invoke whenever they want :

import * as React from "react";
import { act } from "react-dom/test-utils";
import { Observable } from "rxjs"; // or 'zen-observable'
import { render } from "@testing-library/react";

import { onCreateChannel } from "../src/models/__mocks__/Channels.ts";

const dataBank = {
  channel: () => ({
    id: "channel-1"
    //,...otherFields
  })
};

describe("channels", () => {
  let emitOnCreateChannel;
  onCreateChannel.mockImplementation(() => {
    return new Observable(observer => {
      // Now a test can emit new channels whenever and with any data
      emitOnCreateChannel = v => observer.next(v);
    });
  });
  test("works", () => {
    const { getAllByTestId } = render(<Channels />);
    // Expect onCreateChannel to be called ( it's called on mount )
    expect(onCreateChannel).toBeCalled();
    // The list of channels should be empty before data is fetched with models
    expect(() => {
      getAllByTestId("Channel Card");
    }).toThrow();

    // Wait for the observer to emit and ui to update
    act(() => {
      emitOnCreateChannel(dataBank.channel());
    });

    // Make sure that the added channel is visible
    expect(getAllByTestId("Channel Card").length).toEqual(1);
  });
});

You can see a lot more of these tests here.

End to End Tests

We will be using Cypress for our E2E tests because of its (in my opinion) better development experience, but if you need to run your tests in multiple browsers or don't particularly like Cypress, then TestCafe might be a better match for you.

Cypress running tests

Preparing the test environment

We'll mock the whole Amplify API using the Amplify CLI's built-in mock command.

Make sure the amplify version you have is >= 1.11.0 (check with amplify --version) and that you have Java installed (DynamoDB Local, used by the API mock, is a Java application).

And in an initialized amplify project run : amplify mock api

This will create a replica of your app's cloud environment on your local machine and update the app configuration to point to it (by updating src/aws-exports.js).

After running this command we can start the app (npm run dev) and it will work exactly the same way it did before but will be connected to a local database instead of a remote one.
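For reference, after running the mock command the GraphQL endpoint in the generated src/aws-exports.js points at the local mock server, along these lines (illustrative fragment only; the exact fields vary by project):

```javascript
// Illustrative fragment - your generated file will contain more fields.
const awsmobile = {
  // While mocking, AppSync requests go to the local mock server,
  // which listens on port 20002.
  aws_appsync_graphqlEndpoint: "http://localhost:20002/graphql",
  aws_appsync_authenticationType: "API_KEY",
  aws_appsync_apiKey: "da2-fakeApiId123456"
};
export default awsmobile;
```

That port 20002 endpoint is the same one the e2e scripts below wait on before starting Cypress.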

Installing Cypress with TypeScript support is straight-forward :

  1. Install Cypress and initialize it : yarn add -D cypress && yarn cypress --init
  2. Install add-typescript-to-cypress : yarn add -D @bahmutov/add-typescript-to-cypress
  3. Add TypeScript tests to the cypress/integration/ directory

Adding Tests

E2E tests should behave like a user going through the app.
We will use @testing-library/cypress to share code (UI selectors) between the Cypress and Jest tests. An example of a Cypress test suite making sure that a user can read and edit their profile information looks like this :


// Note that the code for our selectors is almost identical to the selectors used with Jest
// This is thanks to @testing-library/react & @testing-library/cypress 
// Profile selectors
const profile = {
  form: (cypress = cy) => cypress.getByLabelText("Profile Form"),
  submit: () => cy.getByLabelText("Profile Form Submit Button"),
  username: () => cy.getByLabelText("Username"),
  bio: () => cy.getByLabelText("Bio"),
  url: () => cy.getByLabelText("Url")
};

// Header selectors
const header = {
  root: () => cy.getByLabelText("Header Navigation").should("be.visible"),
  me: () =>
    header
      .root()
      .within(() => cy.getByText("My Profile"))
      .should("be.visible"),
  channels: () =>
    header
      .root()
      .within(() => cy.getByText("Channels"))
      .should("be.visible")
};

describe("My Profile", () => {
  beforeEach(() => {
    cy.visit(BASE_URL);
  });
  afterEach(() => {
    // For video to better capture what happened
    cy.wait(1000);
  });
  it("Can visit profile and set information", () => {
    const user = {
      name: "Test username",
      url: "https://test-url.test",
      bio: "Bio Test @ Test BIO"
    };
    header.me().click();
    cy.location("href").should("contain", "/me");
    profile.username().type(`${user.name}{enter}`);
    cy.title().should("contain", `${user.name}'s Profile`);
    profile.bio().type(`${user.bio}{enter}`);
    profile.url().type(`${user.url}`);
    profile.submit().click();

    // Make sure data is persisted between sessions
    cy.reload();
    profile.username().should("contain.value", user.name);
    profile.bio().should("contain.value", user.bio);
    profile.url().should("contain.value", user.url);
  });
});

You can see more TypeScript Cypress tests here.

Adding Test Scripts to package.json

Recap of the scripts used to run our different tests (note that the e2e scripts assume wait-on and kill-port are installed as dev dependencies) :

{
  "scripts": {
    "test:static": "yarn lint && yarn tsc",
    "test:jest": "yarn jest",
    "test:e2e": "(amplify mock api &) && wait-on http-get://localhost:20002 && kill-port 3000 && (yarn dev &) && wait-on http-get://localhost:3000 && cypress run --env PORT=3000",
    "test:e2e:dev": "(amplify mock api &) && wait-on http-get://localhost:20002 && kill-port 3000 && (yarn dev &) && wait-on http-get://localhost:3000 && cypress open --env PORT=3000",
    "test": "yarn test:static && yarn test:jest"
  },
  "hooks": {
    "pre-commit": "prettier --write \"src/*.ts\" \"src/**/*.ts*\"",
    "pre-push": "yarn test"
  }
}

Running Tests from the Amplify Console on Every Commit

We just need to tell Amplify Console to run our tests before deploying on every commit.

To do that we'll add the following amplify.yml

version: 0.1
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        # This makes sure that the commit is not deployed if the tests fail.
        - yarn run test && yarn run build
  artifacts:
    baseDirectory: build
    files:
      - "**/*"
  cache:
    paths:
      - node_modules/**/*

Wrapping up

We've added static, unit, integration and end-to-end tests to an existing chat app that uses the Amplify API, ran them with git hooks before committing and pushing our code, and ran them in the cloud before deploying with Amplify Console.

If you want to dive deeper make sure to clone the codebase and experiment with the Jest and Cypress tests locally.

Cheers !

Currently I'm stuck trying to figure out how to do my integration tests using amplify and cypress. If I am always using the online DB is no problem but ideally I'd like to take advantage of the cy.server and cy.route for keeping control of the data. I have found some workarounds online but not a clean solution yet. Do you have an opinion on how to go about it? Currently I'm starting to consider to run custom scripts to populate the online DB and then do a cleanup afterwards, but sadly that would be against the point of cypress.