Testing your application has indisputable long-term benefits. However, when starting a new project, it can sometimes seem difficult or feel like it’s slowing us down. The longer you wait to take it seriously, the harder it becomes to set up. Wouldn’t it be great to have a fast, frictionless way of adding tests to resolve this dilemma?
Different companies approach testing in different ways. Some have remote QA teams doing all the work manually, some pursue a fully automated approach, and others record their tests from the browser. The recording approach has a lot of benefits, but one disadvantage is that you're always reliant on DOM artefacts, which makes the tests hard to maintain. Test maintenance is a well-known pain point as a project grows and evolves. Most of these tools are framework-agnostic, which is nice, but we believe there's value in leveraging a specific framework to enable further automation and keep the tests maintainable.
Writing tests can feel repetitive, which is often a sign there are parts of it we can automate. Simulating user action sequences by writing long test files doesn’t feel intuitive. Why can’t we just record the journeys and verify that our code changes haven’t broken the flow?
At Prodo, one of our key principles is “make the right thing to do the easiest thing to do”. So we built a fast, frictionless way of recording tests into the initial prototype of our framework. Re-implementing and improving it in Prodo is currently a work in progress. Feedback is more than welcome; please let us know what you think.
Stories
In the initial prototype for Prodo, we created a concept of “stories”. It was inspired by Storybook, which is great for visualising tests, but still requires a bit of manual effort to use, and is further complicated if you combine it with frameworks such as Redux.
In Prodo, a story is basically your app with a specified state, and optionally a sequence of actions that brought it there. It’s useful for quickly visualising what your users are likely to see and experience.
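As a rough sketch, a story can be thought of as a plain data structure: a named state, plus an optional action sequence. The type and field names below are illustrative, not Prodo's actual API:

```typescript
// Illustrative sketch only: a story is a named app state, optionally with
// the sequence of actions that produced it.
interface Story<State> {
  name: string;
  initialState: State;
  actions?: { name: string; payload?: unknown }[];
}

// A toy Todo-list state shape, for illustration.
interface TodoState {
  todos: { text: string; done: boolean }[];
}

const emptyList: Story<TodoState> = {
  name: "Empty list",
  initialState: { todos: [] },
};

const manyItems: Story<TodoState> = {
  name: "List with many items",
  initialState: {
    todos: [
      { text: "Buy milk", done: true },
      { text: "Write tests", done: false },
    ],
  },
};
```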
For example, in a Todo list you might have stories such as “Empty list” or “List with many items”. In real world applications, common basic stories might be “Logged out” and “Logged in”. You could also have stories per component. For instance, a Todo list item could be “Done”, “Not done”, or “Being edited”. Seeing these side by side can help you ensure your code changes are not breaking the user experience.
You could view, create and update these stories in our developer tools. Alternatively, you could write them as code in your editor, if you prefer.
Testing appearance
“Static” stories (which have state, but no action sequence) can then easily be tested against some basic requirements: does the story render, i.e. not throw an error? Is the (HTML or PNG) snapshot of the story still the same as before?
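In spirit, such a test boils down to the sketch below. The renderer and helper here are hypothetical stand-ins, not the framework's API:

```typescript
// Hypothetical helper: render a story's state and report whether it rendered
// without throwing, capturing the output for snapshot comparison.
type Renderer<State> = (state: State) => string;

function testStoryRenders<State>(
  render: Renderer<State>,
  state: State
): { rendered: boolean; html?: string } {
  try {
    return { rendered: true, html: render(state) };
  } catch {
    return { rendered: false };
  }
}

// A toy renderer for a Todo-list state, purely for illustration.
const renderTodos = (state: { todos: { text: string; done: boolean }[] }) =>
  `<ul>${state.todos.map((t) => `<li>${t.text}</li>`).join("")}</ul>`;

const result = testStoryRenders(renderTodos, {
  todos: [{ text: "Buy milk", done: false }],
});
// result.html can then be compared against a stored snapshot,
// much like Jest's toMatchSnapshot.
```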
You could easily generate those from the developer tools by ticking a box:
Testing behaviour
“Dynamic” stories, which consist of an initial state and a sequence of actions that then leads to a final state, are a bit more interesting. With these, you could test user flows and verify that actions still result in the same final state, even if you’re changing the underlying code. We called this a “state comparison” test.
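A minimal sketch of the idea: replay the story's recorded actions from its initial state and deep-compare the result with the final state saved alongside the story. The action and state shapes here are illustrative assumptions, not Prodo's actual format:

```typescript
// Sketch of a "state comparison" test. Actions are modelled as pure
// functions over state for illustration.
interface TodoState {
  todos: { text: string; done: boolean }[];
}
type Action = (state: TodoState) => TodoState;

const newTodo = (text: string): Action => (s) => ({
  todos: [...s.todos, { text, done: false }],
});
const toggleTodo = (index: number): Action => (s) => ({
  todos: s.todos.map((t, i) => (i === index ? { ...t, done: !t.done } : t)),
});

// Replay: fold the recorded action sequence over the initial state.
const replay = (initial: TodoState, actions: Action[]): TodoState =>
  actions.reduce((state, action) => action(state), initial);

// The final state saved when the story was recorded.
const savedFinalState: TodoState = {
  todos: [
    { text: "Buy milk", done: true },
    { text: "Write tests", done: false },
  ],
};

const replayed = replay({ todos: [] }, [
  newTodo("Buy milk"),
  newTodo("Write tests"),
  toggleTodo(0),
]);

// The test passes if the replayed state matches the saved one.
const matches = JSON.stringify(replayed) === JSON.stringify(savedFinalState);
```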
To demonstrate, I’ve saved a story of a user adding four todo list items and ticking off one, and here’s what replaying the actions looks like:
In addition to replaying the whole story, there is the option of time travelling between actions and replaying from a chosen point:
Let’s say I’m now working on my Todo list app, and I’ve accidentally modified my newTodo action code to add all the items in uppercase. My state comparison test will now fail, and when I start investigating and replay the story action sequence, I will quickly see why:
You could also integrate these tests with your CI. We’ve toyed with the idea of building a GitHub PR bot that would show you the before and after.
Generating actual code
One downside of browser-based tests is that they can be quite fragile. For example, if you change the class name or some text inside a button, that can easily break the test logic. This is one of the reasons why our goal is to record tests using the devtools and then generate maintainable, stable test code. Since the generated code will be in TypeScript, it will help you flag issues and fix tests when you refactor your code. With readable code files, it'll be easy for developers to extend the tests and add more complex logic.
In our prototype, we generated JSON objects for this purpose. However, we realized this had some downsides, such as the fact that we couldn't use TypeScript to catch issues in the tests. Here is an example of a story which adds an item to the Todo list:
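(The exact JSON format isn't reproduced here; the fragment below is a hypothetical reconstruction of its shape, with illustrative field names.)

```json
{
  "name": "Add an item to the todo list",
  "initialState": { "todos": [] },
  "actions": [{ "name": "newTodo", "args": ["Buy milk"] }],
  "finalState": {
    "todos": [{ "text": "Buy milk", "done": false }]
  }
}
```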
In the official version, we are planning to generate Jest files, which can be typed and run as easily as any other tests. And here's what the generated test code might look like:
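The sketch below suggests the shape such a generated file could take. The action helper is an assumption, and `test`/`expect` are minimal stand-ins so the sketch is self-contained; in real generated output they would come from Jest, and the store and actions would be imported from the app code:

```typescript
// Minimal stand-ins for Jest's test/expect, purely so this sketch runs
// on its own. A generated file would use Jest's real globals instead.
const test = (_name: string, fn: () => void) => fn();
const expect = <T>(actual: T) => ({
  toEqual: (expected: T) => {
    if (JSON.stringify(actual) !== JSON.stringify(expected)) {
      throw new Error(
        `expected ${JSON.stringify(expected)}, got ${JSON.stringify(actual)}`
      );
    }
  },
});

interface TodoState {
  todos: { text: string; done: boolean }[];
}

// Assumed action shape: a pure function over state, as recorded in the story.
const newTodo = (text: string) => (s: TodoState): TodoState => ({
  todos: [...s.todos, { text, done: false }],
});

// The generated test body: replay the recorded story and assert the
// final state matches the one saved with it.
test("story: add an item to the todo list", () => {
  let state: TodoState = { todos: [] };
  state = newTodo("Buy milk")(state);
  expect(state).toEqual({ todos: [{ text: "Buy milk", done: false }] });
});
```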
What’s next?
In the coming months, we are planning to release similar features in Prodo with a more intuitive interface and an improved user experience. If you liked any of the features in particular, you can join our Slack community to let us know and help us prioritise accordingly. You can also check out our open source GitHub repo (consider giving it a star if you like the direction we’re taking).