For a better reading experience, you can read this article on my blog at philscode.com.
The Scenario 🖼️
So you've successfully released a dozen popular JavaScript games over the past few years and just as you're about to kick back, enjoy the weekend and let the revenue flow in, a new Tweet appears in your feed.
Not great news to hear on a Friday. Okay, I'll just check how many of my games are affected...
😲😲😲😰😰
No need to panic. I'll just update my packages to the latest version and deploy them.
✋ Wait, just a second!
What about all those scenarios in each game? I haven't played these games in years and there are hundreds of test scenarios. I could rely on the unit tests and hire QA staff to manually test all my games, though that would cost me all the hard-earned cash my games are generating.
Manual testing is slow and error-prone, and it absolutely does not scale. There must be a better way to handle this!
The Truth 🧑‍⚖️
The above scenario may seem far-fetched, but a very similar email found its way to the inbox of Protopop Games (on a Friday, no less!). See the Tweet below.
As mentioned in the Tweet above, Apple has a new App Store Improvement Plan that essentially means they are removing applications that no longer receive updates. For some applications this does indeed make sense; for games, however, it often does not.
At the end of the day, it does not matter what you or I think - if Apple, Google, Steam or any other platform wants to enforce new rules and make you update all of your applications, they can do just that. Our best defence against these scenarios? Robust pipelines that not only unit test but also E2E test our games.
The Solution 💡
To demonstrate this, I am going to develop a simple JavaScript game using Phaser and E2E test it using some advanced Cypress techniques. There will be no arbitrary waiting involved, meaning more reliable, predictable tests.
In a nutshell, these E2E tests are going to interact with the game, take a screenshot and compare the image to a known-good baseline.
For a quick glance, check below to see a few of the tests.
😎 Pretty cool, right?
These tests don't just take and compare full screenshots; instead, they compare sections of the Canvas against a known-good baseline. You definitely don't want to take too many full screenshots, as your tests will become very brittle very quickly.
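To give a feel for the shape of these tests, here is a minimal sketch. It snapshots just the canvas element rather than the full page; the `toMatchImageSnapshot` command comes from cypress-plugin-snapshots, and the key codes, fixture name and option values are assumptions for illustration.

```js
// cypress/integration/game.spec.js
describe("game screen", () => {
  it("renders the moved player against the baseline", () => {
    cy.visit("/");

    // Simulate input the way a player would: press and release the right arrow.
    cy.get("body").trigger("keydown", { keyCode: 39 });
    cy.get("body").trigger("keyup", { keyCode: 39 });

    // Compare only the canvas element against a known-good baseline image.
    // toMatchImageSnapshot is provided by cypress-plugin-snapshots;
    // the threshold tolerates tiny anti-aliasing differences.
    cy.get("canvas").toMatchImageSnapshot({
      imageConfig: { threshold: 0.01 },
    });
  });
});
```

Because the comparison is scoped to an element rather than the whole page, unrelated page chrome can change without breaking the test.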
Some images are also mocked for specific tests; however, it is a good idea to mock all images with a transparent version so you can test in isolation.
For example, the Player and the Zombies both animate, which means taking a screenshot over and over would potentially produce a different outcome each time. That would be far too unreliable and result in flaky tests, so instead the spritesheets are mocked with a solid colour. See the comparison below along with the code to mock the image.
Cypress snippet to replace the Player with a custom image:

```js
cy.intercept("/assets/img/player.png", { fixture: "mock-player.png" });
```
Yep, it's that simple!
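To take the isolation idea further, every image can be blanked out up front and only the asset under test re-mocked. A minimal sketch, assuming the game loads its art from /assets/img/ and that a fully transparent PNG lives at cypress/fixtures/transparent.png (both are assumptions about this project's layout):

```js
beforeEach(() => {
  // Blank out every image so each test only sees what it explicitly mocks.
  // cy.intercept accepts glob patterns, so one route covers all the art.
  cy.intercept("/assets/img/*.png", { fixture: "transparent.png" });

  // Re-mock just the asset under test with a visible placeholder.
  // When several intercepts match a request, the one defined last takes precedence.
  cy.intercept("/assets/img/player.png", { fixture: "mock-player.png" });
});
```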
Conclusion
In conclusion...
- Maintain robust pipelines that not only deploy but also rigorously test your applications.
When screenshot testing...
- Avoid too many large screenshots.
- Test in isolation by mocking images that are not part of the test (using transparent versions).
- Mock unpredictable spritesheets with solid-colour replacements.
- Never use explicit waits; instead, poll for a specific outcome (see the sketch after this list).
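As an example of that last point: instead of an arbitrary cy.wait(2000) before taking a snapshot, the test can poll the game itself. This is a minimal sketch that assumes the game exposes its Phaser instance as window.game and registers a scene keyed "GameScene"; both names are placeholders.

```js
// Assumes the game code does something like: window.game = new Phaser.Game(config);
cy.window()
  .its("game.scene")               // retried until the SceneManager exists on window.game
  .invoke("isActive", "GameScene") // Phaser's SceneManager.isActive(key)
  .should("be.true");              // Cypress keeps retrying until this passes or times out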
Feel free to reach out to me on Twitter - I look forward to hearing from you!
I had a lot of fun putting this together and was able to do so quickly with the help of Yannick's boilerplate, Kenny's Assets, Phaser, Cypress and cypress-plugin-snapshots.
If you liked this article check out the in-depth article I wrote on Visual Testing for Software Development.
Bonus: Play the Game 🎮
Click here to play the final result. Enjoy! 🎢
As a challenge, comment your first high score attempt below!