Hello again! This week, I am covering the basics of automated testing of Ruby on Rails applications. Testing is a huge topic, and there is a lot that I won’t cover, but for those out there who are unfamiliar with the idea of automating tests, I hope that this short article will help you to see the value of testing and inspire you to start writing automated tests too.
Why Automate Tests?
The simplest and most obvious reason that automated testing is beneficial is that it enables developers to guarantee that their code works on some level without having to open up a web browser and step through the application by hand.
Automated tests help ensure a baseline level of code quality by ensuring that all pieces of the application work as intended and work together as designed. There is no substitute for testing an application by hand, but if you want to know whether you set up your associations correctly, automated tests can save you the trouble of opening up the rails console and setting up the environment by hand.
In most cases, a suite of automated tests can run through more checks in less time than a human can by hand, saving time during initial development and catching broken code before it becomes a problem. Finally, automated tests streamline the debugging process by helping developers narrow down possible points of failure, revealing which parts of the code base function properly and which do not.
Principles for Writing Good Tests
Although tests are critical for maintaining high quality code, the fact that there are tests for the code does not prove that the code is good.
The first step to writing good tests is to write code that is testable. Testable code is predictable code. Design methods and functions to work in such a way that given the same inputs, they will always return the same outputs.
When designing methods or functions, try to use pure functions to achieve maximum predictability. A pure function is one which when given the same inputs, always returns the same outputs and has no side effects. For example f(x) is a pure function if every time x = 6 the output of f(x) is 12. On the other hand, if f(x) writes something to a file before returning 12, then it has a side effect and is not a pure function.
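Here is a small sketch of the distinction in Ruby (the method names and the `calls.log` file are just for illustration):

```ruby
# A pure function: the output depends only on the input, and nothing
# outside the function changes when it runs.
def double(x)
  x * 2
end

# An impure variant: it returns the same value, but appending to a log
# file is a side effect, so the function is no longer pure.
def double_with_log(x)
  File.write("calls.log", "double(#{x})\n", mode: "a")
  x * 2
end

double(6) # => 12, every single time
```

The pure version is trivial to test: assert on the return value and you are done. The impure version forces the test to also account for the state of the file system.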
Pure functions are ideal, but they are not always workable. Learn to recognize when it is possible to use a pure function and when it is not.
The most important attribute of testable code is that it is generally predictable. The most basic way a function is predictable is that it returns a known output assuming all the inputs are known.
Writing Rails Tests
Ruby on Rails ships with built-in test integration using a framework called Minitest. I decided to stick with Minitest for this project instead of RSpec to reduce dependencies and the amount of configuration required, though writing tests using RSpec is a similar process.
Rails generates the starting files when generating models, so if you followed along with last week’s article, then you should already have the project structure set up to write tests.
Fixtures are a key part of writing automated tests with Minitest. Fixtures give developers access to the models stored in the test database.
To write a fixture, all that is necessary is to open the appropriate fixture file for the model you want to add to and define the new fixture in YAML format. Fixtures can be found in `test/fixtures/<modelname>.yml`. Each fixture has a name followed by a list of attributes declared in the model’s schema. Each attribute is a key-value pair: the key is the name of the attribute, and the value of the pair is the value of the attribute. In YAML, keys and values are separated by colons.
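For example, a user fixture in `test/fixtures/users.yml` might look like this (the attribute values here are placeholders, not taken from the real project):

```yaml
# test/fixtures/users.yml
demo:
  username: demo
  name: Demo User
  email: demo@example.com
```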
In the tests, each fixture file is given an accessor method corresponding to the name of the model: fixtures for the `User` model are accessed using the `users` method provided by the fixtures library. To get a specific fixture, pass the name of the fixture as a symbol, so to get a user fixture named “demo”, you would write something like `user = users(:demo)`. We’ll need several fixtures for the tests below.
Writing the Tests
Let’s start writing tests by writing tests to confirm that our validations and associations are set up properly. I won’t detail all the tests I’ve written because there are many, but if you are interested, you can find them all here.
The first test is going to make sure that valid objects are saved as expected:
```ruby
# test/models/user_test.rb
test "Saves valid objects" do
  user = User.new(
    username: "jon",
    name: "Jon doe",
    email: "email@example.com",
    password: "nobody knows"
  )
  assert_equal true, user.save
end
```
At the top, the `test` line declares a new test and gives it an identifying description. The most important line in the test is the `assert_equal` line: the test fails if the second argument does not equal the first. The first argument of `assert_equal` is the expected outcome, and the second argument is the actual result being tested. In our case, we expect `user.save` to return `true`, which means that the new user object was saved successfully. If the new user were invalid, `user.save` would return `false` instead. Minitest has many kinds of assertions built in; what they are and what they do can be found here.
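A few of the built-in assertions are shown below in a self-contained Minitest file (this is a standalone illustration, not part of the project’s test suite):

```ruby
require "minitest/autorun"

class AssertionExamples < Minitest::Test
  def test_common_assertions
    assert 1 < 2                                # passes when the value is truthy
    refute 2 < 1                                # passes when the value is falsy
    assert_equal 4, 2 + 2                       # expected first, actual second
    assert_nil [].first                         # passes when the value is nil
    assert_includes [1, 2, 3], 2                # collection must contain the element
    assert_raises(ZeroDivisionError) { 1 / 0 }  # block must raise the given error
  end
end
```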
When writing tests, it is a good idea to write a test that you know will fail before writing one that passes; this ensures that your new test case actually tests what you intend it to test. Writing a failing test first also lets you see that the test is really running. A common practice in Test-Driven Development is to write the tests before writing the code itself, which proves that the test works and, once the test passes, that the code works. I don’t have space to go into depth on Test-Driven Development here, but a quick Google search will turn up useful articles detailing why and how to practice it.
Once you’ve written your tests, run them with `rails test`.
Setting up Travis CI
Travis is a continuous integration/continuous deployment service that has tight integration with GitHub. Continuous integration is the process of combining local changes in an application with the master repository on a continual basis. Consistently running tests when new changes are added enables developers to catch breaks in code before they become a serious problem.
Setting up Travis causes the tests to run on every commit, giving real-time feedback if anything fails.
Travis has a free open source option, and developers can sign up using their GitHub account.
Once signed in, click the plus button next to the My repositories tab and find the repo for the forum app. Select it and then click Activate to make Travis track it.
In order to have Travis do anything with the repo, though, we’ll need to add a `.travis.yml` file in the root project directory.
In our case it should look like this:
```yaml
language: ruby
rvm:
  - 2.7
# Everything after this point is necessary for Travis to work with Postgres
services:
  - postgresql
before_script:
  - psql -c 'create database travis_ci_test;' -U postgres
```
Because we’re using Postgres as the database engine, we have to change the `database.yml` file to tell Rails which database to use for testing. Open `config/database.yml` and change the test section to the following:
```yaml
# config/database.yml
test:
  <<: *default
  adapter: postgresql
  database: travis_ci_test
```
That should be all that it takes to set up Travis!
If you take a look at the tests for the user model, you will see that I wrote tests to make sure the format validations work:
```ruby
test "will not save a user with an invalid username" do
  user = User.new(
    username: 'jon++++. cool',
    name: 'jon doe',
    email: 'firstname.lastname@example.org',
    password: 'nobody knows'
  )
  assert_equal false, user.save
end

test "will not save a user with an invalid email address" do
  user = User.new(
    username: 'jon',
    name: 'jon doe',
    email: 'hello@email@example.com',
    password: 'nobody knows'
  )
  assert_equal false, user.save
end
```
Here, I made sure that all the data is valid except the attribute under test, to eliminate the possibility that another attribute causes the save to fail. If you simply copy and paste these tests and then run the suite, you will find that these two tests fail. In other words, by running tests, we were able to determine that the validations for these attributes do not work correctly: they validate things that they should not!
After doing some digging, I discovered that these tests fail because the `format` validation considers the attribute valid if the regex matches anywhere in the string, and both of these regexes match part of the attribute without matching the whole thing.
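The behavior is easy to demonstrate in plain Ruby (the patterns below are simplified stand-ins for the real validation regexes, not the ones from the project):

```ruby
loose    = /[a-z0-9]+/        # matches anywhere in the string
anchored = /\A[a-z0-9]+\z/    # must match the entire string

# The invalid username still "matches" because "jon" satisfies the loose pattern:
"jon++++. cool" =~ loose      # => 0 (match found at index 0)

# Anchoring the regex to the start and end of the string rejects it:
"jon++++. cool" =~ anchored   # => nil (no match)
```

This is also why Rails recommends anchoring format-validation regexes with `\A` and `\z` rather than `^` and `$`, which only match line boundaries.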
In order to solve this problem, I wrote custom validations which you can find here.
By using tests, I was able to discover a flaw in the code that had passed completely unnoticed in the last post!
I hope that you found this article instructive. If you have any questions about anything, feel free to ask!