
Ian Johnson

Originally published at tacoda.github.io

Mocking the File

Last time, we discovered that we needed a more robust testing object for our file dependency.

Why? Because in our test we are reading from our fake todo.txt file in order to write to our fake done.txt file. But we never defined write on our FakeFile class. We have a few options here.

First, we could just add another method stub to our FakeFile. But, as discussed before, this becomes hard to scale as we grow our test suite. What do I mean when I say method stub? I'm referring to this:

class FakeFile:
    def write(self, message):
        # Stub: intentionally does nothing, so tests never touch a real file.
        pass

Notice that this write method overrides the behavior of writing to a real file; it simply does nothing. That makes write a method stub: a method we stub out for testing purposes. What's the point? We never have to write to a real file.

But we've seen this is not enough. Instead of adding another method stub for read, let's mock the file. How is mocking different? Mocking allows us to define custom behavior, which lets us test more effectively because we can easily set up the environment we need before running the test.
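
To make that concrete, here is a tiny sketch (not our final fixture, and the data is made up) of what a Mock gives us over a hand-rolled stub: we can declare behavior up front and assert on the calls afterwards.

from unittest.mock import Mock

# A fake file with custom behavior: read() returns canned data,
# and every call is recorded so we can assert on it later.
fake_file = Mock()
fake_file.read.return_value = "1 Buy Milk\n"

assert fake_file.read() == "1 Buy Milk\n"   # behaves how we told it to
fake_file.write("2 Buy Bread\n")            # no real file is touched
fake_file.write.assert_called_once_with("2 Buy Bread\n")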

Is this overkill? This is a lot of work to fake a file.

This may seem like a lot of work, but it's actually making our tests, and our code, more predictable and easier to change. If you can get away without a mock, you usually want to do so. The thing that makes a file a good candidate to mock is that it is I/O. Good candidates to mock in your tests usually lie at the boundaries of your domain code. When you see I/O, this is a dead giveaway.

Here are more examples of useful things to mock when testing:

  • Files
  • Databases
  • Network Traffic
  • APIs and Services
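
To make the boundary idea concrete, here is a small sketch; the weather_api and suggest_outfit names are made up purely for illustration. The domain logic runs for real, while the service at the boundary is a Mock.

from unittest.mock import Mock

def suggest_outfit(weather_api):
    # Domain logic we actually want to test.
    return "t-shirt" if weather_api.current_temp() > 65 else "jacket"

# The service at the boundary is mocked, so no network traffic happens.
weather_api = Mock()
weather_api.current_temp.return_value = 72

assert suggest_outfit(weather_api) == "t-shirt"
weather_api.current_temp.assert_called_once()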

If I fake all of my assets, how do I know my code will work?

Because fake data supports real tests. Real tests exercise real code. The focus of your tests should be your domain code. If you depend on files for your implementation, then you are suddenly testing two things: your domain code and files.

Adding a Mock

Add the pytest-mock library.

poetry add pytest-mock

Import Mock into the test file.

from unittest.mock import Mock

Normally, we would do something like the following to use Mock.

@pytest.fixture
def todo_file(mocker):
    mock = Mock()
    mocker.patch('open', return_value=mock)
    return mock

However, this will raise an error. Because we are patching the built-in function open, we need to use the library a little differently. As a matter of fact, it has a specific API for doing exactly this.

First, we update the import to get the functions we need.

from unittest.mock import mock_open, patch

Now let's update the fixture.

@pytest.fixture
def todo_file():
    mock = mock_open()
    # Patch open in this test module's namespace; create=True is needed
    # because the module does not define an open attribute of its own.
    return patch('{}.open'.format(__name__), mock, create=True)
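
If you have not seen mock_open before, here is a small standalone sketch of what it gives us (the file name and data are made up): the patched open hands back a fake handle that never touches the disk, and both the open calls and the writes are recorded.

from unittest.mock import mock_open, patch

m = mock_open(read_data="1 Buy Milk\n")
with patch("builtins.open", m):
    with open("todo.txt") as f:
        assert f.read() == "1 Buy Milk\n"   # canned data, no real file
    with open("todo.txt", "a") as f:
        f.write("2 Buy Bread\n")            # recorded, never written to disk

m.assert_any_call("todo.txt", "a")               # the open() calls were captured
m().write.assert_called_with("2 Buy Bread\n")    # and so were the writes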

Run the tests.

poetry run pytest
# tests/test_task.py .FF

We end up with two failures, but that's okay. Look at the messages.

FAILED tests/test_task.py::test_adding_task - AttributeError: '_patch' object has no attribute 'write'
FAILED tests/test_task.py::test_marking_task_done - AssertionError: assert 'Buy Bread' in 'Error: todo #1 does not exist.\n'

Here we have two issues. The first is that we are not mocking the write method. Let's focus on just the test_adding_task test for now. Here's a quick test:

def test_adding_task(todo_file, capsys):
    with todo_file:
        with open('todo.txt', 'a') as f:
            task = Task(f, f)
            task.add("Buy Bread")
    captured = capsys.readouterr()
    assert "Buy Bread" in captured.out

Notice how ugly things have gotten in our test? This is a code smell. Why did this happen? Because adding a task doesn't really belong to a Task, but a TaskList. As a consequence, there is a lot of setup that we have to do in the test to make the test use the correct file. Really, this test should be testing a TaskList.

To show that this is actually an organizational problem, and not a lexical one, let's update the test for listing tasks:

def test_listing_tasks(todo_file, capsys):
    with todo_file:
        with open('todo.txt', 'a') as f:
            task = Task(f, f)
            task.ls()
    captured = capsys.readouterr()
    assert "[1] Buy Milk" in captured.out

Again, notice the extra boilerplate. We could DRY up the duplication (a sketch of what that might look like follows below), but both of these tests are really telling us to test this functionality at a higher level, so that is what we will do.
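
For the record, DRYing it up could look something like this sketch; the open_todo_file name is just illustrative. The fixture enters the patch and hands each test an already-opened fake file.

@pytest.fixture
def open_todo_file(todo_file):
    # Enter the patch and open the fake file once, so tests don't have to.
    with todo_file:
        with open('todo.txt', 'a') as f:
            yield f

Even with a helper like that, the tests would still be pointed at the wrong class, so let's move the behavior instead.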

Moving Behavior to the TaskList

Make the test file.

touch tests/test_task_list.py

tests/test_task_list.py

import pytest
from unittest.mock import mock_open, patch

@pytest.fixture
def todo_file():
    mock = mock_open()
    return patch('{}.open'.format(__name__), mock, create=True)

def test_adding_task_to_list(todo_file, capsys):
    with todo_file:
        with open('todo.txt', 'a') as todo:
            with open('done.txt', 'a') as done:
                task_list = TaskList(todo, done)
                task_list.add("Buy Bread")
    captured = capsys.readouterr()
    assert "Buy Bread" in captured.out

Let's run only this test suite:

poetry run pytest tests/test_task_list.py
# FAILED tests/test_task_list.py::test_adding_task_to_list - NameError: name 'TaskList' is not defined

We have not yet defined the class, so let's do that.

touch todo/task_list.py

todo/task_list.py

class TaskList:
    pass

And then we add an import in the test.

from todo.task_list import TaskList

Let's run tests again.

poetry run pytest tests/test_task_list.py
# FAILED tests/test_task_list.py::test_adding_task_to_list - TypeError: TaskList() takes no arguments

Progress! Now the test is failing because TaskList doesn't accept the correct number of arguments. Let's fix that.

todo/task_list.py

class TaskList:
    def __init__(self, todo_file, done_file):
        pass

Let's run tests again.

poetry run pytest tests/test_task_list.py
# FAILED tests/test_task_list.py::test_adding_task_to_list - AttributeError: 'TaskList' object has no attribute 'add'

Now we are failing because we do not have an add method implemented. Let's add that.

todo/task_list.py

class TaskList:
    def __init__(self, todo_file, done_file):
        pass

    def add(self, task_description):
        pass

Let's run tests again.

poetry run pytest tests/test_task_list.py
# FAILED tests/test_task_list.py::test_adding_task_to_list - AssertionError: assert 'Buy Bread' in ''

Now we are failing because the assertion is not true; that is, the test is failing for the right reason. So let's move the test to Green. To do this, let's embrace TDD and do the simplest thing that makes it pass.

todo/task_list.py

class TaskList:
    def __init__(self, todo_file, done_file):
        pass

    def add(self, task_description):
        print('Buy Bread')

Let's run tests again.

poetry run pytest tests/test_task_list.py
# tests/test_task_list.py .

And we are Green! Now, we should refactor, but we'll hold off for now. We have work to do in the TaskList first.

Let's move the last relevant test to test_task_list.py.

tests/test_task_list.py

def test_adding_task_to_list(todo_file, capsys):
    with todo_file:
        with open('todo.txt', 'a') as todo:
            with open('done.txt', 'a') as done:
                task_list = TaskList(todo, done)
                task_list.add("Buy Bread")
    captured = capsys.readouterr()
    assert "Buy Bread" in captured.out


def test_listing_tasks(todo_file, capsys):
    with todo_file:
        with open('todo.txt', 'a') as todo:
            with open('done.txt', 'a') as done:
                task_list = TaskList(todo, done)
                task_list.ls()
    captured = capsys.readouterr()
    assert "[1] Buy Milk" in captured.out

We have only moved these tests; they are essentially the same, just using TaskList instead of Task. We will refactor to clean them up soon, but at least now they are in the proper place. To support the listing test, add an ls stub to the TaskList.

todo/task_list.py

class TaskList:
    def __init__(self, todo_file, done_file):
        pass

    def add(self, task_description):
        print('Buy Bread')

    def ls(self):
        pass

Run those tests!

poetry run pytest tests/test_task_list.py
# tests/test_task_list.py .F
# FAILED tests/test_task_list.py::test_listing_tasks - AssertionError: assert '[1] Buy Milk' in ''

Same thing here. We just need to update the result. Let's make this one go Green!

todo/task_list.py

class TaskList:
    def __init__(self, todo_file, done_file):
        pass

    def add(self, task_description):
        print('Buy Bread')

    def ls(self):
        print('[1] Buy Milk')

Let's run tests again.

poetry run pytest tests/test_task_list.py
# tests/test_task_list.py .. 

Awesome! We now have green tests, but the implementations behind them are only cosmetic. So, next up we will connect those tests to real code and deal with more mocking and test organization.
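
As a rough preview (just a sketch, not the final implementation), the real add will probably hold on to the injected files, write the new task to the todo file, and echo something back for the user:

class TaskList:
    def __init__(self, todo_file, done_file):
        # Hold on to the injected file objects instead of ignoring them.
        self.todo_file = todo_file
        self.done_file = done_file

    def add(self, task_description):
        # Write to the (mocked) todo file and tell the user what happened.
        self.todo_file.write(task_description + '\n')
        print('Added todo: "{}"'.format(task_description))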

Key Takeaways

  • Mock dependencies at the boundaries of your domain code
  • If tests start to get complex, ask yourself if it belongs there
  • Don't be afraid to refactor tests -- they are code too
  • Fake data supports real tests
