After a config full of "code holes" and discussion about what, if anything, we needed to add to our original stack, we were ready to start our first user story: the Landing Page.
Our initial page is designed to show the navbar along with a list of all the fruits in our database. The user story is pretty simple, written in the classic "As a, I want to, so that" format: As a user, I want to visit the landing page so that I can see a list of all current fruits. We have a context (a user), an action (visiting the landing page), and a value (seeing the current fruits).
Acceptance criteria are few and equally simple: if there are no fruits, the user should see an "Add Fruits" message; if there are fruits, the user should see them. (In retrospect, error handling should also have been an AC: if the page fails to load, an error message is displayed.)
Fruits currently have just an id, name, and description--pretty simple data. The navbar is a separate card, so here we're focusing only on the listing of fruit.
Next step: task it out. What do we need to do and what files do we need to touch to fulfill the ACs and finish the card?
With the database up, we need a fruit model (our backend is object-oriented), a fruit repository to handle mapping SQL results to the model, a fruit service to handle business logic, and a fruit controller to provide the actual endpoints and handle calls. We'll have to create directories and files for all of that in the same package. But first things first: the tests.
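Tasked out concretely, that layering might look something like the minimal sketch below. Every class and method name here is an illustrative assumption (with an in-memory stand-in for the SQL layer), not the project's actual code:

```java
import java.util.List;

// Model: plain data object with an id, a name, and a description.
class Fruit {
    final long id;
    final String name;
    final String description;

    Fruit(long id, String name, String description) {
        this.id = id;
        this.name = name;
        this.description = description;
    }
}

// Repository: maps storage (here, whatever backs findAll) to the model.
interface FruitRepository {
    List<Fruit> findAll();
}

// Service: business logic sits between the controller and the repository.
class FruitService {
    private final FruitRepository repository;

    FruitService(FruitRepository repository) {
        this.repository = repository;
    }

    List<Fruit> getAllFruits() {
        return repository.findAll();
    }
}
```

The controller would sit on top of `FruitService`, exposing it as an HTTP endpoint; it's omitted here to keep the sketch framework-free.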
The primary goal of this project is to create a simple CRUD app using TDD, or test-driven development. Here's what this means in practice:
- **We write tests first.** Tests are the first lines of code we write whenever we start a new piece of functionality. Hence . . .
- **Tests define production code.** They define what code is written in the first place. They drive the code. For example: we know our fruit cart service needs to return a fruit item with the name "apple" when the `getAllFruits()` method is called. So we write a test that calls `getAllFruits()`, with the result expected to include the string "apple". This allows us to keep results and expectations in mind and, ultimately, become more value driven. After all, if you don't know what you want your code to do, why write it in the first place?
- **Tests are just as important as production code.** We spend equal time writing good tests and good code. They're not an afterthought, and they allow us to write cleaner, more focused code in an incremental manner called . . .
- **Baby steps.** We take it one step at a time. If it's easiest to make the service pass the above test by hardcoding a return value of the string "apple", then that's what we do. The test will pass, and we can move on to the next step: refactoring.
- **Red, Green, Refactor.** Ah, the core of TDD. Write your test and watch it fail (red). Write your code and, in the easiest way possible, make the test pass (green). This may initially mean hardcoding, just like in the example above. Then refactor your code. Hardcoding values is a code smell, so in this case, you would refactor it out. Every time you make a change that could alter a return value, run the tests.
- **Change either your tests or your code, not both.** Tests can be refactored just as much as code can. But you don't want to make changes to both, run your tests, and watch them fail. Then you won't know what actually failed: the newly refactored code or the newly refactored tests. Worse, you may actually be changing what behavior your tests expect, making previously solid code fail.
- **The Testing Pyramid.** We'll be following the classic testing pyramid:
  - **Unit tests will make up the bulk of our test suite.** These tests run against discrete methods and have concrete expectations. For example, if I run the `getAllFruits()` method that gets all fruit information from the database, I should have a solid expectation of its return value (e.g. an array of fruit objects, one of which contains the name "apple"). Their advantages are that they are fast and small: when they fail, they fail quickly, so we get near-instant feedback (assuming we run our tests regularly); and when they fail, they fail for a single piece of functionality, so it's quite easy to target what line of code failed. Plus, they take fewer resources to run. JUnit with the Hamcrest library will be our friends here for the backend, along with Jest and Enzyme for the React frontend.
  - **Integration tests form our middle ranks.** These are slightly more expensive in terms of computing resources, but they test the connections between the moving parts. For instance, our controller tests are actually integration tests--when we hit an endpoint, the controller calls the service, which calls the repository, which calls the database to get the fruits. Several methods across multiple classes are called here, and we want to make sure everything works. These tests are larger, require mocking and stubbing (which we will return to in a different post), and span multiple classes. They make sure everything works together, but when they fail, they take longer to do so, and it can be hard to tell what, exactly, is failing. We'll add Mockito to JUnit for these tests.
  - **Functional or UI tests sit at the top.** They test everything--user interface, database, services, etc.--to make sure it all works together correctly. For instance, when a user lands on our page, they should see all the fruits listed. That's the frontend and backend pieces all working together for a single result. These tests take longer and may require a headless web browser to be spun up to simulate user interactions. When they fail, it can be extremely difficult to tell why. Selenium will be our tool of choice to automate these.
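To make the "red, green" steps concrete, here is the `getAllFruits()` example hand-rolled in plain Java rather than with the JUnit/Hamcrest harness, so the sketch stays self-contained. The `FruitCartService` name and the hardcoded return value are assumptions for illustration, not the project's actual code:

```java
import java.util.List;

// The test was conceived first: getAllFruits() must include "apple".
class FruitCartService {
    // "Green" in the easiest way possible: a hardcoded return value.
    // That's a code smell, so the refactor step would swap this for a
    // real repository call once the test passes.
    List<String> getAllFruits() {
        return List.of("apple");
    }
}

class FruitCartServiceTest {
    // Returns true when the service's result includes "apple".
    static boolean getAllFruitsIncludesApple() {
        return new FruitCartService().getAllFruits().contains("apple");
    }
}
```

With JUnit and Hamcrest, the same expectation would read roughly `assertThat(service.getAllFruits(), hasItem("apple"))`.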
Those are the rules! Let the (fruity) games begin!