<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Corey Scott</title>
    <description>The latest articles on DEV Community by Corey Scott (@corsc).</description>
    <link>https://dev.to/corsc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F226070%2F9944a0d1-77d5-4bf9-ac68-b985c0952983.jpeg</url>
      <title>DEV Community: Corey Scott</title>
      <link>https://dev.to/corsc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/corsc"/>
    <language>en</language>
    <item>
      <title>The What, When, Why, and How of Testing (Part 2)</title>
      <dc:creator>Corey Scott</dc:creator>
      <pubDate>Wed, 28 Dec 2022 03:52:15 +0000</pubDate>
      <link>https://dev.to/corsc/the-what-when-why-and-how-of-testing-part-2-4dhc</link>
      <guid>https://dev.to/corsc/the-what-when-why-and-how-of-testing-part-2-4dhc</guid>
      <description>&lt;p&gt;(Authors Note: The following is an extract from the Advanced Unit Testing Techniques chapter of my upcoming book: &lt;a href="https://github.com/corsc/Beyond-Effective-Go#section-2-striving-for-high-quality-code"&gt;Beyond Effective Go – Part 2 – Striving for High-Quality Code&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/corsc/the-what-when-why-and-how-of-testing-2cl7"&gt;previous post&lt;/a&gt;, we examined the Why and When of testing. In this post, we will build on that foundation and look at how much we should be testing.&lt;/p&gt;

&lt;h2&gt;How much should we test?&lt;/h2&gt;

&lt;p&gt;We should test just enough and no more. While we write tests to help us work faster, writing and maintaining tests has a cost. If we have too many tests, their costs can outweigh the value they bring.&lt;/p&gt;

&lt;p&gt;If you pushed me for a test coverage number, I’d say anything over 70% is sufficient. &lt;/p&gt;

&lt;p&gt;There are two reasons for this: &lt;/p&gt;

&lt;p&gt;First, test coverage is measured in lines of code, but the number of lines run during our tests is not what matters; behavior coverage is. For each of our code’s behaviors, we should have one or more tests that confirm it.&lt;/p&gt;

&lt;p&gt;Second, there are often times when we have code that cannot reasonably be tested, and any attempt to do so would damage the quality of the code, a phenomenon known as test-induced damage. Consider the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func GetUserAPI(resp http.ResponseWriter, req *http.Request) {
    ID := getID(req)

    user := loadUser(ID)

    payload, err := json.Marshal(user)
    if err != nil {
        resp.WriteHeader(http.StatusInternalServerError)
        return
    }

    _, err = resp.Write(payload)
    if err != nil {
        resp.WriteHeader(http.StatusInternalServerError)
        return
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Testing every single line of this code is somewhere between extremely hard and impossible. Both &lt;code&gt;json.Marshal()&lt;/code&gt; and &lt;code&gt;http.ResponseWriter.Write()&lt;/code&gt; return errors, but these errors should never happen. We could pass in a mock implementation of &lt;code&gt;http.ResponseWriter&lt;/code&gt; that returns an error, but what would we be testing? We’d be testing the mock and the fact that we handled the error, both of which fall into the “too simple to get wrong” category. Additionally, we would end up with a mock and a test that return little value but must now be maintained. This is a form of test-induced damage that we will examine more at the end of this chapter.&lt;/p&gt;

&lt;p&gt;Returning to our question of how much we should test, we should also acknowledge that tests come in many forms and consider how much time we should devote to each form. The most common forms of tests are Unit Tests, User Acceptance Tests (UAT), and End-To-End (E2E) tests. Each of these test forms has different goals, strengths, and weaknesses that we need to be mindful of when using them.&lt;/p&gt;

&lt;h3&gt;Unit Tests&lt;/h3&gt;

&lt;p&gt;Unit tests aim to confirm the existence of a particular behavior in the unit-under-test. Please note that I am choosing my words very carefully here. A unit is not necessarily a single function; it is not necessarily a single struct; a unit can be these things, but it can also be a collection of structs and functions that collaborate to add a behavior to a module or package.&lt;/p&gt;

&lt;p&gt;Let’s explore an example. Assume we have a bank package responsible for interacting with an API provided by an external company. Inside our bank package is a struct called Account with the following method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func (a *Account) Transfer(amount int, to string) error
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without looking at the implementation, we can define our expected behaviors as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When I try to transfer a negative amount, we should receive an error.&lt;/li&gt;
&lt;li&gt;When the API is down, we should receive an error.&lt;/li&gt;
&lt;li&gt;When the API returns an unexpected or garbled response, we should receive an error.&lt;/li&gt;
&lt;li&gt;When we make a valid request and the API works, we should not receive an error.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We should have at least one unit test for each of these behaviors. By doing so, we ensure that all our intended behaviors are present and document these behaviors for posterity.&lt;/p&gt;

&lt;p&gt;Returning to our definition of “units”, if all of the code required for interacting with our external API existed within a single struct, our unit tests would only involve this single struct, but what happens if our implementation looks like this?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hSsBGzjr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3qnlzs1c8s2rccaskuxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hSsBGzjr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3qnlzs1c8s2rccaskuxy.png" alt="Class diagram showing an account struct that uses a request encoder and a response decoder" width="469" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we defined our units as structs, we would have to test &lt;code&gt;Account&lt;/code&gt;, &lt;code&gt;requestEncoder&lt;/code&gt;, and &lt;code&gt;responseDecoder&lt;/code&gt; separately. This is unnecessary and wastes time and effort. Take another look at our desired behaviors above; request encoding and response decoding are not mentioned. This is not because they are not essential; it is because they are small parts of the broader behavior. By acknowledging that our three structs would not exist without each other and must collaborate to achieve our goal, we should treat them as a single unit.&lt;/p&gt;

&lt;p&gt;With this definition of unit, we can see both the strengths and weaknesses of unit tests. The main strength and weakness of unit tests is their small scope. Because the scope is small, the tests are easy to write and understand, and they serve to document and enforce the author’s intentions on a small scale. Also because of this small scope, these tests are fast to execute. Conversely, the tests’ small scope is also a weakness, as they confirm a unit’s behavior and not the behavior (or features) of the system as a whole. For this, we need to take a system-level perspective.&lt;/p&gt;

&lt;h3&gt;User Acceptance Tests&lt;/h3&gt;

&lt;p&gt;User acceptance tests focus on confirming that the system behaves as we expect it to. The main difference between unit tests and UATs is the scope of the tests. In our unit tests, we focused on an individual unit of the code, perhaps introducing mocks or stubs to isolate that unit from the others. In UATs, we are testing most (or all) of the codebase. &lt;/p&gt;

&lt;p&gt;Test scope and isolation are still vital considerations for UATs. It is important to remember that we are testing our system in isolation, so our tests should not rely on any external systems. By this, I do not mean that these tests must mock databases, filesystems, caches, or other resources that can reasonably be expected to exist in a development environment or on a CI build agent, but rather that they must mock any third-party systems. &lt;/p&gt;

&lt;p&gt;You can mock the database and caches, but it often has a terrible cost-to-value ratio.&lt;/p&gt;

&lt;p&gt;UAT scenarios should be constructed from the perspective of the system’s users with minimal understanding of the implementation details. For example, if we had a login API, the scenarios might be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When the database is running correctly, and the username and password are correct, we should receive success.&lt;/li&gt;
&lt;li&gt;When the database is down, then we should receive an error.&lt;/li&gt;
&lt;li&gt;When the database is running correctly, and the username or password is missing, we should receive an error.&lt;/li&gt;
&lt;li&gt;When the database is running correctly, and the username or password is wrong, we should receive an error.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can see, the only implementation detail in these scenarios is the existence of the database. As written, our scenarios are somewhat generic; we could tighten them up to match our API contract and include aspects like expected response codes. Doing this would enforce and document our API contract more thoroughly. However, please don’t take this too far and enforce responses down to the minute detail; doing so would make our tests brittle and more troublesome to maintain. How you strike this balance between scenario strictness and brittleness will vary from project to project and with personal preference. Try to find the minimum you can get away with, and then become more strict as bugs or deficiencies are discovered or as the risk and cost of mistakes increases.&lt;/p&gt;

&lt;p&gt;When considering the coverage for UAT tests, it is essential to note that we are not looking from a lines-of-code perspective but rather from a scenario or use-case perspective.&lt;/p&gt;

&lt;p&gt;Moving on to the strengths and weaknesses of UATs: their main strength is that they confirm the system does what the user expects. &lt;/p&gt;

&lt;p&gt;We have constructed our UATs to be independent of external systems, and this is both a strength and a weakness. It is a strength because our tests are completely independent of external resources and, therefore, are under our control and completely reliable. It is a weakness because we are testing against mocks rather than the actual external dependencies, so there is a risk that our mocks and the external dependency behave differently. We can address this weakness with End-to-End tests, as we will see in a moment.&lt;/p&gt;

&lt;p&gt;However, the main weakness of UATs is the scope of the tests. Because the scope is broad, it can be time-consuming to locate the underlying cause when there are problems.&lt;/p&gt;

&lt;h3&gt;End-to-End Tests&lt;/h3&gt;

&lt;p&gt;End-to-End (E2E) tests are essentially UATs performed with all external dependencies. These tests aim to build on the behavior confirmed by the UATs and verify that the system has the desired behaviors when involving all of the external dependencies.&lt;/p&gt;

&lt;p&gt;When constructing our E2E scenarios, we only need to look as far as our UAT scenarios. If we are time-constrained, we can reduce the scenarios to only those that involve these external dependencies.&lt;/p&gt;

&lt;p&gt;The strength of E2E tests is also their weakness: they involve external dependencies. They will confirm our system performs as expected in a production-like environment, but because they rely on these external resources, they will only be as reliable as those resources and the test environment. Therefore, we may see test failures that are not caused by our code.&lt;/p&gt;

&lt;h3&gt;The Test Pyramid&lt;/h3&gt;

&lt;p&gt;Now that we have defined the different types of tests and examined their strengths and weaknesses, we can return to our original question: How much should we test? As an industry, we are always time-constrained, and as such, we need to spend our time as efficiently as possible.&lt;/p&gt;

&lt;p&gt;The Test Pyramid is a handy visual mnemonic first introduced by Mike Cohn and often discussed by Martin Fowler and others, whose goal is to remind us where and how we should spend our testing effort. This is my version of the Test Pyramid:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q2s-T2-_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o7ck8xczlimloj7onoau.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q2s-T2-_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o7ck8xczlimloj7onoau.png" alt="Test Pyramid with 70% unit tests, 28% UAT, and 2% E2E tests" width="340" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From this diagram, you can see that I recommend spending most (70%) of our testing effort on unit testing. This is because adding behaviors to our units of code is where we spend most of our programming effort. Unit tests, therefore, support our primary task. The fact that these tests are also the easiest to understand and the cheapest to write and run are pleasant additional benefits.&lt;/p&gt;

&lt;p&gt;I recommend spending most of the remaining effort (28%) verifying that the system has the behaviors our users expect by adding UATs. At the end of the day, adding user-focused behavior is what we are being paid for. Compared to unit tests, UATs tend to have a higher cost of construction, debugging, and maintenance. This is why, despite being aligned with user value, they should receive less effort than unit tests.&lt;/p&gt;

&lt;p&gt;Allocating only the remaining 2% of our effort to E2E tests may be shocking, but these tests have very high construction and maintenance costs. Additionally, they have a poor signal-to-noise ratio, given that they will often fail for reasons unrelated to our code. In my experience, when a system has sufficient unit and UAT coverage, the majority of issues not caught by these tests are configuration or intra-team issues, which are not addressed well by E2E tests. Testing how our system responds to missing or inappropriate configuration should be done at the unit or UAT level; I prefer to test configuration handling with unit tests and then have the rest of the codebase assume that the config is sane. This results in less code and a faster system overall. As for intra-team issues, I am afraid I don’t have an automated test for that.&lt;/p&gt;

&lt;p&gt;When issues are caused by external systems not performing as expected, this is not a reason to add more E2E tests. Instead, we should add these failures as mocked responses in our UAT tests. This way, we ensure that our system can account for this unexpected behavior and respond predictably.&lt;/p&gt;

&lt;p&gt;The primary thing to remember about the Test Pyramid is that it is a mnemonic. While I have given you concrete numbers of 70, 28, and 2, the actual percentage of effort depends on your application, your team, and the deployment environment. It is possible to have a successful application without any E2E tests; I have seen many such applications. If you currently have no tests and want to start, start with unit tests. Once you are comfortable with testing, adding some UATs will be significantly easier and will further improve your confidence in your application.&lt;/p&gt;




&lt;p&gt;If you like this content and would like to be notified of new posts, or would like to be kept informed regarding the upcoming book launch, please join my &lt;a href="https://groups.google.com/g/coreys-writing-early-access-team"&gt;Google Group&lt;/a&gt; (very low traffic and no spam).&lt;/p&gt;

</description>
      <category>go</category>
      <category>testing</category>
    </item>
    <item>
      <title>The What, When, Why, and How of Testing</title>
      <dc:creator>Corey Scott</dc:creator>
      <pubDate>Sun, 11 Dec 2022 23:00:36 +0000</pubDate>
      <link>https://dev.to/corsc/the-what-when-why-and-how-of-testing-2cl7</link>
      <guid>https://dev.to/corsc/the-what-when-why-and-how-of-testing-2cl7</guid>
      <description>&lt;p&gt;(Authors Note: The following is an extract from the Advanced Unit Testing Techniques chapter of my upcoming book: &lt;a href="https://github.com/corsc/Beyond-Effective-Go#section-2-striving-for-high-quality-code" rel="noopener noreferrer"&gt;Beyond Effective Go - Part 2 - Striving for High-Quality Code&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;When it comes to testing, the most common misunderstanding concerns the motivation for testing itself. Some folks see testing as a burden imposed from on high. Others see testing, or more specifically test coverage, as a metric that determines how well they did their job. Sorry, but neither of these is true.&lt;/p&gt;

&lt;p&gt;This series of posts will address these fallacies and give you a different perspective on testing.&lt;/p&gt;

&lt;h2&gt;Why do we test?&lt;/h2&gt;

&lt;p&gt;We write tests to enable us to work faster and more effectively. It seems backward to claim that writing more code can make us go faster, but bear with me.&lt;/p&gt;

&lt;p&gt;We write code to add behavior to a system; to prove that behavior has been added, we have to test. We could run a quick manual test; this wouldn't cost us much, and doing so would be immediately valuable. However, this value is short-lived. Once we make additional changes to the code, we can only be 100% sure that our desired behavior exists with more manual testing. &lt;/p&gt;

&lt;p&gt;Contrast this with automated tests. They may initially cost a little more to write than a manual test, but they continue to provide value for as long as they exist. We can and should run these tests constantly, as they provide a little value every time we do.&lt;/p&gt;

&lt;p&gt;Because these tests continuously ensure that the system has the desired behavior, we don't need to waste time going back and manually re-testing, nor do we run the risk of a regression going unnoticed. &lt;/p&gt;

&lt;p&gt;Automated tests become a safety net that reduces the risk of any additions or refactoring that we may make. Consequently, they also reduce any fear we might have relating to unknown or complex code.&lt;/p&gt;

&lt;p&gt;With sufficient tests in place, our confidence in the code increases to a point where we can make more frequent, faster, and even more adventurous changes, given that the tests will catch any mistakes.&lt;/p&gt;

&lt;p&gt;And finally, automated tests document the author's intent for the code and are an efficient way for newcomers to learn why a piece of code exists and how it behaves. This, in turn, reduces the time it takes for newcomers to onboard to the project and the cost to existing developers of explaining the code to them.&lt;/p&gt;

&lt;h2&gt;When do we test?&lt;/h2&gt;

&lt;p&gt;Our industry has spent a lot of energy debating this point, and we have not found the correct answer. As long as you are testing, it doesn't matter if you write the tests before the code or after. &lt;br&gt;
That said, I recommend grabbing a copy of &lt;em&gt;Test Driven Development: By Example&lt;/em&gt; by Kent Beck. The book's ideas are relevant to all forms of testing, and mastering them will make your tests more efficient and effective. While I do not practice Test-Driven Development (TDD) most of the time, you will find many of the intentions, approaches, and tricks I present compatible with Kent's ideas.&lt;/p&gt;

&lt;p&gt;Stay tuned for the upcoming parts of this topic, where we will explore &lt;em&gt;How much should we test? What should we be testing? And What should we not be testing?&lt;/em&gt;&lt;/p&gt;

</description>
      <category>discuss</category>
    </item>
    <item>
      <title>New Book for Experienced Gophers</title>
      <dc:creator>Corey Scott</dc:creator>
      <pubDate>Tue, 18 Oct 2022 22:56:26 +0000</pubDate>
      <link>https://dev.to/corsc/new-book-for-experienced-gophers-12g4</link>
      <guid>https://dev.to/corsc/new-book-for-experienced-gophers-12g4</guid>
      <description>&lt;p&gt;Dear DEV community, I am excited to announce the general release of my second Go book, &lt;a href="https://bit.ly/3EDgitA"&gt;Beyond Effective Go - Achieving High-Performance Code&lt;/a&gt;. This book is aimed at experienced Go developers who want to be more productive and write cleaner, faster, and easier-to-maintain code. &lt;/p&gt;

&lt;p&gt;As part of the release, I am looking for some folks willing to read the book and post an honest review on Amazon. Reviewers will be given a free PDF or ePub copy of the book, my gratitude, and (if interested) access to my private Slack group for direct one-on-one communication.&lt;/p&gt;

&lt;p&gt;If this interests you, please reply to this post with your preferred contact method or reach out via &lt;a href="https://twitter.com/CoreySScott"&gt;Twitter&lt;/a&gt; or &lt;a href="mailto:books@coreyscott.dev"&gt;email&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As you may have also noticed, this book is Part 1; Part 2 will be &lt;a href="https://github.com/corsc/Beyond-Effective-Go"&gt;Striving for High-Quality Code&lt;/a&gt; and primarily focuses on quality and productivity. While it is still in progress, I am also looking for folks interested in reading and commenting on the drafts.&lt;/p&gt;

</description>
      <category>go</category>
      <category>books</category>
    </item>
  </channel>
</rss>
