<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dennis Martinez</title>
    <description>The latest articles on DEV Community by Dennis Martinez (@dennmart).</description>
    <link>https://dev.to/dennmart</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F307154%2F60972e12-2dc6-4e7e-870c-ce6f4d687400.jpg</url>
      <title>DEV Community: Dennis Martinez</title>
      <link>https://dev.to/dennmart</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dennmart"/>
    <language>en</language>
    <item>
      <title>Dead-Simple API Tests With SuperTest, Mocha, and Chai</title>
      <dc:creator>Dennis Martinez</dc:creator>
      <pubDate>Tue, 22 Sep 2020 11:00:00 +0000</pubDate>
      <link>https://dev.to/dennmart/dead-simple-api-tests-with-supertest-mocha-and-chai-4n5d</link>
      <guid>https://dev.to/dennmart/dead-simple-api-tests-with-supertest-mocha-and-chai-4n5d</guid>
      <description>&lt;p&gt;If you have to create automated tests for an API, you will most likely use or explore using &lt;a href="https://www.postman.com/"&gt;Postman&lt;/a&gt;. Postman is possibly the most well-known API development and testing tool out there and with good reason. It's an excellent tool for both developers and testers to create documentation and demonstrate how your application APIs should work.&lt;/p&gt;

&lt;p&gt;Using Postman gives you an excellent starting point for building a test suite to check your API works as expected. However, depending on your test cases and API endpoints, you'll likely run into limitations with Postman:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Because Postman does so much, its interface can feel confusing and somewhat bloated if all you want to do is API testing.&lt;/li&gt;
&lt;li&gt;Once you start testing more than a handful of API endpoints, it can feel a bit messy to organize your different scenarios for each one.&lt;/li&gt;
&lt;li&gt;If you want to use Postman in a continuous integration environment, you'll have to use &lt;a href="https://github.com/postmanlabs/newman"&gt;Newman&lt;/a&gt;, the command-line companion to Postman. While both tools should technically work the same, they're still separate tools, and you might run into cases where your test results differ between them.&lt;/li&gt;
&lt;li&gt;If you have multiple team members collaborating on API testing and documentation, Postman's pricing can get a bit steep for small organizations, since it's a monthly fee per user.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I recently worked on a project that's using Postman for API documentation and testing. The team began experiencing most of these pain points directly, so we set out to look for a better solution to get the team building automated tests as they continued building the API. The team had no dedicated testers, so the development team was responsible for test automation.&lt;/p&gt;

&lt;p&gt;Since the current developers are comfortable with JavaScript, we began looking for JavaScript tools to help with these efforts. After some experimenting, we landed on a lovely combination of tools that made our API tests effortless to build and easy to maintain. After adopting these tools, our automation coverage skyrocketed.&lt;/p&gt;

&lt;h2&gt;The JavaScript tools to run your API tests&lt;/h2&gt;

&lt;p&gt;The application under test was a Node.js application, so we wanted to find testing tools that worked well in that environment. Thankfully, the Node.js ecosystem has no shortage of excellent tools for all your testing needs. You'll find a library or framework to run everything from basic unit tests to end-to-end tests and everything in between.&lt;/p&gt;

&lt;p&gt;With so many choices at our disposal, our focus was to find simple-to-use, battle-tested libraries that have been around for some time. One of the team's desires was to find stable tools that any JavaScript developer could easily pick up. After tinkering around with a few well-known libraries, we found some great libraries that fit the bill.&lt;/p&gt;

&lt;h3&gt;SuperTest&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/visionmedia/supertest"&gt;SuperTest&lt;/a&gt; provides a high-level abstraction for testing HTTP requests - perfect for APIs. If you have a Node.js application that runs an HTTP server (like an Express application), you can make requests using SuperTest directly without needing a running server. One of the nice things about SuperTest is that while it can run tests without any additional tools, it can integrate nicely with other testing frameworks, as you'll see next.&lt;/p&gt;

&lt;h3&gt;Mocha&lt;/h3&gt;

&lt;p&gt;One of the better-known JavaScript testing frameworks, &lt;a href="https://mochajs.org/"&gt;Mocha&lt;/a&gt; runs on both Node.js and the browser, making it useful for testing asynchronous functionality. One of the cool things about Mocha is it allows you to write your tests in different styles like BDD (&lt;code&gt;it&lt;/code&gt;, &lt;code&gt;describe&lt;/code&gt;, etc.) and TDD (&lt;code&gt;suite&lt;/code&gt;, &lt;code&gt;test&lt;/code&gt;, etc.). Mocha fits in nicely with SuperTest, helping you organize your tests in your team's preferred way.&lt;/p&gt;

&lt;h3&gt;Chai&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.chaijs.com/"&gt;Chai&lt;/a&gt; is an assertion library that you can pair with other testing frameworks like Mocha. While not strictly necessary for writing a test suite, it provides a more expressive and readable style for your tests. Like Mocha, Chai allows you to choose BDD-style (&lt;code&gt;expect&lt;/code&gt;) or TDD-style (&lt;code&gt;assert&lt;/code&gt;) assertions so that you can combine the library with most frameworks without any clashes.&lt;/p&gt;

&lt;p&gt;Using these three tools, you can create a fast, stable, and maintainable automated test suite for your APIs with little effort.&lt;/p&gt;

&lt;h2&gt;Putting these tools into play&lt;/h2&gt;

&lt;p&gt;To demonstrate how SuperTest, Mocha, and Chai work together, we'll use these tools to automate a few tests for an application called Airport Gap. Airport Gap provides a RESTful API built to help testers practice and improve their API automation skills.&lt;/p&gt;

&lt;p&gt;Keep in mind that the Airport Gap application is not a Node.js application, so this article won't show how you can use these testing tools to integrate directly with Node.js. However, you can still use them to build tests for any accessible API. This article will create the tests in a separate code repository, but if you have a Node.js application, these tools will work best with your test code alongside the app.&lt;/p&gt;

&lt;p&gt;First, create a new project inside an empty directory and initialize it by running &lt;code&gt;npm init -y&lt;/code&gt; to create a default &lt;code&gt;package.json&lt;/code&gt; file. For now, you don't have to edit this file. With the project initialized, you can set up the latest versions of SuperTest, Mocha, and Chai libraries with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install --save supertest mocha chai
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That's all you need to get started with creating automated tests for your API. Let's start by creating your first API test for the Airport Gap application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://airportgap.dev-tester.com/docs"&gt;The Airport Gap documentation&lt;/a&gt; shows all available endpoints you can use for your tests. Let's start with the &lt;a href="https://airportgap.dev-tester.com/docs#api_ref_get_airports"&gt;endpoint&lt;/a&gt; that returns all available airports, &lt;code&gt;GET /airports&lt;/code&gt;. This endpoint returns a paginated list of 30 airports at a time, so a quick way to verify that this works is to create a test that calls the endpoint and returns a list of 30 results.&lt;/p&gt;

&lt;p&gt;Create a new file inside the project directory called &lt;code&gt;airports.test.js&lt;/code&gt;, which you'll use to write your test code. You can name this test file anything you prefer, but including &lt;code&gt;.test.js&lt;/code&gt; as part of the filename makes it easier to execute the tests as the test suite expands. In the new file, let's write our first API test. Here's the code, and we'll explain what's going on after:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;supertest&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://airportgap.dev-tester.com/api&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;expect&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;chai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GET /airports&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;returns all airports, limited to 30 per page&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/airports&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you're familiar with JavaScript, this test will be readable even if you've never used any of the libraries in the project.&lt;/p&gt;

&lt;p&gt;First, the test loads the SuperTest library and assigns it to the variable &lt;code&gt;request&lt;/code&gt;. Requiring SuperTest gives you a function that you'd typically call with a Node.js HTTP server or Express application. You can also pass a string with the URL of the host you want to use if you're not working directly with a Node.js application, which is what we're doing in this article.&lt;/p&gt;

&lt;p&gt;Notice that the specified host is the API's base URL, including the &lt;code&gt;/api&lt;/code&gt; subdirectory. Using the base URL allows you to make requests to your API endpoints without needing to write the entire URL every time, as you'll see later when we use SuperTest inside our test scenario.&lt;/p&gt;

&lt;p&gt;The next library loaded comes from Chai. Since Chai allows you to use &lt;a href="https://www.chaijs.com/guide/styles/"&gt;both TDD and BDD assertion styles&lt;/a&gt;, you need to specify which one you want to use. For these examples, we're going with the BDD style, using the &lt;code&gt;expect&lt;/code&gt; interface. If you prefer the &lt;code&gt;should&lt;/code&gt; BDD interface or &lt;code&gt;assert&lt;/code&gt; with the TDD style, you can easily switch. That flexibility is one reason we chose the library: it accommodates any team's tastes.&lt;/p&gt;

&lt;p&gt;After loading the required libraries, you get to the heart of your test scenarios. Following the BDD style, the test uses Mocha's &lt;code&gt;describe&lt;/code&gt; interface to group your test scenarios. The &lt;code&gt;describe&lt;/code&gt; function accepts a string as a description of the tests and a function to define your test cases. Like Chai, you can use the &lt;a href="https://mochajs.org/#interfaces"&gt;TDD interface&lt;/a&gt; instead if that's your preference. You don't have to load any Mocha libraries, since we'll use Mocha's runner to execute the tests.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;it&lt;/code&gt; function from Mocha is the place to define a single test scenario. Like the &lt;code&gt;describe&lt;/code&gt; function, the first argument is a string to describe the test case, and the second argument is a function to write the code for your test steps. Notice that we're using an &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function"&gt;asynchronous function&lt;/a&gt; as the second argument for &lt;code&gt;it&lt;/code&gt;. Inside the test, you'll make requests using SuperTest, which returns a promise. Using &lt;code&gt;async&lt;/code&gt; and &lt;code&gt;await&lt;/code&gt; allows you to resolve the promise to get the API response more cleanly instead of resolving the promise through chaining.&lt;/p&gt;

&lt;p&gt;The test scenario has two steps. First, you use SuperTest's &lt;code&gt;request&lt;/code&gt; function to call the API using the &lt;code&gt;get&lt;/code&gt; function. This function requires at least one parameter - the URL for your request. Since we initialized the &lt;code&gt;request&lt;/code&gt; function with our base URL for the Airport Gap API, it's unnecessary to write the entire URL when making requests. All you need is the endpoint, and SuperTest automatically appends it to your base URL.&lt;/p&gt;

&lt;p&gt;As mentioned, the &lt;code&gt;get&lt;/code&gt; function returns a promise, so to resolve it cleanly, you can use the &lt;code&gt;await&lt;/code&gt; keyword. SuperTest makes the request to your host and endpoint and saves the response in the &lt;code&gt;response&lt;/code&gt; variable, which you'll use to run the test's assertions. SuperTest captures lots of information from the API response, like the body, headers, status code, and &lt;a href="https://visionmedia.github.io/superagent/#response-properties"&gt;much more&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With the API response in hand, you can finally make your assertions to ensure that the API works as expected. The test uses Chai with the &lt;code&gt;expect&lt;/code&gt; function and the &lt;code&gt;to&lt;/code&gt; and &lt;code&gt;eql&lt;/code&gt; chained methods to construct your assertion. &lt;a href="https://www.chaijs.com/api/bdd/"&gt;Chai has tons of methods for building assertions&lt;/a&gt;, and it's worthwhile to read which ones are available to help you create your tests as needed.&lt;/p&gt;

&lt;p&gt;This test contains two assertions. First, the test verifies that the API response's status code is 200 - meaning the request was successful - using &lt;code&gt;response.status&lt;/code&gt;. The next assertion looks at the response body (&lt;code&gt;response.body&lt;/code&gt;) and checks that the &lt;code&gt;data&lt;/code&gt; key contains 30 items. SuperTest is smart enough to check the response's content type and parse the information into a JavaScript object accordingly. That makes verifying JSON APIs much easier since you don't have to worry about parsing the response yourself.&lt;/p&gt;

&lt;p&gt;The test is all set up and ready to execute. To run your tests using Mocha, you can use the &lt;code&gt;mocha&lt;/code&gt; executable included when installing the package. The easiest way to use it is with the &lt;a href="https://nodejs.dev/learn/the-npx-nodejs-package-runner"&gt;&lt;code&gt;npx&lt;/code&gt; command&lt;/a&gt;, which will find the executable inside your project. Open your terminal and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx mocha airports.test.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If everything works as expected, Mocha will execute your tests and show your test results. The default reporter shows the description of your tests, grouped by the &lt;code&gt;describe&lt;/code&gt; method, and displays the results and the execution time for each test.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--37wgu8hd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/09/api_testing_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--37wgu8hd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/09/api_testing_1.png" alt="Dead-Simple API Tests With SuperTest, Mocha, and Chai"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You've written and executed your first API test using SuperTest, Mocha, and Chai! In less than ten lines of code (not counting blank lines), you already have an automated test to verify an API request that you can re-run at any time. It can't get any simpler than that.&lt;/p&gt;

&lt;h2&gt;Running POST request tests&lt;/h2&gt;

&lt;p&gt;Let's write another test, this time checking how a &lt;code&gt;POST&lt;/code&gt; request to the API works. The &lt;code&gt;POST /airports/distance&lt;/code&gt; &lt;a href="https://airportgap.dev-tester.com/docs#api_ref_post_airports_distance"&gt;endpoint&lt;/a&gt; allows you to send two airport codes, and it returns the distance between them in different units of length. Let's see how SuperTest handles this request. Under the existing test in &lt;code&gt;airports.test.js&lt;/code&gt;, create a new test case:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST /airports/distance&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;calculates the distance between two airports&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/airports/distance&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;send&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;KIX&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;SFO&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;attributes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;include&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;keys&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;kilometers&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;miles&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;nautical_miles&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kilometers&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;8692.066508240026&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;miles&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;5397.239853492001&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nautical_miles&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;4690.070954910584&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This test is a bit longer than the first example, but we use the SuperTest, Mocha, and Chai libraries in much the same way, so we won't go over every line again. Let's focus on the parts that did change.&lt;/p&gt;

&lt;p&gt;The first difference is the way you need to make the request to the API. Since this endpoint is a &lt;code&gt;POST&lt;/code&gt; request, you'll use the &lt;code&gt;post&lt;/code&gt; function. The function works the same as &lt;code&gt;get&lt;/code&gt;, and you only need to specify the endpoint for the API. However, you can chain the &lt;code&gt;send&lt;/code&gt; function to your request to submit any required parameters. Since we're testing a JSON API, you can use a regular JavaScript object with your parameters, and SuperTest sends the correct request body.&lt;/p&gt;

&lt;p&gt;Another difference is an assertion that verifies the API response contains specific keys. Here, we're using Chai's &lt;code&gt;include&lt;/code&gt; and &lt;code&gt;keys&lt;/code&gt; methods to confirm that the response includes the keys with the calculated distances. You could check the entire API response body, but we'll just do some spot checks for the purposes of this article. We also assert against the exact distance values in this test, again purely for demonstration; you might not want to run these kinds of assertions if your API data can easily change.&lt;/p&gt;

&lt;p&gt;Now that you've seen the changes in these tests, it's time to execute them to make sure everything's working as expected. You can run the tests the same way as before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx mocha airports.test.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now you have two test scenarios, and if everything is correct, you'll have two successful test results for different API requests and endpoints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2qVSiD0_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/09/api_testing_2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2qVSiD0_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/09/api_testing_2.png" alt="Dead-Simple API Tests With SuperTest, Mocha, and Chai"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Testing authenticated endpoints&lt;/h2&gt;

&lt;p&gt;The examples shown so far run tests against public API endpoints. What if you have an API that requires authentication? The Airport Gap API has some endpoints that require authentication. These protected endpoints require an API token passed as a header. For instance, one endpoint that requires authentication is the &lt;code&gt;POST /favorites&lt;/code&gt; &lt;a href="https://airportgap.dev-tester.com/docs#api_ref_post_favorites"&gt;API endpoint&lt;/a&gt;. This endpoint allows an Airport Gap user to save their favorite airports to their account to look up later.&lt;/p&gt;

&lt;p&gt;Let's begin creating a few tests to validate this behavior. First, we'll cover the test case to verify that the &lt;code&gt;POST /favorites&lt;/code&gt; endpoint doesn't allow access without a token. After verifying that the Airport Gap API won't allow access, we'll write a test that accesses the same endpoint, this time with an authentication token.&lt;/p&gt;

&lt;p&gt;To keep the test suite organized, create a new file in the project directory called &lt;code&gt;favorites.test.js&lt;/code&gt;. Inside this new file, let's first write the test scenario to ensure that an unauthenticated user can't access this endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;supertest&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://airportgap.dev-tester.com/api&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;expect&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;chai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST /favorites&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;requires authentication&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/favorites&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;send&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;airport_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;JFK&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;note&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;My usual layover when visiting family&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;401&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;By now, the test structure should be familiar to you. We load the SuperTest and Chai libraries, create a new test group, and set up a test scenario to validate that the endpoint requires authentication. The &lt;code&gt;POST /favorites&lt;/code&gt; endpoint requires the &lt;code&gt;airport_id&lt;/code&gt; parameter and also accepts an optional &lt;code&gt;note&lt;/code&gt; parameter, both of which we'll use in our request. When making a request to a protected endpoint in the Airport Gap API without a valid token, the API returns a &lt;code&gt;401&lt;/code&gt; response, which is what we're checking here.&lt;/p&gt;

&lt;p&gt;Run this new test scenario to make sure it's working as expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx mocha favorites.test.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You'll see the now-familiar results for this test case:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZcnjzNVc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/09/api_testing_3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZcnjzNVc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/09/api_testing_3.png" alt="Dead-Simple API Tests With SuperTest, Mocha, and Chai"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that you've verified how the API works without authentication, let's go through a longer flow to make similar requests with an authentication token. If you have an Airport Gap account, you can find your API token on your account page and use it directly in your tests, either by setting it in the code or through an environment variable. We'll use &lt;a href="https://www.twilio.com/blog/2017/01/how-to-set-environment-variables.html"&gt;an environment variable&lt;/a&gt; to keep sensitive keys out of the codebase.&lt;/p&gt;
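If you go the environment variable route, export the token in your shell before running the tests so the test code can read it through `process.env.AIRPORT_GAP_TOKEN` (the value below is a placeholder for illustration, not a real token):

```shell
# Placeholder value -- substitute the API token from your Airport Gap account page.
export AIRPORT_GAP_TOKEN="your-token-here"

# The test files read this value through process.env.AIRPORT_GAP_TOKEN.
echo "$AIRPORT_GAP_TOKEN"
```

Exporting the variable in the same shell session is enough, since `npx mocha` inherits the environment of the shell that launches it.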

&lt;p&gt;The next example follows an end-to-end flow that uses multiple authenticated API endpoints. The test starts by creating a new favorite airport in the user's account. Then, it updates the newly-created record through an API request and validates the data returned. Finally, the test will delete the record, and we'll validate that it's not found anymore.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;favorites.test.js&lt;/code&gt; file, add your new test case under the existing scenario:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;allows an user to save and delete their favorite airports&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Check that a user can create a favorite.&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;postResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/favorites&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Authorization&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`Bearer token=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AIRPORT_GAP_TOKEN&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;send&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;airport_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;JFK&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;note&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;My usual layover when visiting family&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;postResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;201&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;postResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;airport&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;John F Kennedy International Airport&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;postResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;note&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;My usual layover when visiting family&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;favoriteId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;postResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;// Check that a user can update the note of the created favorite.&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;putResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`/favorites/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;favoriteId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Authorization&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`Bearer token=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AIRPORT_GAP_TOKEN&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;send&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;note&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;My usual layover when visiting family and friends&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;putResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;putResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;note&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;My usual layover when visiting family and friends&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Check that a user can delete the created favorite.&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;deleteResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`/favorites/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;favoriteId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Authorization&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`Bearer token=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AIRPORT_GAP_TOKEN&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;deleteResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;204&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Verify that the record was deleted.&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`/favorites/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;favoriteId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Authorization&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`Bearer token=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AIRPORT_GAP_TOKEN&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;getResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The API requests made using the &lt;code&gt;request&lt;/code&gt; function all look the same, except for a new portion we haven't used previously. To send the authentication token as a request header, you can chain the &lt;code&gt;set&lt;/code&gt; function to your request. This function takes two parameters: the first is the name of the request header, and the second is the value you want to send to the server for that header. The Airport Gap API expects to find the &lt;code&gt;Authorization&lt;/code&gt; header with the value &lt;code&gt;Bearer token=&amp;lt;token&amp;gt;&lt;/code&gt;.&lt;/p&gt;
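As a minimal sketch of what that chained call sends, the header value is just a template string built from the environment variable (the `abc123` fallback below is a stand-in for illustration only, so the sketch runs on its own):

```javascript
// Read the token from the environment, falling back to a stand-in value
// purely so this snippet is self-contained.
const token = process.env.AIRPORT_GAP_TOKEN || "abc123";

// This is the value the tests pass to .set("Authorization", ...),
// producing a header like: Authorization: Bearer token=abc123
const authHeader = `Bearer token=${token}`;
console.log(authHeader);
```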

&lt;p&gt;After setting up this end-to-end test, let's execute it and see how it goes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iZjapZIq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/09/api_testing_4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iZjapZIq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/09/api_testing_4.png" alt="Dead-Simple API Tests With SuperTest, Mocha, and Chai"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This test gives you a clearer idea of how powerful SuperTest is for making HTTP requests of all kinds. Here, you see the different HTTP methods you can use, and how chaining methods like &lt;code&gt;send&lt;/code&gt; and &lt;code&gt;set&lt;/code&gt; allows you to pass along all the data your API requests need. This test could be improved in a few ways, like cleaning up the account's favorites if there's an error in the middle of the execution, but we'll leave that as an exercise for the reader.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cleaning things up
&lt;/h2&gt;

&lt;p&gt;Although the current tests are relatively short and straightforward, you can begin taking steps to organize and clean up the test suite now. It's good practice to get some organization going in your codebase before it spirals out of control.&lt;/p&gt;

&lt;p&gt;The first thing you might have noticed is that we have some duplication creeping in: two separate files with the same setup to load the libraries. For these basic examples, it's not a big deal. But imagine you continue expanding this test suite and have a few more files. If you have to change the setup, like using a different base URL for the API, you'll have to go into each file and adjust it manually. It would be nice to have it in one place.&lt;/p&gt;

&lt;p&gt;You can begin organizing your test setup with a configuration file that you can place in the root of your project directory. The configuration file can export some of the common functionality used throughout your test suite, which you can include where needed. That way, you can keep some of your setup and configuration in a single place.&lt;/p&gt;

&lt;p&gt;To do this, start by creating a new file called &lt;code&gt;config.js&lt;/code&gt; inside of your project directory. Inside this file, you can move the common setup used in each test suite and export these functions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;supertest&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://airportgap.dev-tester.com/api&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;expect&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;chai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now you can replace the setup at the beginning of both test files with this configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;expect&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./config&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Your tests should work the same with this change, and your test suite becomes more maintainable by having the basic setup consolidated in a single place. If you need to set up additional libraries or configure the existing functions differently, you only need to do them once in the configuration file.&lt;/p&gt;
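As one example of the kind of change this consolidation enables, you could make the base URL configurable from a single place. Note that the `API_URL` environment variable below is a hypothetical name chosen for illustration, not part of the article's code:

```javascript
// Hypothetical sketch: pick the API base URL from an environment variable,
// falling back to the article's default. Only config.js would need this change.
const baseUrl = process.env.API_URL || "https://airportgap.dev-tester.com/api";
console.log(baseUrl);
```

In `config.js`, this value would then be passed to `require("supertest")(baseUrl)`, and every test file would pick it up automatically.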

&lt;p&gt;One last thing to do is make it easier to execute your tests from the command line. Typing &lt;code&gt;npx mocha *.test.js&lt;/code&gt; is simple enough, but we can make it easier by adding a quick command to execute your tests. Open the &lt;code&gt;package.json&lt;/code&gt; file and find the &lt;code&gt;scripts&lt;/code&gt; key. By default, it includes a &lt;code&gt;test&lt;/code&gt; command that doesn't do anything. Replace the value of the &lt;code&gt;test&lt;/code&gt; key with your Mocha command (the &lt;code&gt;npx&lt;/code&gt; prefix is no longer necessary):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"test"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mocha *.test.js"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;The&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;rest&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;your&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;configuration&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;remains&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;same.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With this change, all you need to do to execute your tests is run the &lt;code&gt;npm test&lt;/code&gt; command. While this change doesn't save a ton of time now, it helps in other ways. Most JavaScript projects use the &lt;code&gt;npm test&lt;/code&gt; command as a standard way to execute tests regardless of the testing tools used, so anyone joining your team can get up and running quickly. Another benefit is that it keeps your test command the same if you have to include additional command-line flags in the future.&lt;/p&gt;
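For instance, if you later needed Mocha's `--timeout` flag (chosen here purely as an example), only the script in `package.json` would change, and everyone would still run `npm test`:

```json
{
  "scripts": {
    "test": "mocha --timeout 10000 *.test.js"
  }
}
```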

&lt;p&gt;If you want to check out the source code for the project shown in this article, it's available on GitHub: &lt;a href="https://github.com/dennmart/dead_simple_api_testing"&gt;https://github.com/dennmart/dead_simple_api_testing&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;With lots of web applications relying on APIs to work, you need to make sure these systems are always working as expected. A fast and stable automated test suite will ensure that the APIs that power the essential parts of your business continue to work as they grow and expand.&lt;/p&gt;

&lt;p&gt;If your team uses JavaScript - which is likely since it's one of the most-used programming languages nowadays - you won't have to step away from your existing toolset and environment to create your tests. You can find plenty of testing frameworks and libraries to build your test automation for your APIs.&lt;/p&gt;

&lt;p&gt;In this article, you saw how the combination of three tools lets you quickly build a robust automated test suite for APIs. SuperTest enables you to make any HTTP request with ease. The Mocha testing framework organizes and runs your tests in the way your team prefers, whether it's TDD or BDD style. Chai's assertions fit in nicely with Mocha to validate your API responses. Together, the three create a maintainable and speedy test suite.&lt;/p&gt;

&lt;p&gt;These aren't the only tools you can use, though. As mentioned in this article, you have plenty of options to choose from if you want to build your test automation around JavaScript. If you don't like Mocha, you have similar frameworks like &lt;a href="https://jestjs.io/"&gt;Jest&lt;/a&gt; or &lt;a href="https://jasmine.github.io/"&gt;Jasmine&lt;/a&gt;. If Chai isn't your cup of tea (pun intended), other assertion libraries like &lt;a href="http://shouldjs.github.io/"&gt;should.js&lt;/a&gt; or &lt;a href="https://unexpected.js.org/"&gt;unexpected&lt;/a&gt; work equally well.&lt;/p&gt;

&lt;p&gt;API testing doesn't have to be complicated. After all, the only thing APIs do is receive a request and send back a response. With a few tools in place, you can create a simple yet powerful test suite to make sure your APIs are as reliable as possible to keep your applications running smoothly.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;How do you and your team handle API testing? What issues or pain points have you run into? Let me know by leaving your comments below!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>testing</category>
      <category>tutorial</category>
      <category>api</category>
    </item>
    <item>
      <title>How Can You Tell If Your Automated Tests Are Any Good?</title>
      <dc:creator>Dennis Martinez</dc:creator>
      <pubDate>Tue, 15 Sep 2020 11:00:00 +0000</pubDate>
      <link>https://dev.to/dennmart/how-can-you-tell-if-your-automated-tests-are-any-good-4k6l</link>
      <guid>https://dev.to/dennmart/how-can-you-tell-if-your-automated-tests-are-any-good-4k6l</guid>
      <description>&lt;p&gt;While browsing the &lt;a href="https://club.ministryoftesting.com/"&gt;Ministry of Testing forums&lt;/a&gt; the other day, I stumbled upon a thread that caught my attention. The thread's title was, &lt;em&gt;"How do you tell how good an Automation implementation is?"&lt;/em&gt; As someone interested in checking out how others handle their test automation, I was interested in seeing what others had to say about this topic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://club.ministryoftesting.com/t/how-do-you-tell-how-good-an-automation-implementation-is/38146"&gt;The thread&lt;/a&gt; did not disappoint in terms of the quantity and quality of replies. Lots of experienced testers from all over chimed in and gave their thoughts about this question. Many talked about their personal experiences with other organizations when working on automation. Some offered hands-on practical advice, while others provided a more theoretical point of view. The thread contained a nice mix of useful feedback, and we can learn a few guiding principles for our implementations.&lt;/p&gt;

&lt;p&gt;Most of the responses were great, and I encourage everyone to read through the thread. It'll likely get you thinking. As I read through every answer, I began noticing that many of the replies shared some common themes between the different testers who took the time to share their thoughts. It feels like there's a shared agreement in the test automation world on what makes an automation implementation effective.&lt;/p&gt;

&lt;p&gt;The forum thread reminded me a lot about my personal experiences, and I saw a lot of my own thoughts scattered throughout the words written by other testers. Here are some of the main takeaways I got out of the entire discussion, and where many ideas and feelings overlapped.&lt;/p&gt;

&lt;h2&gt;
  
  
  It's hard to tell what a good implementation is, but it's easy to spot a bad one
&lt;/h2&gt;

&lt;p&gt;As evidenced by the question from the forum thread's original poster, it's tough to know when your automated test implementation is any good. You might have some thoughts about what makes any testing "good", but it's not something you can quantify with hard evidence.&lt;/p&gt;

&lt;p&gt;Although it's difficult to recognize a good test implementation, it's super-easy to spot a bad one. You can see it coming from a mile away. We all know the signs of a bad setup. It might be that the tests are so flaky you never know when they'll pass or fail, or the tests leave considerable gaps in coverage. Slow tests that take forever to execute are also a sign.&lt;/p&gt;

&lt;p&gt;You can come up with plenty of reasons why a test suite isn't all that great. However, many people use these reasons to gauge their implementation incorrectly. You might think that if your automation efforts don't have any "bad" signs, then it must be good. &lt;em&gt;Not so fast.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Even so-called "good" signs of test automation are unreliable indicators for the health of your implementation. For instance, if you have a stable test suite that doesn't fail, it doesn't mean you're testing the right things. You might have tests that don't fail, but they're not catching regressions either. Another example is when you have tons of coverage for the application under test. Lots of coverage might mean that the team cared more about metrics instead of writing an efficient and effective test suite.&lt;/p&gt;

&lt;p&gt;It's simple to fix the issues that slow down your progress; it's also simple to get misled by what looks good on the surface. Handle the apparent signs of a bad automation implementation, but don't take the seemingly good signs at face value either.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code matters just as much as what you're testing
&lt;/h2&gt;

&lt;p&gt;The vast majority of testing teams often care only about the end result of their work. Typically, the attention lands on whether the tests pass and whether they help the rest of the team catch or prevent issues that break the application.&lt;/p&gt;

&lt;p&gt;Of course, we need to show that our efforts pay off for everyone involved in the product. If your tests don't help with the product's quality, they're practically useless. Unfortunately, paying attention only to what's visible can lead you to neglect what's driving these results - your actual test code.&lt;/p&gt;

&lt;p&gt;No matter how diligent you are in caring for your codebase, every team eventually reaches a point where things don't perform as well as they used to. Everyone has to tend to their past work, refactoring or even deleting code that has outlived its usefulness.&lt;/p&gt;

&lt;p&gt;Having a fast and stable test suite is excellent, but you also need to ensure that you can keep those tests running in optimal condition for the long haul. The time you spend maintaining tests is an essential factor for a solid test automation implementation. If you have to spend hours or days wrestling with your codebase to modify anything, your implementation will stagnate and eventually stop being useful.&lt;/p&gt;

&lt;p&gt;Every organization and team has different time and budget constraints for what they can do with their testing efforts. However, making sure your codebase allows the team to build and grow the test suite rapidly and with few issues will pay off tenfold in increased quality.&lt;/p&gt;

&lt;h2&gt;
  
  
  "Good" is a team effort
&lt;/h2&gt;

&lt;p&gt;The thread on the Ministry of Testing forum has plenty of excellent comments and suggestions about distinguishing a good test automation implementation from a bad one. It has lots of different strategies and points of view, which are great to learn from and use for your work.&lt;/p&gt;

&lt;p&gt;After reading through the entire thread, I noticed a common theme uniting every response. Although most of the answers offered something specific from the person responding to the thread, my main takeaway from the discussion is that &lt;em&gt;everyone has their particular version of what a good test automation implementation is&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Every person and every team will have their own definition of what's considered good and what isn't, and it varies greatly depending on who you ask. Ask this question of ten different testers, and you'll get at least eight different responses. It wouldn't surprise me if you got ten entirely different answers.&lt;/p&gt;

&lt;p&gt;Everyone's circumstance is unique, so it's not uncommon to have different priorities based on the information we have in our hands at any given time. What I consider a good implementation of a test suite (like clean and maintainable code) might register as a low-priority item for you and your team. It might not even be on the "good list" for my own team or a particular project we're working on.&lt;/p&gt;

&lt;p&gt;If you're spending too much time pondering whether your test automation is any good, you shouldn't make this decision on your own. Bring the question to the rest of your team and see what the discussion brings to the table. One of the posters in the forum thread put it best:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The only real indication that I have is if the team is satisfied with the automation, then it's probably good. It's at least good enough."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;If you work on implementing test automation for your company, chances are you're wondering if what you're doing is right. As shown by the question posed in the Ministry of Testing forum, you're not alone. It's not always a negative thing, either - it's great to think of ways to improve your work.&lt;/p&gt;

&lt;p&gt;Everyone has their thoughts and opinions about this question, but you can pull out a few themes from the responses to help guide your decisions.&lt;/p&gt;

&lt;p&gt;One of the first things to realize is that it's almost impossible to know what a good test automation implementation looks like. It's easy to spot a lousy test suite: slow tests that often fail, tests that don't cover anything useful, etc. But don't let that fool you - what looks good on the surface might mask underlying issues that don't serve the team.&lt;/p&gt;

&lt;p&gt;Something you can check to determine the quality of your implementation is whether you built the underlying codebase for the long haul. Code maintainability and simplicity go a long way in a good test suite. It can be the difference between a long-lasting stable test suite and one that disappears because no one wants to touch the code.&lt;/p&gt;

&lt;p&gt;Finally, remember that figuring out this process isn't an individual exercise. Everyone has their definition of "good", and it can differ by team, person, or project. It's best to take the opinions of those around you and your current circumstances and mix them into your definition of "good" for where you are at any given time.&lt;/p&gt;

&lt;p&gt;It's okay to check what other testers are doing, and use their experiences to mold yours. But in the end, it all lies with you and your team. If it's good enough for you, that's all that matters.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What ways do you and your team use to determine if your test automation is good? Let me know in the comments section below!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>qa</category>
      <category>qualityassurance</category>
      <category>softwaretesting</category>
    </item>
    <item>
      <title>Avoid These 3 Mistakes With Your End-To-End Tests</title>
      <dc:creator>Dennis Martinez</dc:creator>
      <pubDate>Tue, 08 Sep 2020 11:00:00 +0000</pubDate>
      <link>https://dev.to/dennmart/avoid-these-3-mistakes-with-your-end-to-end-tests-62p</link>
      <guid>https://dev.to/dennmart/avoid-these-3-mistakes-with-your-end-to-end-tests-62p</guid>
      <description>&lt;p&gt;I've recently been thinking a lot about all of the projects I've worked on throughout my career as a developer focused on testing. I've been working on web applications for over 15 years, and while technology has moved forward since the days when I began, a lot of the underlying architecture for most of the work I've done has been the same.&lt;/p&gt;

&lt;p&gt;When you work on the same tech stack in different projects, you'll tend to find similar libraries and frameworks in use. For example, you'll often find RSpec in use for Ruby on Rails applications, or Jest for JavaScript applications. Some teams follow standard practices with these tools, and anyone familiar with them can jump in quickly.&lt;/p&gt;

&lt;p&gt;This behavior also extends to teams that are either new to test automation or don't dedicate too many resources to maintaining their existing test suite. These teams often make mistakes with their automated testing that show up again and again throughout different organizations.&lt;/p&gt;

&lt;p&gt;This article goes through three of the most common mistakes I've seen across multiple teams when building and maintaining an end-to-end test suite for their application.&lt;/p&gt;

&lt;h2&gt;1) Testers write too many tests&lt;/h2&gt;

&lt;p&gt;I don't know about you, but the first time I discovered automating end-to-end tests, I felt like I unlocked a magical skill that would save me an incredible amount of time throughout the workweek. &lt;em&gt;"You mean I can write some code and it'll do my testing work for me? Sign me up!"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Whenever someone learns the wonders of test automation, their first inclination often is to test absolutely everything that they can. It's a noble goal, but it's also one that leads to wasted time and tons of frustration down the road. If you add long-running end-to-end tests to the mix, that problem can quickly spiral out of control before you know it.&lt;/p&gt;

&lt;p&gt;It's great to write end-to-end tests to verify that a particular workflow behaves as expected from start to finish. The problem happens when you want to verify &lt;em&gt;all&lt;/em&gt; of your workflows. These tests are often slow, and if you write too many of them, it'll slow the entire team down to a crawl. Eventually, you'll end up with a test suite that no one wants to run because it takes forever to complete.&lt;/p&gt;

&lt;p&gt;Reliability and maintainability are also essential factors to a healthy codebase. The more end-to-end tests you have, the higher the risk of your test code becoming brittle and unmaintainable. Since these tests cover lots of ground, it's not unusual for them to break due to any minor change somewhere in the system. That means you'll have to update your tests more often. In the end, you'll have to spend more time making sure your test suite is working and easy to update.&lt;/p&gt;

&lt;p&gt;The solution to minimizing these risks is eliminating tests that provide little benefit while keeping your test suite lean. Most teams want to automate as much as possible, but end-to-end testing isn't meant for complete coverage of your application. Focus on writing test cases for what matters the most. Using end-to-end tests to automate your app's most critical sections allows you to reap the most benefit out of your time by preventing slow, unstable, and messy test suites.&lt;/p&gt;

&lt;h2&gt;2) Testers won't ask other team members for help&lt;/h2&gt;

&lt;p&gt;A failure to communicate between teams is one of the leading causes of projects stopping dead in their tracks. You can't expect a team to put in quality work when no one talks to each other. Unfortunately, it seems to happen quite frequently with testers. This problem isn't one-sided, though. Either the testing team doesn't talk with others outside of the group, or testers don't receive any information from the rest of the organization.&lt;/p&gt;

&lt;p&gt;Any good testing team knows that quality is a whole team effort. While a QA team can have the sole responsibility of writing automated end-to-end tests, it doesn't mean they can or should do everything on their own. Testers can't live in a silo. They need outside assistance to perform at their best. Testers have plenty of opportunities to bring in other team members to help with quality across the organization.&lt;/p&gt;

&lt;p&gt;The first group you can call in for support are the developers on your team. They'll have all the technical knowledge behind what you're testing and can help make your life easier by making the app more testable. For instance, they can help improve how you can identify page elements on the website for your tests or do reviews to improve your code.&lt;/p&gt;

&lt;p&gt;Another group you can tap for help are the folks responsible for DevOps. With end-to-end tests, you'll most likely need separate environments and services for testing purposes. These folks can help set up these systems to make it easier for you to execute your tests without disrupting the work of others. DevOps can also improve the efficiency of your tests by observing how the system under test performs during your test runs and improving any bottlenecks along the way.&lt;/p&gt;

&lt;p&gt;It doesn't stop with technical support, either. Non-technical members of the team have invaluable information that can guide your testing efforts in different ways. Product managers have better insight into the application's current usage, which can help you shape your test plan. Customer support can shed light on areas where real-world users often stumble upon bugs. These are just a few examples you can find throughout any company.&lt;/p&gt;

&lt;p&gt;You can't do everything by yourself, and you shouldn't be expected to, either. Don't isolate yourself or be shy, and ask around for help. It'll ensure your end-to-end tests are both useful and exactly what you need to increase the quality of everyone's work.&lt;/p&gt;

&lt;h2&gt;3) Testers don't fully understand how the application under test works&lt;/h2&gt;

&lt;p&gt;As testers, it's easy to place our focus on testing alone. After all, it's the work that you're responsible for completing. No one expects you to start doing software development or answer customer support requests along with your work. However, having a one-track mind when it comes to quality often leads to subpar results because you're missing the details that help build a stable test foundation.&lt;/p&gt;

&lt;p&gt;This issue often manifests itself in the choice of tools you and your team make at the beginning of your work on the test suite. Without having a firm grasp on how the application under test does its magic, you're at risk of choosing the wrong tools for the job. You can probably get your tests working with an inadequate tool, but chances are it will hurt the test suite's stability and long-term maintainability.&lt;/p&gt;

&lt;p&gt;For instance, using a heavy, full-featured end-to-end testing framework might be excessive for an app with few interconnecting pieces, leaving you with a slow and bloated test suite. Likewise, using a simple integration testing tool for an app with a complex backend architecture won't provide the coverage you need, leaving significant gaps that an end-to-end framework could handle with ease.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev-tester.com/dont-start-your-automation-strategy-with-tools/"&gt;I recently covered this topic in more detail&lt;/a&gt;, which will help you avoid the pitfalls of choosing the wrong tools when starting with test automation.&lt;/p&gt;

&lt;p&gt;Not understanding how the application under test works also ties into the previous point about talking with other team members. You can waste tons of time trying to automate some tests because you're unaware of different services or potential technical limitations that can hinder your testing efforts.&lt;/p&gt;

&lt;p&gt;For example, an application can have an asynchronous job processing system running in the background to process certain information. If you don't talk with the development team, you might be unaware of how the system processes that data and spend countless hours wondering why your automated tests for that section don't work properly.&lt;/p&gt;

&lt;p&gt;You don't need to be an expert on the complete technology stack that runs the applications you're testing. But having some knowledge of how the system works will improve your testing efforts. You'll know which tools are more appropriate, and you'll learn how to tackle different areas according to how the system works.&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;When you work on different end-to-end test automation projects, you might spot a couple of mistakes that happen more than once. These issues most often pop up with teams that are just starting with test automation or treat it as an afterthought.&lt;/p&gt;

&lt;p&gt;The first issue is when testers want to automate all the things and write too many tests. Knowing the power of automation can become addictive. It's easy to go overboard with this power and try to cover every nook and cranny in your application. While it often comes from good intentions, it leads to buggy and sluggish test suites.&lt;/p&gt;

&lt;p&gt;Another issue that pops up is testers isolating themselves from the rest of the team. They don't take time to get insight from others, missing out on beneficial details that can shape their test plans. Everyone working on the same project, from developers to system administrators to project managers, has insight that you won't get in your day-to-day assignments.&lt;/p&gt;

&lt;p&gt;One of the most prevalent problems happens when testers focus too much on automation and too little on the system that they're testing. They know how the application works on a functional level, but they're unaware of what lies beneath the surface. Because of this lack of knowledge, testers waste time using inefficient tools and running into roadblocks that others could have cleared.&lt;/p&gt;

&lt;p&gt;Thankfully, solving these issues doesn't take a lot of effort if you catch them early. Keep your end-to-end tests lean and focused on handling what's essential and leave the rest to other forms of testing. Ask for help from the rest of the team and use their feedback to improve what you test and how you're testing. Finally, take the time to understand the applications you're working with - a little time spent up-front will save you lots of time down the road.&lt;/p&gt;

&lt;p&gt;Even when using the same libraries and frameworks, every team works in their own way. You'll encounter different problems from time to time. As long as you become aware of the obstacles ahead and can work towards resolving the issues, you'll continue boosting your skills and continue on your path of being the best tester you can be.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Have you encountered one or more of these common mistakes in your testing efforts? Are there other mistakes you've seen more than once in your testing work? Leave a comment below and share your experiences!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>qa</category>
      <category>qualityassurance</category>
      <category>softwaretesting</category>
    </item>
    <item>
      <title>Continuous Testing with Travis CI and LambdaTest</title>
      <dc:creator>Dennis Martinez</dc:creator>
      <pubDate>Tue, 01 Sep 2020 09:10:00 +0000</pubDate>
      <link>https://dev.to/dennmart/continuous-testing-with-travis-ci-and-lambdatest-1hkj</link>
      <guid>https://dev.to/dennmart/continuous-testing-with-travis-ci-and-lambdatest-1hkj</guid>
      <description>&lt;p&gt;When you build any software program, one of your goals is to have as many people as possible using it without any issues. If your app has too many problems that impede its vital functionality, no one's going to stick around. That's why testing and test automation are essential parts of a product's lifecycle.&lt;/p&gt;

&lt;p&gt;With web applications, testing can get complicated quickly. Often, it starts with a few manual tests in a specific environment, which gives your team a head start when you decide to move to automated tests. Once you have some automation going, your testing efforts expand to cover the most-used platforms. Test automation allows you to validate your functionality across different environments with ease.&lt;/p&gt;

&lt;p&gt;However, even if you cover the main browsers and devices in use these days, this amount of testing isn't enough. Once your app is out there in the real world, you'll find a seemingly infinite number of combinations of platforms. You not only have to manage different versions of the web browsers you test your app against, but you'll also have to deal with them in varying combinations of devices and operating systems.&lt;/p&gt;

&lt;p&gt;Most organizations use virtualization to spin up the environments that they need for testing purposes. These days it's dead-simple to run virtualized systems. However, you'll need to maintain these systems, and it's often a pain to manage when you scale up. Also, virtualization is not a substitute for real hardware. You might run into scenarios where you'll need the real thing to replicate a problem and fix it.&lt;/p&gt;

&lt;p&gt;Instead of wasting time and money handling different environments to test your web application, you can use &lt;a href="https://www.lambdatest.com/"&gt;&lt;strong&gt;LambdaTest&lt;/strong&gt;&lt;/a&gt; to address your cross-browser test automation needs.&lt;/p&gt;

&lt;h2&gt;LambdaTest - The most powerful cross-browser testing tool online&lt;/h2&gt;

&lt;p&gt;LambdaTest is a cloud-based testing platform that allows teams to test websites or mobile applications in different environments, using both desktop browsers and browsers on mobile devices. With just a few clicks and some minor configuration, you can spin up any operating system and browser combination to have a usable system to run your tests.&lt;/p&gt;

&lt;p&gt;You can access any operating system from Windows XP to Windows 10 and all Mac OS X editions since 2011. You can also use any version of the most-used web browsers from the past 15 years. Whether you need to ensure that your application runs in older environments or on new systems, LambdaTest can handle it for you.&lt;/p&gt;

&lt;p&gt;Here are a few of the things you can do with LambdaTest to boost your test automation efforts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perform manual and automated tests on your web application, using both virtualized environments and real hardware devices.&lt;/li&gt;
&lt;li&gt;Automatically generate screenshots of your application across multiple systems to perform visual regression testing.&lt;/li&gt;
&lt;li&gt;Validate the responsiveness of your web app across multiple iOS and Android devices.&lt;/li&gt;
&lt;li&gt;Run your tests on a scalable &lt;a href="https://www.lambdatest.com/selenium-automation"&gt;Selenium Grid&lt;/a&gt; system with access to over 2000 desktop and mobile browsers and operating systems.&lt;/li&gt;
&lt;li&gt;Test from more than 27 countries to ensure everyone across the globe can reach your application.&lt;/li&gt;
&lt;li&gt;Integrate with many third-party applications like Jira, TestRail, and Slack to keep your testing efforts in sync across your organization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LambdaTest provides lots of other excellent tools to help organize your testing efforts. The &lt;a href="https://www.lambdatest.com/lt-browser"&gt;LT Browser&lt;/a&gt; aids with responsive testing during development. There's an issue tracker to keep track of what needs fixing inside the service. If you use Google Chrome, a &lt;a href="https://www.lambdatest.com/chrome-extension"&gt;Chrome extension&lt;/a&gt; quickly sends screenshots of your app during manual testing.&lt;/p&gt;

&lt;p&gt;I've used similar cloud-testing services in the past, but most of them only offer a fraction of the tools that LambdaTest provides. It's a complete package that covers almost all of your testing needs.&lt;/p&gt;

&lt;h2&gt;LambdaTest in action&lt;/h2&gt;

&lt;p&gt;In this article, I want to demonstrate how LambdaTest works by running a continuous testing workflow for an existing web application. For this demonstration, I'll use an automated end-to-end test suite built with the &lt;a href="https://devexpress.github.io/testcafe/"&gt;TestCafe&lt;/a&gt; testing framework. Although LambdaTest's infrastructure is built on Selenium Grid, we can also use other testing frameworks on the platform for test automation, even if they don't use Selenium under the hood.&lt;/p&gt;

&lt;p&gt;The rest of the article assumes you have an active LambdaTest account set up. If you don't have an account and want to follow along, you can &lt;a href="https://accounts.lambdatest.com/register"&gt;register for free&lt;/a&gt;. LambdaTest has a &lt;a href="https://www.lambdatest.com/pricing"&gt;free tier&lt;/a&gt; with limited access that allows you to use all of their services. It's perfect for helping you get started.&lt;/p&gt;

&lt;h3&gt;Setting up TestCafe and LambdaTest&lt;/h3&gt;

&lt;p&gt;The test suite we'll use in this article comes from my book, &lt;a href="https://testingwithtestcafe.com/"&gt;End-to-End Testing with TestCafe&lt;/a&gt;. It contains 11 end-to-end tests covering different functionality for the book's companion web application, &lt;a href="https://teamyap.app/"&gt;TeamYap&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Setting up an existing TestCafe test suite to use LambdaTest's automation services is straightforward. The LambdaTest team maintains a &lt;a href="https://www.npmjs.com/package/testcafe-browser-provider-lambdatest"&gt;TestCafe plugin&lt;/a&gt; that allows you to run your TestCafe tests on the LambdaTest platform. To install the plugin, run the following command inside your test project directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;testcafe-browser-provider-lambdatest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;All the configuration for the plugin is handled by setting up environment variables. Using environment variables gives you the flexibility to set up your automated test runs differently depending on the environment, without needing to modify any files. Most of LambdaTest's settings have sensible defaults, and you probably won't need to change them. You do need to set up two environment variables that are required for the plugin to work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;LT_USERNAME&lt;/code&gt; - Your LambdaTest username, automatically generated for you after signing up.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;LT_ACCESS_KEY&lt;/code&gt; - Your LambdaTest account's secret access key.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your LambdaTest username and access key are available in your &lt;a href="https://accounts.lambdatest.com/detail/profile"&gt;account profile settings&lt;/a&gt;. Once you have this information, you can set the required environment variables with the following commands in your preferred terminal on Mac OS X and Linux:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;LT_USERNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;LambdaTest Username&amp;gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;LT_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;LambdaTest Password&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For Windows systems, use the following commands through the Command Prompt or PowerShell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;set &lt;/span&gt;&lt;span class="nv"&gt;LT_USERNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;LambdaTest Username&amp;gt;
&lt;span class="nb"&gt;set &lt;/span&gt;&lt;span class="nv"&gt;LT_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;LambdaTest Password&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To verify that your LambdaTest credentials are set up correctly, you can check the list of available browsers and operating systems provided by the LambdaTest service through TestCafe using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;testcafe &lt;span class="nt"&gt;--list-browsers&lt;/span&gt; lambdatest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If your credentials are correct, this command prints a long list of browser aliases that you can use to run your TestCafe tests on LambdaTest. The browser aliases are namespaced with &lt;code&gt;lambdatest&lt;/code&gt;, and contain the combination of the browser and operating system for both desktop and mobile devices. For instance, the browser alias &lt;code&gt;"lambdatest:Firefox@78.0:Windows 10"&lt;/code&gt; is for a Windows 10 system using Firefox 78, and &lt;code&gt;"lambdatest:Safari@13.0:MacOS Catalina"&lt;/code&gt; is for a Mac OS X Catalina (10.15) system using Safari 13. If you don't see a list of browser aliases, double-check your credentials and set them appropriately.&lt;/p&gt;

&lt;h3&gt;Running tests on LambdaTest&lt;/h3&gt;

&lt;p&gt;Now that you have LambdaTest and TestCafe set up, all that's left to do is execute your tests on the service. Thanks to TestCafe's plugin system, the command to perform this action is identical to running your tests locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;testcafe &amp;lt;LambdaTest browser &lt;span class="nb"&gt;alias&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &amp;lt;&lt;span class="nb"&gt;test &lt;/span&gt;files&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For example, if you want to run all of the tests on the project on Mac OS X Catalina using an instance of Microsoft Edge version 84, you can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;testcafe &lt;span class="s2"&gt;"lambdatest:MicrosoftEdge@84.0:MacOS Catalina"&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;_test.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This command automatically sets up a tunnel between your system and LambdaTest and triggers your tests in the specified environment in the cloud. TestCafe receives the results for each test as they complete. The reporter plugin also shows a link to the LambdaTest session so that you can keep track of the test run on the service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VJ1JLMx---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/lambdatest_testcafe_run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VJ1JLMx---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/lambdatest_testcafe_run.png" alt="Continuous Testing with Travis CI and LambdaTest"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While the tests run, you can check the LambdaTest dashboard to see your test runs and current sessions' stats.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VDKp2JGd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/lambdatest_dashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VDKp2JGd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/lambdatest_dashboard.png" alt="Continuous Testing with Travis CI and LambdaTest"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the test run finishes, LambdaTest closes the session and keeps a record of your test run. In the Automation section of your LambdaTest account, you'll see a timeline of your builds and automated test runs. Here, you can access all of the test run information, including a video recording of the test run on LambdaTest's system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RvQeGALz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/lambdatest_automation_logs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RvQeGALz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/lambdatest_automation_logs.png" alt="Continuous Testing with Travis CI and LambdaTest"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you use the TestCafe plugin to run your tests without modifying any additional settings, it uses the following defaults:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The build name used is "Untitled".&lt;/li&gt;
&lt;li&gt;The test run name used is automatically generated by TestCafe.&lt;/li&gt;
&lt;li&gt;The screen resolution is 1024x768 pixels.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As mentioned earlier, you can modify LambdaTest's default settings through environment variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;LT_BUILD&lt;/code&gt; - The name of the build to use for this automated test run.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;LT_TEST_NAME&lt;/code&gt; - The name you want to use to identify your test run.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;LT_RESOLUTION&lt;/code&gt; - The screen resolution for the LambdaTest environment, defined by width and height in pixels (like "1600x1200", for instance).&lt;/li&gt;
&lt;/ul&gt;
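As an illustration, overriding these defaults before a run might look like the following sketch. The build name, test name, and resolution values are placeholders for illustration, not values from the book's project:

```shell
# Optional LambdaTest settings; the values below are placeholders.
export LT_BUILD="TeamYap nightly build"
export LT_TEST_NAME="TeamYap end-to-end tests"
export LT_RESOLUTION="1600x1200"

# Then run the suite as usual, for example:
#   testcafe "lambdatest:Firefox@78.0:Windows 10" *_test.js
```

Because the plugin reads these variables at startup, the same test files run unchanged under any combination of overrides.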

&lt;p&gt;You can also modify other settings, including logging JavaScript console messages, recording network packets, and disabling video recording, through other environment variables. See &lt;a href="https://www.lambdatest.com/support/docs/npm-plugin-for-testcafe-integration-with-lambdatest/"&gt;LambdaTest's TestCafe plugin documentation&lt;/a&gt; for more details, and &lt;a href="https://www.lambdatest.com/support/docs/selenium-automation-capabilities/"&gt;LambdaTest's Selenium documentation&lt;/a&gt; for information on the platform's capabilities.&lt;/p&gt;

&lt;h3&gt;Continuous testing with LambdaTest and Travis CI&lt;/h3&gt;

&lt;p&gt;It's pretty nice having your TestCafe tests running in the cloud with LambdaTest. The LambdaTest plugin makes the entire process effortless. However, we can take it a step further by setting up continuous integration to run your tests automatically. If your project receives an update, your CI service can run your tests and alert you immediately if the changes cause any regressions.&lt;/p&gt;

&lt;p&gt;To show how continuous testing works, we'll use one of the most popular continuous integration services: &lt;a href="https://travis-ci.org/"&gt;&lt;strong&gt;Travis CI&lt;/strong&gt;&lt;/a&gt;. Travis CI is a hosted continuous integration service that syncs with your code repository to get your test automation up and running in no time. This article will show you how to connect your TestCafe code repository to Travis CI and set it up to run your tests on LambdaTest automatically.&lt;/p&gt;

&lt;p&gt;Travis CI is &lt;a href="https://travis-ci.com/plans"&gt;free for open-source projects&lt;/a&gt;, and &lt;a href="https://travis-ci.com/signup"&gt;signing up&lt;/a&gt; is as simple as logging in with your preferred source code management provider. Travis CI supports &lt;a href="https://github.com/"&gt;GitHub&lt;/a&gt;, &lt;a href="https://about.gitlab.com/"&gt;GitLab&lt;/a&gt;, &lt;a href="https://bitbucket.org/"&gt;Bitbucket&lt;/a&gt;, and &lt;a href="https://www.assembla.com/home"&gt;Assembla&lt;/a&gt;, so if your project's code is on one of these platforms, you can connect it to Travis CI automatically.&lt;/p&gt;

&lt;p&gt;After signing up with your source code management provider, Travis CI syncs with your account to fetch your available code repositories. Visit your account settings to search for the repo you want to connect to Travis CI and click the toggle switch next to the repo name. Turning the toggle switch on sets up your code repo with a webhook that notifies Travis CI of any changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GJJZuS16--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/travis_ci_repo_setup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GJJZuS16--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/travis_ci_repo_setup.png" alt="Continuous Testing with Travis CI and LambdaTest"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before you can use TravisCI to build your project, you need a bit of configuration. First, set up the environment variables that LambdaTest needs, as discussed earlier in this article. TravisCI makes it easy to set up environment variables securely.&lt;/p&gt;

&lt;p&gt;After turning on the project inside TravisCI, click on the "Settings" button next to the toggle switch. Among the project settings, you can define the environment variables your project needs to run the build properly. Here, you can set up the LambdaTest environment variables. The TestCafe test suite also uses a few environment variables for the account credentials throughout the test scenarios, so we need to set them here as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FZZoCLjz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/travis_ci_environment_variables.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FZZoCLjz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/travis_ci_environment_variables.png" alt="Continuous Testing with Travis CI and LambdaTest"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final step in the process is to configure your project to tell TravisCI what it needs to do for every build. You manage this configuration through a build configuration file called &lt;code&gt;.travis.yml&lt;/code&gt;, placed inside your code repository. In this file, you need to specify at least the language support for your project. Depending on the language, TravisCI will run a few commands by default, but you have control over the entire process if needed.&lt;/p&gt;

&lt;p&gt;For our project, TestCafe uses Node.js under the hood, so we can set our build configuration to use the Node.js platform when running builds on TravisCI. It's also good practice to specify at least one Node.js version to ensure that your tests run on a known working version. Inside the TestCafe test suite project, create the &lt;code&gt;.travis.yml&lt;/code&gt; file in the root of the project with this configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;language&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node_js&lt;/span&gt;
&lt;span class="na"&gt;node_js&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="m"&gt;12&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;By defining &lt;code&gt;node_js&lt;/code&gt; as the language for this project, TravisCI automatically runs the following commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;npm install&lt;/code&gt; to install all of the project's dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;npm test&lt;/code&gt; to run your tests automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The project already has a valid &lt;code&gt;package.json&lt;/code&gt; file containing the dependencies needed to run the TestCafe test suite on LambdaTest. The file also has the &lt;code&gt;test&lt;/code&gt; script with the same command used earlier, so no further configuration is necessary with TravisCI. Here's the completed &lt;code&gt;package.json&lt;/code&gt; file for reference:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"end_to_end_testing_with_testcafe"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Demo repo showing how to run TestCafe tests using LambdaTest and TravisCI"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"main"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"login_test.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"test"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"testcafe 'lambdatest:MicrosoftEdge@84.0:MacOS Catalina' *_test.js"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"repository"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"git"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"git+https://github.com/dennmart/lambdatest_travisci_testcafe.git"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"author"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Dennis Martinez"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"license"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MIT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"bugs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/dennmart/lambdatest_travisci_testcafe/issues"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"homepage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/dennmart/lambdatest_travisci_testcafe#readme"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"dependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"testcafe"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^1.8.7"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"testcafe-browser-provider-lambdatest"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^2.0.2"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With the build configuration file in the project and TravisCI connected to the code repository, your tests will now run after every repo change. To test this out, push the &lt;code&gt;.travis.yml&lt;/code&gt; file to the remote repository. If everything is set up correctly, TravisCI will automatically trigger a build after a couple of seconds.&lt;/p&gt;

&lt;p&gt;The build starts by setting up the environment variables, installing all project dependencies, and running the test script. With the environment variables in place, your tests will run as they did before. The LambdaTest plugin creates a tunnel to its service and runs the tests in the specified operating system and browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qUH1jGc6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/lambdatest_travis_ci_results_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qUH1jGc6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/lambdatest_travis_ci_results_1.png" alt="Continuous Testing with Travis CI and LambdaTest"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cmwsrPF---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/lambdatest_travis_ci_results_2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cmwsrPF---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/lambdatest_travis_ci_results_2.png" alt="Continuous Testing with Travis CI and LambdaTest"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With some minor setup, you can leverage the power of LambdaTest to execute your TestCafe tests in any environment, using TravisCI to run them for you automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Testing your web application on a combination of operating systems, web browsers, and devices can be a hassle. It's costly and time-consuming to maintain different environments to ensure that your app works. With LambdaTest, it doesn't have to be a pain.&lt;/p&gt;

&lt;p&gt;LambdaTest provides an easy yet powerful way to test any web application in a variety of setups. It offers all the tools to ensure your app works as intended, from spinning up different operating systems and browsers on demand to performing other testing duties like responsive and visual testing.&lt;/p&gt;

&lt;p&gt;This article shows how simple it is to set up a continuous testing workflow with just about any modern environment, thanks to LambdaTest and TravisCI. The examples shown here are for running a TestCafe suite, and LambdaTest provides a dead-simple way to run your tests on the cloud. If you don't use TestCafe, LambdaTest also supports lots of other &lt;a href="https://www.lambdatest.com/support/docs/supported-languages-and-frameworks/"&gt;languages and frameworks&lt;/a&gt;, so chances are you'll have support for your existing test suite.&lt;/p&gt;

&lt;p&gt;These days, you have plenty of tools at your disposal to make your test automation journey easier. LambdaTest is one of the most robust and powerful services available to help you tackle any potential issues before your users encounter them. It'll improve the quality of your application with little extra effort.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Have you used LambdaTest with your projects or in your organization? Share your experiences in the comments below!&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post is sponsored by&lt;/em&gt; &lt;a href="https://www.lambdatest.com/"&gt;&lt;strong&gt;&lt;em&gt;LambdaTest&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;&lt;em&gt;, a cloud-based cross-browser testing tool to perform manual or automated testing on over 2000 browsers online. All reviews and opinions expressed in this article are my own and based on my personal experience with the service.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>qa</category>
      <category>devops</category>
      <category>softwaretesting</category>
    </item>
    <item>
      <title>Don't Start Your Automation Strategy With Tools</title>
      <dc:creator>Dennis Martinez</dc:creator>
      <pubDate>Tue, 25 Aug 2020 11:00:00 +0000</pubDate>
      <link>https://dev.to/dennmart/don-t-start-your-automation-strategy-with-tools-59p2</link>
      <guid>https://dev.to/dennmart/don-t-start-your-automation-strategy-with-tools-59p2</guid>
      <description>&lt;p&gt;&lt;em&gt;"I started learning about Selenium but don't know what I'm doing."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Someone recommended I use TestCafe, but it's not easy to use for testing my API."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"My team jumped into using Cypress for our UI tests, but we're a bit stuck after realizing our app needs testing some functionality on multiple browser tabs."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Every week, as I browse different websites and forums around the Internet, I see automation testers making comments like these. Often, these comments come from people who are new to test automation. Most of these discussions center on one thing and one thing only: &lt;em&gt;test automation tools&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Whenever somebody drops a post in a forum asking what test automation tool they should use for their work, tons of opinions start flooding in. If you've ever searched for "best automation tools" in any search engine, you'll find a seemingly infinite number of lists and articles containing hundreds of options at your disposal.&lt;/p&gt;

&lt;p&gt;It's overwhelming and confusing for someone new to automation to wade through so many comments and opinions while trying to make sense of how to proceed on their journey. Most of you have gone through this experience when starting out. I know I did. I still do whenever I have to expand my skills to automate testing on different platforms.&lt;/p&gt;

&lt;p&gt;Most of us tend to reach for tools first when doing something new. It happens not just in test automation, but in almost every facet of our lives. When someone wants to create a podcast, they often research microphones before recording something with their laptop's built-in mic or even their smartphone. People wishing to learn to speak a new language buy tons of books and other materials before saying their first word. And automation testers usually ask which programming language or software to use before figuring out what they need to test.&lt;/p&gt;

&lt;p&gt;Of course, you need tools to accomplish whatever you set out to do. You can't record a podcast without a mic, say something in a different language without reading or hearing it somewhere, or write an automated test without knowing how to code or use a codeless automation tool. However, starting your test automation efforts with tooling is a big mistake I've seen repeatedly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why jumping in with tools is a huge mistake
&lt;/h2&gt;

&lt;p&gt;There's nothing wrong with sampling and experimenting with different test automation tools for your goals. As mentioned, you'll eventually need to use something to get your automation strategy moving forward. The main problem lies with how quickly some people jump into using any tool for their work.&lt;/p&gt;

&lt;p&gt;A typical mistake I've seen newer automation testers make is committing to a test automation tool too quickly. The tool works for a few of their basic scenarios, and they instantly go all in for the long haul. By itself, this isn't a problem if the tool covers all of the defined test cases. But usually, they'll realize the tool they chose won't work for everything they need.&lt;/p&gt;

&lt;p&gt;When this happens, the tester does one of two things: they try to force the tool to fit their workflow to avoid sunk costs, or they backtrack and find a different tool. If they stick with their chosen tool, it'll lead to a brittle test suite that's challenging to maintain. Otherwise, they'll have to go back to the drawing board with something different. In either scenario, they'll waste lots of time and effort.&lt;/p&gt;

&lt;p&gt;The key here is to find the right tool for the job with your automation strategy before committing. The best way to do that isn't to focus on tools - &lt;strong&gt;it's learning about the application under test first, before anything else.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn what you're testing before reaching for tools
&lt;/h2&gt;

&lt;p&gt;When you get distracted by the potential of shiny new test automation tools, you'll often lose sight of what you should spend more time thinking about - making sure you know how the application you're testing works. You'd be surprised at how much you can discover by paying attention to the details that make your app unique.&lt;/p&gt;

&lt;p&gt;To go even further, you should also spend time learning a bit about the underlying tech running the application and how everything works. You don't have to do a deep dive and become an expert on every detail, but knowing what makes the application run will help with your testing efforts. For example, what kind of database does the app use? Does it connect to external services? Does it perform asynchronous tasks in the background? Knowing these details can help shape a better test plan that can consider specific scenarios based on what makes the app work as it does.&lt;/p&gt;

&lt;p&gt;As you understand more about how the application works, you'll begin to think about the higher risk areas that can affect different parts of the app. What areas are most fragile and require more robust test coverage? For instance, if you discover that the app relies on third-party APIs to work correctly, you can make sure to include enough test coverage around this functionality.&lt;/p&gt;

&lt;p&gt;You don't need to handle all of this on your own. The rest of your team is a treasure trove of information to help with your automation efforts. Developers can tell you which areas are most likely to break with new changes or new features that will modify existing functionality. Product managers can provide metrics that indicate which parts of the app most of your customers use daily. Having additional details from other departments can help you prepare better tests and avoid testing areas that don't matter in the bigger picture.&lt;/p&gt;

&lt;h2&gt;
  
  
  The more you know, the less time you'll waste
&lt;/h2&gt;

&lt;p&gt;With some up-front work, you'll have a clear idea about how the app works both on the surface and internally, and you'll have "inside information" from people with in-depth knowledge of the app's current and future state. With all this information in hand, you can begin introducing test automation and tooling into the mix. By this point, you'll know which tools are the right ones for the job, and it'll save you and your team lots of time and effort with your automation work.&lt;/p&gt;

&lt;p&gt;For example, imagine you're testing a web application with a sophisticated user interface or many moving pieces under the hood. You can choose a testing framework like TestCafe or Cypress and plan to write longer end-to-end tests to validate what you need. In another example, maybe you have an app with a simple UI, but the underlying APIs need more attention. An end-to-end testing framework may be overkill. Instead, you can use tools more suitable for API testing like Postman or SoapUI.&lt;/p&gt;

&lt;p&gt;If you had immediately jumped into test automation without knowing how your application works, you would have likely chosen the wrong tools for the task at hand. Focusing on API testing when it's best to ensure a complex user interface works well won't help your efforts. Only having unit tests when an application runs with different interconnecting systems like message queues, asynchronous background task runners, or separate microservices is also a waste of time.&lt;/p&gt;

&lt;p&gt;Taking the time to learn about what you're testing means it will take longer to get started. In today's fast-paced work environment, it may create disagreements in teams that want to get things done right now. But the time you spend learning now will save a lot of effort and frustration in the long run by focusing on testing the right things while avoiding spending too much time on what doesn't matter. Quality is a trait that builds and compounds over time, not something that's rushed for a quick result.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;One of the first things that automation testers tend to do, especially those new to test automation, is to find tools to start creating a test suite. However, it's better to learn about what you're testing before you learn what you'll use to test.&lt;/p&gt;

&lt;p&gt;Before reaching for the tools, start by learning all you can about how your application under test works. You have to learn how the average end-user interacts with the app to spot which areas are prime candidates for testing. This information will give you a starting point for your test plan.&lt;/p&gt;

&lt;p&gt;Beyond the user interface, you should also take the time to learn the underlying components that make the app run smoothly. You don't have to know how everything works, but being aware of what's happening beneath the surface will give you more information on what to test.&lt;/p&gt;

&lt;p&gt;Finally, take the time to talk with others outside of the testing team. Others who have more hands-on experience with the application under test can provide details no one else can give. If you stay inside your bubble, you'll miss out on this invaluable insight that can make your testing efforts easier.&lt;/p&gt;

&lt;p&gt;You'll need tools eventually, but jumping in with tools first can lead you to pick something that you'll end up fighting against the whole way. Once you know what to test and which areas to automate, it becomes a lot easier to figure out which tools to use and how to implement them into your workflow.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What's your strategy when starting your automation testing efforts in a new project? Share your experiences in the comments below!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>qa</category>
      <category>qualityassurance</category>
      <category>softwaretesting</category>
    </item>
    <item>
      <title>How to Send Your TestCafe Test Results to TestRail</title>
      <dc:creator>Dennis Martinez</dc:creator>
      <pubDate>Tue, 18 Aug 2020 11:00:00 +0000</pubDate>
      <link>https://dev.to/dennmart/how-to-send-your-testcafe-test-results-to-testrail-228j</link>
      <guid>https://dev.to/dennmart/how-to-send-your-testcafe-test-results-to-testrail-228j</guid>
<description>&lt;p&gt;If you're working among a group of testers at a company, your team is most likely using a test case management tool. These tools are indispensable for testing teams. A useful test case management tool provides a single source for the organization's test plans, allows the team to collaborate on creating and maintaining test cases, and lets everyone see reports of your test runs.&lt;/p&gt;

&lt;p&gt;In my experience, these tools work great with manual testing scenarios, as they provide testers with all they need to do their work. However, it's a bit tricky to get these systems in sync with automated testing. Sometimes the test case management tool doesn't provide a simple way to integrate external tools. Other times, your test automation framework makes it difficult to use its results elsewhere.&lt;/p&gt;

&lt;p&gt;Thankfully, most modern test case management tools provide many ways to hook up third-party tools, such as test automation frameworks. For instance, a web-based test case management tool can have an API to create a new test run, aggregate your test results, and close the run. This way, any system can manage this information without the need to do it manually.&lt;/p&gt;

&lt;p&gt;One such tool with a robust API is &lt;a href="https://www.gurock.com/testrail/"&gt;TestRail&lt;/a&gt;. It's one of the most popular test case management tools out there. Given its popularity, chances are your preferred automated testing framework has a library or plugin that sends your test run details to TestRail.&lt;/p&gt;

&lt;p&gt;In this article, I'll show you how to use TestCafe to log your test runs in TestRail automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  The need to track automated tests in TestRail
&lt;/h2&gt;

&lt;p&gt;In my previous job, the testing team used TestRail to keep track of their testing work. Their test plans and cases were detailed in the system, giving the rest of the team visibility into how they ran tests for our projects. It also helped non-QA team members like developers and project managers identify areas of risk.&lt;/p&gt;

&lt;p&gt;Initially, QA performed most of their tests manually, and they would use TestRail as their guide. As the company grew and we had more projects to work on, this approach didn't scale very well. Some projects began to suffer, quality-wise, because there wasn't much time for the team to do what they needed. That's where we started to introduce more automated tests into the mix.&lt;/p&gt;

&lt;p&gt;When the team began to write end-to-end tests with &lt;a href="https://devexpress.github.io/testcafe/"&gt;TestCafe&lt;/a&gt;, we wanted to acknowledge those automated tests alongside the team's manual testing efforts. At first, the team went through the results for the automated test suite and manually marked the results for each test case in TestRail. Obviously, doing this isn't the best use of a tester's time.&lt;/p&gt;

&lt;p&gt;As more test automation got introduced to our projects, the testing team wanted to find a way to keep TestRail synchronized with our automated test runs since it took too much time to keep this information up to date on all relevant systems. We noticed that someone created a TestCafe reporter called &lt;a href="https://github.com/jiteshsojitra/testcafe-reporter-html-testrail"&gt;testcafe-reporter-html-testrail&lt;/a&gt; that handles this for you.&lt;/p&gt;

&lt;p&gt;After getting it set up, it did exactly what we needed. The reporter automatically opened a new test run on TestRail and sent over the results for our test cases. It worked great for the testers, who could focus on manual testing or verifying reported failures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Making sure TestRail is ready to receive your TestCafe test results
&lt;/h2&gt;

&lt;p&gt;Before showing how the integration between TestRail and TestCafe works, let's make sure TestRail is set up correctly. In the examples for this article, I'll use a TestRail Cloud account, which I set up for the TeamYap application test plan. The TestRail account has a single repository for TeamYap, and I added different sections for each test case, mirroring the structure of the automated tests.&lt;/p&gt;

&lt;p&gt;Keep in mind that the section and test case names don't matter for the integration between TestRail and TestCafe. The main thing you'll need is the &lt;em&gt;Test Case ID&lt;/em&gt;, which you'll see in use later in this article. What matters is that you have enough test cases written in TestRail to match the automated tests written with TestCafe.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--guaDBqgk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/testrail_test_cases.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--guaDBqgk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/testrail_test_cases.png" alt="How to Send Your TestCafe Test Results to TestRail"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, you need to enable TestRail's API to allow TestCafe to send its results to your account. This setting is not enabled by default. To enable the TestRail API, log in to your account as an administrator, go to &lt;em&gt;Administration&lt;/em&gt; (in the upper-right corner of your dashboard), click on &lt;em&gt;Site Settings&lt;/em&gt;, and select the &lt;em&gt;API&lt;/em&gt; section (the last icon in the navigation). Check the box labeled "Enable API" and save the settings. You don't need to enable session authentication for the API, so you can leave it unchecked.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mDEOdOvG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/testrail_enable_api_settings.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mDEOdOvG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/testrail_enable_api_settings.png" alt="How to Send Your TestCafe Test Results to TestRail"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After setting up a project with your test cases and enabling the API, your TestRail account is ready to accept your automated test results. The setup process detailed here is for TestRail Cloud. TestRail also allows you to install an instance of the application on your own servers instead of using a cloud-based account. I haven't used TestRail Server, but if your organization uses it instead of TestRail Cloud, I assume the setup process is the same.&lt;/p&gt;
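&lt;p&gt;One way to confirm the API is enabled is to hit an endpoint directly. The sketch below builds the URL for TestRail's &lt;code&gt;get_projects&lt;/code&gt; endpoint; the hostname and credentials are placeholders for your own instance:&lt;/p&gt;

```shell
# Build the URL for TestRail's get_projects API endpoint.
TESTRAIL_HOST="https://devtester1.testrail.io"
API_URL="${TESTRAIL_HOST}/index.php?/api/v2/get_projects"
echo "$API_URL"

# Query it with HTTP basic auth (TestRail email plus password or API key).
# A JSON list of projects means the API is on; an error means it isn't:
# curl -s -u "you@example.com:your-api-key" -H "Content-Type: application/json" "$API_URL"
```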

&lt;h2&gt;
  
  
  Setting up the TestCafe reporter for publishing test runs to TestRail
&lt;/h2&gt;

&lt;p&gt;This article's tests come from a TestCafe test suite repository using the same tests used in my book, &lt;a href="https://testingwithtestcafe.com/"&gt;End-to-End Testing with TestCafe&lt;/a&gt;. The repo contains 11 end-to-end tests covering functionality for the TeamYap application, which was built explicitly to give readers hands-on learning opportunities with TestCafe.&lt;/p&gt;

&lt;p&gt;Installing the TestRail reporter for a TestCafe test suite is simple. Like most TestCafe reporters, it's available as a plugin that you can install using npm. However, there is a caveat: &lt;em&gt;The&lt;/em&gt; &lt;a href="https://www.npmjs.com/package/testcafe-reporter-html-testrail"&gt;&lt;em&gt;latest version of the reporter plugin&lt;/em&gt;&lt;/a&gt; &lt;em&gt;doesn't work.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I reached out to the plugin owner to ask why the plugin doesn't work. He told me that the current version is meant for his organization's internal use and no longer works with TestRail, with no plans to fix it. The source code for the latest version of the plugin is private, so I couldn't lend him a hand to correct the problem. I don't understand his reasoning for not forking the project into a private repo, or how he gained ownership of the npm package, but that's a separate topic.&lt;/p&gt;

&lt;p&gt;In the meantime, the workaround is to install an older version of the plugin. If you attempt to use the current version, it prints out a warning message asking you to use version 2.0.6 of the plugin for TestRail functionality. However, any 2.0 version will work. For this project, I'll use &lt;a href="https://www.npmjs.com/package/testcafe-reporter-html-testrail/v/2.0.8"&gt;version 2.0.8&lt;/a&gt; of the plugin, which can be installed with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--save&lt;/span&gt; testcafe-reporter-html-testrail@2.0.8
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once the reporter plugin is installed for your TestCafe project, you can use the reporter when running your tests. Before you can do that, you still need some additional setup.&lt;/p&gt;

&lt;p&gt;The reporter plugin uses environment variables to configure the integration with TestRail, and your TestCafe test names need to follow a specific format to link them with a TestRail test case. Below, you'll find the different settings you need to set up the reporter.&lt;/p&gt;

&lt;h3&gt;Using environment variables to configure your TestRail credentials&lt;/h3&gt;

&lt;p&gt;The reporter plugin uses certain environment variables for the TestRail API credentials. You need to set the following environment variables if you want to send your TestCafe test run results to TestRail.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;TESTRAIL_ENABLE&lt;/code&gt;: This variable accepts a boolean value (&lt;code&gt;true&lt;/code&gt; or &lt;code&gt;false&lt;/code&gt;) to tell the reporter plugin to send your test run results to TestRail. By default, it's set to &lt;code&gt;false&lt;/code&gt;, which is useful to avoid publishing your test results when running tests locally. Usually, this variable is set to &lt;code&gt;true&lt;/code&gt; in a continuous integration environment.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TESTRAIL_HOST&lt;/code&gt;: This variable sets the host of your TestRail instance. For example, the TestRail account used in this article is hosted at &lt;code&gt;https://devtester1.testrail.io/&lt;/code&gt;. Keep in mind that it's important to include the full URL, including the protocol (&lt;code&gt;https://&lt;/code&gt;, in this example).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TESTRAIL_USER&lt;/code&gt;: Here, you'll specify the username of a TestRail account with access to your project. When the reporter plugin sends the test run results to TestRail, they'll get created under the account specified here.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TESTRAIL_PASS&lt;/code&gt;: You can use either the TestRail account password for the user specified in the previous environment variable, or use an API key created for the account. It's recommended to use an API key since you can limit access to your TestRail account and easily revoke it.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PROJECT_NAME&lt;/code&gt;: This variable indicates the TestRail project name containing the test cases for your automated test suite.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PLAN_NAME&lt;/code&gt;: TestRail uses the concept of &lt;a href="https://www.gurock.com/testrail/docs/user-guide/getting-started/walkthrough#Testplansandconfigurations"&gt;"Test Plans"&lt;/a&gt; to manage multiple test runs if you need to run your tests on different configurations. This variable allows you to specify a plan name. If you don't set a plan name, it will publish your test results under the &lt;code&gt;TestAutomation_1&lt;/code&gt; test plan by default.&lt;/li&gt;
&lt;/ul&gt;
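
&lt;p&gt;For example, in a continuous integration environment, you could export these variables before running the test suite. The values below are placeholders for illustration - substitute your own account and project details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;export TESTRAIL_ENABLE=true
export TESTRAIL_HOST=https://devtester1.testrail.io/
export TESTRAIL_USER=your-user@example.com
export TESTRAIL_PASS=your-api-key
export PROJECT_NAME="Your TestRail Project"
export PLAN_NAME="TestCafe Test Runs"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;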

&lt;p&gt;If you're having issues publishing your TestCafe results to TestRail, verify that you set all of these environment variables correctly.&lt;/p&gt;

&lt;h3&gt;Formatting your TestCafe test descriptions&lt;/h3&gt;

&lt;p&gt;To link a specific TestCafe test case to a TestRail test case, the reporter plugin requires a specific formatting of the test name in TestCafe:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;Test Type&amp;gt; | &amp;lt;Test Name&amp;gt; | &amp;lt;TestRail Test Case ID&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Your test code goes here as usual.&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Replace the following segments of the example above with your test case details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;&amp;lt;Test Type&amp;gt;&lt;/code&gt;: The type of test (like "smoke" or "regression").&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;&amp;lt;Test Name&amp;gt;&lt;/code&gt;: The name of your test, a description of what it does.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;&amp;lt;TestRail Test Case ID&amp;gt;&lt;/code&gt;: The test case ID from TestRail that will link with your TestCafe test.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Of those three segments, the most important one is the TestRail Test Case ID - it &lt;strong&gt;must&lt;/strong&gt; match a test case for the project in your TestRail account. The other two segments (test type and test name) don't affect the integration between TestCafe and TestRail. However, you should still ensure this information accurately describes your test.&lt;/p&gt;

&lt;p&gt;Here's an example from the test repo used in this article. This test is a smoke test for verifying that the login functionality works correctly for the TeamYap application, where the TestRail test case ID is "C1":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Smoke | User with valid account can log in | C1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Test code...&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;Running tests and sending results to TestRail&lt;/h3&gt;

&lt;p&gt;With the necessary environment variables set and your test names adequately formatted, all that's left is to run your tests and have the results sent automatically to TestRail.&lt;/p&gt;

&lt;p&gt;The only thing you need to do is to specify the reporter when executing the TestCafe test suite. You can easily do this from the command line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;testcafe chrome:headless &lt;span class="k"&gt;*&lt;/span&gt;_test.js &lt;span class="nt"&gt;--reporter&lt;/span&gt; html-testrail
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This command will run your tests as you would expect. The output will be different from the default TestCafe reporter, but it will display the results of each test similarly.&lt;/p&gt;

&lt;p&gt;If the &lt;code&gt;TESTRAIL_ENABLE&lt;/code&gt; environment variable is set to &lt;code&gt;true&lt;/code&gt; on your system, the reporter publishes the results to your TestRail account after the test suite completes its execution. If your TestRail account credentials are correct, you'll see the details for the newly created TestRail test run in the output.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8aObnZUq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/testrail_testcafe_submit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8aObnZUq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/testrail_testcafe_submit.png" alt="How to Send Your TestCafe Test Results to TestRail"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The project and test plan names used are the ones you set up in the environment variables (as explained above). The name of the test run on TestRail is a string that includes a timestamp of the test run and the browser details provided by TestCafe.&lt;/p&gt;

&lt;p&gt;If you go to your TestRail account, you'll see your newly-created test run under your test plans and the results for each test. Each test gets automatically linked to the test case, so over time, you can generate reports, view the history of test results, and export the data for your needs. You can then perform your usual TestRail duties like closing the test plan or re-assigning failing tests to other team members.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tNP4-yzB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/testrail_test_run-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tNP4-yzB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/08/testrail_test_run-1.png" alt="How to Send Your TestCafe Test Results to TestRail"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One thing to keep in mind is that you don't need to match the exact number of test cases between TestRail and TestCafe. For example, if you don't format the test name in TestCafe to include a TestRail test case ID, you can still execute it. It'll run, but the result won't show up in the test run on TestRail.&lt;/p&gt;

&lt;p&gt;However, you will need to ensure that any test case IDs in the TestCafe test name match a test case ID on TestRail. For example, if you format a TestCafe test name to use a test case ID of "C1234", you &lt;em&gt;must&lt;/em&gt; have a TestRail test case with that ID. Otherwise, the reporter will raise an error and send incomplete test results to TestRail.&lt;/p&gt;

&lt;h2&gt;This automatic integration isn't for all teams&lt;/h2&gt;

&lt;p&gt;When the organization I worked for used this approach, it worked well for their needs since they were at the beginning stages of creating the test suite. All they wanted was for the TestCafe test runs to get included in TestRail and avoid having to go into TestRail to manually add the scenarios covered by the project's end-to-end tests.&lt;/p&gt;

&lt;p&gt;However, this automatic integration between TestCafe and TestRail might not work for your team in a few scenarios. Here are a few situations where it's probably best to run your automated tests and update TestRail separately:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have lots of test cases in TestRail, but few automated tests covering them in TestCafe. Automatically creating test runs in TestRail for just a handful of automated tests isn't worth the effort unless you plan to expand your test automation coverage soon.&lt;/li&gt;
&lt;li&gt;You need more fine-grained organization around test cases in TestRail, such as setting time estimates and priorities. You can't set any of this information from your TestCafe tests using the reporter plugin, so your team will have to set these by hand.&lt;/li&gt;
&lt;li&gt;You need to run the same tests in different environments that aren't supported or available in TestCafe. Each time you execute your TestCafe tests, it creates a test run under the specified test plan name. If your project needs to run your test cases in an environment TestCafe doesn't support or can't access, you'll need to create these test runs in TestRail manually.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The plugin's reliance on formatting the TestCafe test names also isn't ideal. The names can look messy since you have to include the type of test and the test case ID along with the description, and it's easy to accidentally format a name incorrectly. Using metadata would be ideal for specifying TestRail-related data like the test case ID, assignee, and more. Unfortunately, the reporter plugin only checks the TestCafe test name for its integration.&lt;/p&gt;
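
&lt;p&gt;For comparison, TestCafe already provides a &lt;code&gt;meta&lt;/code&gt; method for attaching metadata to tests, which would keep test names clean. Here's a hypothetical sketch of what the integration could look like if the plugin read metadata instead of the test name (it currently doesn't, and the metadata keys here are made up for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Hypothetical: the reporter plugin ignores test metadata today.
test.meta({ type: "smoke", testRailId: "C1" })(
  "User with valid account can log in",
  async t =&gt; {
    // Test code...
  }
);
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;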

&lt;p&gt;It all boils down to how much effort your team needs to maintain your TestRail account and TestCafe test suite. Sometimes these integrations cost more time than they save. Pay attention to whether these approaches make your work easier or become a burden on your team.&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;Test case management tools are great to keep you and your team up to date with test plans for your projects. However, getting your test automation integrated with these systems often requires additional work to connect your automated test runs with your defined test cases.&lt;/p&gt;

&lt;p&gt;If you use TestRail to manage your test cases and have a TestCafe test suite, you can easily keep your test case management tool synchronized. Thanks to TestCafe's easy-to-use plugin system, you can add a reporter called &lt;code&gt;testcafe-reporter-html-testrail&lt;/code&gt; to handle this integration for you. All you need to do is configure a few environment variables and format your TestCafe test names to include the TestRail test case ID.&lt;/p&gt;

&lt;p&gt;When correctly set up, the reporter plugin automatically submits your TestCafe test run to the configured TestRail account, linking each test to its result. Now you can manage your automated test results in your test case management tool with just some up-front setup.&lt;/p&gt;

&lt;p&gt;While this approach might not work for all testing teams, it does help eliminate the busywork of entering your automated test results into TestRail by hand. If that's all your organization needs, this approach will save you lots of time while keeping your entire team on the same page with your test plans.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;How have you managed your test automation with your test case management tools? Share your experiences in the comments section!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>qa</category>
      <category>qualityassurance</category>
      <category>softwaretesting</category>
    </item>
    <item>
      <title>Running End-To-End Tests Without Blocking Your Team</title>
      <dc:creator>Dennis Martinez</dc:creator>
      <pubDate>Tue, 11 Aug 2020 11:00:00 +0000</pubDate>
      <link>https://dev.to/dennmart/running-end-to-end-tests-without-blocking-your-team-5h4a</link>
      <guid>https://dev.to/dennmart/running-end-to-end-tests-without-blocking-your-team-5h4a</guid>
      <description>&lt;p&gt;When you spend some time working on test automation, chances are you'll reach a point where your test suite is more of a hassle than a help. It takes longer and longer to execute your tests. The feedback loop between developers committing code and receiving the automated test results grows larger every day. Eventually, your team will begin to ignore your tests because they won't want to wait around.&lt;/p&gt;

&lt;p&gt;If you reach this point, your first inclination might be to take the "scorched earth" approach and start again from scratch. I know I've had plenty of days with that thought. However, you can take steps to salvage the remnants of your test suite and get it back to a state where it doesn't block your team.&lt;/p&gt;

&lt;p&gt;This article details my recent experience with an end-to-end test suite that slowed down the entire team and how we took a different road for executing those tests to keep the team's workflow moving along.&lt;/p&gt;

&lt;h2&gt;The Initial Honeymoon Phase&lt;/h2&gt;

&lt;p&gt;At the beginning of 2019, the previous company I worked at started to get serious about automating end-to-end tests for our projects. Previous attempts were made to implement automated UI testing, but those efforts never went far. Most projects in our organization relied on manual testing from the in-house QA team.&lt;/p&gt;

&lt;p&gt;The teams for some of those projects had issues finishing their allocated work for the sprint on time. The main problem they faced was a slow regression testing phase. It took the QA team too much time to perform a full regression test, and every time they found a bug, the delays would increase until the team missed the delivery deadline.&lt;/p&gt;

&lt;p&gt;The organization wanted to automate the repetitive work the QA team did for each sprint, giving testers more time to perform higher-value tasks like exploratory testing. We also wanted to get our projects to a point where we could implement continuous delivery. End-to-end testing would help us build confidence in our applications to release new changes to production automatically.&lt;/p&gt;

&lt;p&gt;We got to work on the initial implementation for one project. We decided to use the TestCafe testing framework since the project made extensive use of JavaScript, and we wanted part of the development team to pitch in because they had the product knowledge. The organization also wanted developers to pair with QA team members who wanted to learn more about automated testing.&lt;/p&gt;

&lt;p&gt;After a few weeks, the development and testing teams managed to automate a good chunk of the regression tests, and others helped integrate it into the existing workflow. Whenever the team pushed new changes to the code repository, it would execute the end-to-end tests after running the existing automated test suite, including unit and functional tests.&lt;/p&gt;

&lt;p&gt;From the start, the team began seeing how including these end-to-end tests would help the project. Instead of wasting time on the mind-numbing, repetitive work that would often get pushed to the end of the development cycle, the team had part of it taken care of for them. It began to free the testing team to perform other tasks, while developers got more feedback on their changes.&lt;/p&gt;

&lt;p&gt;However, not everything was rosy. We started to feel lots of bumps in this automated testing road.&lt;/p&gt;

&lt;h2&gt;The Hostility Phase&lt;/h2&gt;

&lt;p&gt;As the team continued to expand the automated test suite and increase the coverage for the application, an all-too-common issue reared its ugly head - the tests were slowing down the entire team.&lt;/p&gt;

&lt;p&gt;End-to-end tests tend to be slow and flaky, and our initial attempts at writing these tests were no exception. Since this was the first time many on the team were doing any test automation, speed and reliability were missing in the test suite. Builds were running at least five times slower than before and would continually fail for no reason.&lt;/p&gt;

&lt;p&gt;One of the mistakes we made as a team was attempting to automate too much, too quickly. In our quest to automate as much of the regression test cases as possible, we also built many extensive tests. These tests performed too many steps to cover as much functionality as possible, which led to slow performance and high flakiness.&lt;/p&gt;

&lt;p&gt;Because of these extensive, unstable tests, the feedback loop between the time developers pushed out a code change and when they received the test results increased every day. Throw in the rise in build failures, and you have an unhappy team - and rightfully so.&lt;/p&gt;

&lt;p&gt;The development team didn't want the end-to-end tests to run after every code commit they pushed to the repository. We changed the workflow to run these tests only when specific branches were updated, like the release candidate branch or the main branch that we used to deploy to production.&lt;/p&gt;

&lt;p&gt;This move helped minimize the build times during development. However, it was merely a placebo because running the tests infrequently created additional problems. Regressions were caught much later in the development cycle - often just before the project's release date. Eventually, it got to a point where the entire team ran into similar delays as they had before implementing the automated test suite.&lt;/p&gt;

&lt;p&gt;Some on the team wanted to cut our losses, scrap the end-to-end tests, and get back to manual testing with additional resources. However, we didn't give up and put our heads together to find a way through.&lt;/p&gt;

&lt;h2&gt;The Adjustment Phase&lt;/h2&gt;

&lt;p&gt;As mentioned earlier, one of the issues we had with the test suite was that most test cases performed too many steps. We also noticed that some tests executed almost the same steps every time, changing the data slightly or performing different assertions. These tests felt like duplicate work, so we did our best to trim unnecessary test cases.&lt;/p&gt;

&lt;p&gt;You can only go so far with this approach, depending on your application. In our case, it was a rather complex application with many different scenarios. The QA team had lots of different tests that they felt were necessary to run, even if repetitive, because they had experienced problems in the past when skipping those areas.&lt;/p&gt;

&lt;p&gt;With development wanting the builds to run faster while testing wanted to be thorough, we eventually reached a compromise. Instead of deleting tests, the QA team sat down to classify the end-to-end tests by type and priority. The team tagged each test with a label, like "smoke" or "sanity", and either high, medium, or low priority.&lt;/p&gt;

&lt;p&gt;With this information in our test code, we could set our continuous integration system to run only the higher-priority smoke tests after code changes. These tests took only 25% of the time to run compared to running the entire test suite, which was acceptable to avoid blocking the development team for too long.&lt;/p&gt;
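
&lt;p&gt;As an aside, if this classification lives in TestCafe's test metadata, the &lt;code&gt;--test-meta&lt;/code&gt; flag can run just the matching subset. A sketch of what a CI command for the fast path could look like (the paths and metadata keys here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;testcafe chrome:headless tests/ --test-meta type=smoke,priority=high
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;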

&lt;p&gt;For the remainder of the tests, we configured the continuous integration system to trigger a nightly build that ran the entire automated test suite while most of the team was off the clock. If the test suite failed, the system sent a notification to the project's Slack channel so the team could catch it when they returned the next business day.&lt;/p&gt;

&lt;p&gt;I found this split to be the best of both worlds. The development team didn't get stuck waiting for the test suite's results after pushing out new code, while the testing team was able to keep the tests they built without sacrificing thoroughness.&lt;/p&gt;

&lt;h2&gt;Much better, but there's still room for improvement&lt;/h2&gt;

&lt;p&gt;While this worked well, these changes still weren't perfect. We ran into our fair share of issues throughout the project while building the automated test suite.&lt;/p&gt;

&lt;p&gt;Despite splitting up the end-to-end tests and only running a subset throughout the day, the team still had to wait a bit too long sometimes for the tests triggered by their changes to run. In the days leading up to a release, the team's activity tended to spike, and the continuous integration service had to queue up multiple builds at a time. The solution to this problem is often to throw money at it by paying for more build capacity.&lt;/p&gt;

&lt;p&gt;Another issue that surfaced leading up to a deadline was an increase in the frequency of regressions occurring. Of course, it's great that the test suite caught these problems before they shipped to production. However, since many of the regressions were found during the nightly builds, it would disrupt the team's day since they had to deal with it. This problem can get solved by running the full test suite more often, although we struggled to find a good way to balance build times and acceptable feedback loops.&lt;/p&gt;

&lt;p&gt;I also noticed that we needed to be extra-vigilant about how to classify any new tests. As the team built new functionality, they also created new automated test cases. However, many of these new tests got classified as high-priority smoke tests, and it wasn't long before the build times after each commit crept up to unacceptable levels. It's good to occasionally review your existing test suite to either reclassify tests or cull them if they're no longer necessary or useful.&lt;/p&gt;

&lt;p&gt;Still, even with these occasional troubles, the automated end-to-end test suite massively improved the testing team's efficiency. Within a few months of introducing test automation to the project, the time needed to perform regression testing in each sprint was cut nearly in half, and fewer bugs slipped through the cracks into production.&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;Automated testing is an excellent way to speed up your team by freeing them from the repetitive nature of regression testing. Instead of taking their time to go through the same test cases repeatedly, automating these steps can let them do other kinds of work to help boost the project's quality.&lt;/p&gt;

&lt;p&gt;However, automation is not a silver bullet, and if you're not careful, you may run into plenty of issues. When starting with test automation, teams tend to want to automate everything through the UI and build lots of end-to-end tests. This tactic isn't sustainable. Eventually, you'll end up with a slow and unreliable test suite that no one on the team wants to use.&lt;/p&gt;

&lt;p&gt;If you reach this point, you don't have to scrap everything and start again. You can take a few steps to change how your automated test suite behaves and avoid slowing down your project and your team.&lt;/p&gt;

&lt;p&gt;A quick thing you can do with your existing test suite is to determine which tests should run frequently and which you can defer at a later time. Not every test should be a high-priority scenario. If you can extract a subset of tests that give you a high degree of confidence that the application is working well, you can set up your workflow to execute them first.&lt;/p&gt;

&lt;p&gt;With the remainder of the tests, you can take advantage of test automation tools like continuous integration systems to run them when it doesn't interrupt the team's workday. Any long-running tests or tasks that you can execute at a time that doesn't block anyone will help avoid any bottlenecks during the development and testing cycles.&lt;/p&gt;

&lt;p&gt;The key to running end-to-end tests is to automate as much as you can to give you and your team the freedom to worry about other important issues for your project. Even if it's not perfect, automation will increase the quality of your applications while giving you the time to do your best work.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;How do you deal with long-running builds or test suites that the rest of the team doesn't like to execute? Let me know in the comments section below!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>qa</category>
      <category>qualityassurance</category>
      <category>softwaretesting</category>
    </item>
    <item>
      <title>Getting Drone CI Up and Running With TestCafe Quickly</title>
      <dc:creator>Dennis Martinez</dc:creator>
      <pubDate>Tue, 21 Jul 2020 11:00:00 +0000</pubDate>
      <link>https://dev.to/dennmart/getting-drone-ci-up-and-running-with-testcafe-quickly-2h3a</link>
      <guid>https://dev.to/dennmart/getting-drone-ci-up-and-running-with-testcafe-quickly-2h3a</guid>
      <description>&lt;p&gt;A few weeks ago, I found myself looking for a continuous integration platform to self-host on my own servers. I have a few smaller projects (which software developer doesn't?) that I don't update frequently but want to ensure they still work whenever they get updated.&lt;/p&gt;

&lt;p&gt;Typically, my continuous integration service of choice is &lt;a href="https://circleci.com/"&gt;CircleCI&lt;/a&gt;. I'm very familiar with how to use it effectively with all sorts of projects. However, I wanted to look for a self-hosted solution for these smaller projects because I had some spare computing power to spin up a CI solution. And while CircleCI has a free plan, I didn't want to rely on having enough build time whenever I focused on these projects in the future.&lt;/p&gt;

&lt;p&gt;The most popular self-hosted CI server out there is &lt;a href="https://www.jenkins.io/"&gt;Jenkins&lt;/a&gt;, and with good reason. It's been around for a long time - I used its previous incarnation, Hudson, back in 2007 - and it generally works. But I didn't want to use Jenkins for a couple of reasons. The system feels exceptionally dated, as its interface hasn't changed visually in a long time. It also relies on a plugin system that leads to issues whenever plugins are updated. I remember Jenkins not being the most comfortable system to maintain.&lt;/p&gt;

&lt;p&gt;Last year I stumbled upon a relatively new continuous integration system called &lt;a href="https://drone.io/"&gt;Drone CI&lt;/a&gt;. It looked interesting, but I didn't have any use for it, so I never got a chance to try it. Now that I wanted to spin up my own CI instance, I gave Drone CI another shot. The service was straightforward to set up and works great. I'm pretty happy with the results.&lt;/p&gt;

&lt;p&gt;This week, I wanted to write about the process of setting up an instance of Drone CI using Docker. To show the CI service in use and how easy it is to run your automated tests with it, I set it up to run a TestCafe test suite every time the code repository - hosted on GitHub - gets updated.&lt;/p&gt;

&lt;h2&gt;Overview of Drone CI's architecture&lt;/h2&gt;

&lt;p&gt;You can configure Drone CI in many different ways, depending on your code environment and preferred architecture. Regardless of how you configure it, a typical Drone CI installation consists of two main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Drone CI server:&lt;/strong&gt; The server is responsible for configuring your code repositories, handling communication with the repo, and dealing with users for logging into the CI service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Drone CI runner:&lt;/strong&gt; The runner polls the Drone CI server frequently to check if there are any new jobs to process. Once the server has a job, the runner handles the execution of your pipeline.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To execute builds, you need to run both the server and at least one runner. You only need one instance of the Drone CI server, connected to your preferred source code management provider, but the server needs at least one runner to process jobs as it receives them. You can also configure multiple runners if you need to run your pipeline in different ways, but this article won't cover that scenario.&lt;/p&gt;

&lt;p&gt;Both the server and runner use Docker images to start their respective instances. As far as I can tell, there's no alternative form of installation for Drone CI. I consider this a benefit since it makes setting up and updating both the server and the runners a breeze.&lt;/p&gt;
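
&lt;p&gt;To give a sense of what this looks like, here's a minimal sketch of starting the server, based on the Drone CI documentation. The credential values are placeholders, and you'd start a runner with a similar &lt;code&gt;docker run&lt;/code&gt; command pointed at the server's host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run \
  --volume=/var/lib/drone:/data \
  --env=DRONE_GITHUB_CLIENT_ID=your-client-id \
  --env=DRONE_GITHUB_CLIENT_SECRET=your-client-secret \
  --env=DRONE_RPC_SECRET=your-shared-secret \
  --env=DRONE_SERVER_HOST=ci.dev-tester.com \
  --env=DRONE_SERVER_PROTO=https \
  --publish=80:80 --publish=443:443 \
  --restart=always --detach=true --name=drone \
  drone/drone:1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;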

&lt;h2&gt;Setting up the Drone CI server with GitHub&lt;/h2&gt;

&lt;p&gt;For this article, I'll set up Drone CI to connect to GitHub, since it's where most of my code repositories live. Drone CI also supports most major source code management providers like &lt;a href="https://about.gitlab.com/"&gt;GitLab&lt;/a&gt; and &lt;a href="https://bitbucket.org/product/"&gt;Bitbucket&lt;/a&gt;, as well as some smaller self-hosted Git solutions like &lt;a href="https://gitea.io/en-us/"&gt;Gitea&lt;/a&gt; and &lt;a href="https://gogs.io/"&gt;Gogs&lt;/a&gt;. Check out the &lt;a href="https://docs.drone.io/server/overview/"&gt;Drone CI documentation&lt;/a&gt; to learn how to use these other providers.&lt;/p&gt;

&lt;p&gt;Before setting up the server, you first need to create an OAuth application on the GitHub account containing the repositories you want Drone CI to access. The OAuth application allows you to sign in to your Drone CI instance using your GitHub account and gives the instance access to your code. Follow &lt;a href="https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/"&gt;the instructions in GitHub's developer documentation&lt;/a&gt; to create a new OAuth application.&lt;/p&gt;

&lt;p&gt;Setting up the OAuth application on GitHub is simple. However, you need to set the proper URLs to point at your Drone CI server:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Homepage URL:&lt;/strong&gt; This URL lets you access the main Drone CI interface after setting up the server and the runner. You can set it to any fully qualified domain name you'll use for your instance of Drone CI. In the example image below, the Drone CI instance is set up at &lt;code&gt;https://ci.dev-tester.com/&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authorization Callback URL:&lt;/strong&gt; This URL is where GitHub redirects you after authorizing your account to share data with your Drone CI instance. This URL must be the same domain as the Homepage URL with the &lt;code&gt;/login&lt;/code&gt; endpoint appended. In the example below, you can see the Authorization Callback URL is &lt;code&gt;https://ci.dev-tester.com/login&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1accd3FL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/testcafe_drone_ci_oauth_setup-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1accd3FL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/testcafe_drone_ci_oauth_setup-2.png" alt="Getting Drone CI Up and Running With TestCafe Quickly"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you create your OAuth application, you'll land on the app's settings page on GitHub. Here, you'll see two keys: &lt;strong&gt;Client ID&lt;/strong&gt; and &lt;strong&gt;Client Secret&lt;/strong&gt;. You'll use these keys when starting the Drone CI server, so copy them now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2h-qBPMx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/testcafe_drone_ci_server_oauth.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2h-qBPMx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/testcafe_drone_ci_server_oauth.png" alt="Getting Drone CI Up and Running With TestCafe Quickly"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There's one more step needed before running the Drone CI server. For any Drone CI runner to communicate with your server, you need to generate a secret string to serve as authentication between the server and runners. Any runner attempting to poll the server must have the same secret configured, which prevents rogue runners on other networks from gaining access to your repositories.&lt;/p&gt;

&lt;p&gt;The secret can be any string you want to use, as long as it's randomized and not easy to guess. A quick way to generate a secure string on the command line is with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl rand &lt;span class="nt"&gt;-hex&lt;/span&gt; 16
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Finally, it's time to get your instance of the Drone CI server up and running in your system. As mentioned earlier, Drone CI uses Docker, so all you need is to run the Drone CI server image with a volume to store the server data locally and a few environment variables for configuration.&lt;/p&gt;

&lt;p&gt;The environment variables used to configure your instance of Drone CI vary, depending on your needs. An excellent place to begin is with this example command I used to start my Drone CI server configured with GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /var/lib/drone:/data &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DRONE_GITHUB_CLIENT_ID&lt;/span&gt;&lt;span class="o"&gt;={{&lt;/span&gt;GITHUB OAUTH CLIENT ID&lt;span class="o"&gt;}}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DRONE_GITHUB_CLIENT_SECRET&lt;/span&gt;&lt;span class="o"&gt;={{&lt;/span&gt;GITHUB OAUTH CLIENT SECRET&lt;span class="o"&gt;}}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DRONE_RPC_SECRET&lt;/span&gt;&lt;span class="o"&gt;={{&lt;/span&gt;GENERATED SECRET&lt;span class="o"&gt;}}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DRONE_SERVER_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ci.dev-tester.com&lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DRONE_SERVER_PROTO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DRONE_TLS_AUTOCERT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DRONE_USER_FILTER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dennmart &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 443:443 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--restart&lt;/span&gt; always &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; drone &lt;span class="se"&gt;\&lt;/span&gt;
  drone/drone:1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This Docker command uses the latest 1.x version (1.9.0, as of this writing) of the official Drone CI server image. The command sets up a volume using the &lt;code&gt;/var/lib/drone&lt;/code&gt; directory on your system and maps it to the &lt;code&gt;/data&lt;/code&gt; directory inside of the Drone CI container. It also publishes ports 80 and 443 for accessing the interface through your web browser. The environment variables in the command configure Drone. I'll go through the ones used in this example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;DRONE_GITHUB_CLIENT_ID&lt;/code&gt;: The value of this variable is the Client ID from the GitHub OAuth application you set up earlier.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DRONE_GITHUB_CLIENT_SECRET&lt;/code&gt;: The value of this variable is the Client Secret from the GitHub OAuth application.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DRONE_RPC_SECRET&lt;/code&gt;: The value of this variable is the generated secret string mentioned earlier in this article. It allows communication between the server and the runner you'll set up later.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DRONE_SERVER_HOST&lt;/code&gt;: Here, you'll set up the domain of your Drone CI server instance, as established in the GitHub oAuth application. Note that you don't need to specify the protocol; you only need the domain.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DRONE_SERVER_PROTO&lt;/code&gt;: In this environment variable, you can set whether you want your public-facing instance of Drone CI to use standard HTTP or secure HTTPS connections. Here, I'm using HTTPS to keep the communication secure.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DRONE_TLS_AUTOCERT&lt;/code&gt;: One useful feature provided by Drone CI is its ability to generate an SSL certificate using Let's Encrypt automatically. Setting this environment variable to &lt;code&gt;true&lt;/code&gt; handles this step for you and configures the Drone CI server to accept secure requests. Certificate generation doesn't occur by default, so Drone CI won't generate one unless you enable this setting.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DRONE_USER_FILTER&lt;/code&gt;: By default, Drone CI sets up a public-facing interface that anyone with a GitHub account can access. While a logged-in user can only see their GitHub repos, you might not want others running builds on your systems. This environment variable limits the GitHub users or organizations that can log in to your Drone CI instance. In this example, I'm only granting access to my personal GitHub account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's all you need to get a running instance of the Drone CI server. The first time you visit the server's URL (the one set up in the GitHub OAuth application), you'll be redirected to GitHub to give your instance of Drone CI permission to access your repos. If you don't get redirected or Drone CI doesn't load in the browser, check the Docker logs on your system by running the command &lt;code&gt;docker logs drone&lt;/code&gt; and verify whether any errors occurred during the setup process.&lt;/p&gt;

&lt;p&gt;Once you grant permission, GitHub redirects you back to your Drone CI server, and you'll be logged in to the main interface. The first time you arrive at this interface, Drone CI spends a couple of minutes synchronizing with your GitHub account to pull the information for your code repositories.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ofIdMbW_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/testcafe_drone_ci_main_interface.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ofIdMbW_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/testcafe_drone_ci_main_interface.png" alt="Getting Drone CI Up and Running With TestCafe Quickly"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the Drone CI runner with Docker
&lt;/h2&gt;

&lt;p&gt;After syncing your GitHub information, you'll see all of your repos and can activate the ones where you want to run builds. However, the Drone CI server won't do anything by itself; you'll need to run a Drone CI runner instance. As mentioned earlier in this article, the runner does the heavy lifting of executing your builds. Without it, the Drone CI server won't have anywhere to send jobs for execution.&lt;/p&gt;

&lt;p&gt;Drone CI has different kinds of runners for executing your builds. Each runner has its pros and cons, depending on the workload you need to use. The &lt;a href="https://docs.drone.io/runner/overview/"&gt;Drone CI documentation&lt;/a&gt; contains advice for when to use or avoid each runner for your projects, so it's well worth exploring which one suits you the best.&lt;/p&gt;

&lt;p&gt;For this article, the simplest runner to use is the Docker runner, since it'll get set up on the same server. Like setting up the Drone CI server, getting the runner set up consists of a Docker command and environment variables for configuration. Here's the command I used to set up my runner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /var/run/docker.sock:/var/run/docker.sock &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DRONE_RPC_PROTO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DRONE_RPC_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ci.dev-tester.com &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DRONE_RPC_SECRET&lt;/span&gt;&lt;span class="o"&gt;={{&lt;/span&gt;GENERATED SECRET&lt;span class="o"&gt;}}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DRONE_RUNNER_CAPACITY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DRONE_RUNNER_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HOSTNAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--restart&lt;/span&gt; always &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; runner &lt;span class="se"&gt;\&lt;/span&gt;
  drone/drone-runner-docker:1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The command to set up the Docker runner uses the latest 1.x version (1.4.0, as of this writing) of the official Drone CI Docker Runner image. The command mounts the Docker daemon socket from your system to the container. This volume mount is essential to allow the runner to spin up Docker instances from within the container when running your builds. It also publishes port 3000 for communication between the server and the runner. The environment variables used in this command are the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;DRONE_RPC_PROTO&lt;/code&gt;: This environment variable sets up the protocol used between the Drone CI server and the runner, either secure (&lt;code&gt;https&lt;/code&gt;) or insecure (&lt;code&gt;http&lt;/code&gt;). It's recommended to use &lt;code&gt;https&lt;/code&gt;, especially if setting up the runner on a different server.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DRONE_RPC_HOST&lt;/code&gt;: The value of this variable is the hostname of the Drone CI server to allow the runner to poll and receive jobs to execute.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DRONE_RPC_SECRET&lt;/code&gt;: This value is the secret string you generated earlier. &lt;strong&gt;It needs to be the same value as the string used when setting up the Drone CI server.&lt;/strong&gt; Otherwise, the runner can't connect to the server to receive jobs.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DRONE_RUNNER_CAPACITY&lt;/code&gt;: With this value, you can specify how many concurrent jobs you want the runner to process. This value depends on your server's capacity and the resources used to run your jobs, so you may need to experiment to find a number that works well for you.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DRONE_RUNNER_NAME&lt;/code&gt;: This variable is an optional setting to give the runner a unique name. The Drone CI server stores this information when it sends a job for processing to know which runner executed the build. Here, it's using the hostname from the system's &lt;code&gt;HOSTNAME&lt;/code&gt; environment variable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With that command, you have the Drone CI runner set up and ready to process any jobs it fetches from the server. To make sure the runner is configured correctly, check the logs using the &lt;code&gt;docker logs runner&lt;/code&gt; command. If you set everything up correctly, you'll see the runner start and ping the Drone CI server successfully:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;time="2020-07-10T06:33:08Z" level=info msg="starting the server" addr=":3000"
time="2020-07-10T06:33:08Z" level=info msg="successfully pinged the remote server"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If the runner can't connect to the server, make sure the server is running and that the &lt;code&gt;DRONE_RPC_SECRET&lt;/code&gt; environment variable has the same value on both the server and the runner. These were the main problems I encountered when first setting up Drone CI.&lt;/p&gt;
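
&lt;p&gt;If you're unsure whether the values match, you can read the environment variables back from the running containers. Here's a quick sketch, assuming the container names &lt;code&gt;drone&lt;/code&gt; and &lt;code&gt;runner&lt;/code&gt; from the commands above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Print the DRONE_RPC_SECRET configured on each container.
# The two values must match for the runner to authenticate.
docker inspect drone --format '{{range .Config.Env}}{{println .}}{{end}}' | grep DRONE_RPC_SECRET
docker inspect runner --format '{{range .Config.Env}}{{println .}}{{end}}' | grep DRONE_RPC_SECRET
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;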

&lt;p&gt;With the Drone CI server and runner set up, the last step is to connect a code repository and set it up to run your builds after every commit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up your repository for Drone CI
&lt;/h2&gt;

&lt;p&gt;For this article, I created &lt;a href="https://github.com/dennmart/drone_ci_testcafe_example"&gt;a repository containing a TestCafe test suite&lt;/a&gt; based on the code samples used in my book &lt;a href="https://testingwithtestcafe.com/"&gt;End-to-End Testing with TestCafe&lt;/a&gt;. This test suite includes 11 end-to-end tests covering an application called &lt;a href="https://teamyap.app"&gt;TeamYap&lt;/a&gt;, built to complement the book. I'll use this repository to demonstrate how to connect Drone CI to a GitHub repo and configure the build process to run the tests.&lt;/p&gt;

&lt;p&gt;First, log in to the Drone CI server interface. Once Drone CI pulls in your GitHub repositories, find the repo you want to use and click the "Activate" link. This link takes you to the settings page for the repo where you can connect it with GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F22SguJx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/testcafe_drone_ci_settings_activate.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F22SguJx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/testcafe_drone_ci_settings_activate.png" alt="Getting Drone CI Up and Running With TestCafe Quickly"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Activating the repo sets up a webhook in the GitHub repo that will send information to Drone CI every time new code gets committed. When Drone CI receives this data, it creates a job for a build and waits for a runner to pick it up.&lt;/p&gt;

&lt;p&gt;After activating the repo, you can spend some time configuring how it will interact with Drone CI. You can set up cron jobs to automatically run builds at predetermined times, or prevent pull requests or forked repos from triggering new builds, among other settings.&lt;/p&gt;

&lt;p&gt;For the TestCafe test suite used in this article, I had to set up a few secrets. Some of the tests use environment variables to avoid putting plain-text passwords in the repository. You can set these secrets through the Drone CI interface and then configure them as environment variables when running the build. In this example, I set up two secrets, called &lt;code&gt;admin_password&lt;/code&gt; and &lt;code&gt;regular_password&lt;/code&gt;. You'll see these secrets in use later in this article.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oSAqO14N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/testcafe_drone_ci_secrets.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oSAqO14N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/testcafe_drone_ci_secrets.png" alt="Getting Drone CI Up and Running With TestCafe Quickly"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last step needed to run your builds on Drone CI is to add a configuration file defining a pipeline to your code repository. A pipeline is the series of steps you want the runner to execute, like running tests or deploying your code to a staging environment. These steps are defined in a YAML file placed in the root of your code repository. By default, this file is called &lt;code&gt;.drone.yml&lt;/code&gt;, but you can change the name on the settings page in Drone CI.&lt;/p&gt;

&lt;p&gt;Inside the &lt;code&gt;.drone.yml&lt;/code&gt; file, you can begin to set up the steps needed to trigger a build successfully. A high-level overview of the steps used to run the TestCafe tests in this example is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pull in the TestCafe Docker image to use as the base for running the tests.&lt;/li&gt;
&lt;li&gt;Set up environment variables from the configured secrets inside Drone CI.&lt;/li&gt;
&lt;li&gt;Execute the tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's the &lt;code&gt;.drone.yml&lt;/code&gt; file used in this repository for running these steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pipeline&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;

&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;testcafe&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;testcafe/testcafe&lt;/span&gt;
  &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/opt/testcafe/docker/testcafe-docker.sh chromium *_test.js&lt;/span&gt;
  &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ADMIN_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;from_secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admin_password&lt;/span&gt;
    &lt;span class="na"&gt;REGULAR_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;from_secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;regular_password&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Here's a brief explanation of each key and value in this file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;kind&lt;/code&gt;: Defines the kind of process Drone CI uses to process the remainder of the YAML in this file. In this case, the &lt;code&gt;pipeline&lt;/code&gt; value tells Drone CI that this file defines a pipeline to execute.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;type&lt;/code&gt;: Defines what pipeline to use for running the steps defined later in the file. Since we're using a Docker runner, it uses the &lt;code&gt;docker&lt;/code&gt; value.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;name&lt;/code&gt;: Defines the name of the pipeline for identifying when running builds. This example uses &lt;code&gt;default&lt;/code&gt;, but it can be anything.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;steps&lt;/code&gt;: Defines a series of steps for execution in the runner. Inside of this key, you'll have an array of pipeline steps that run serially, each with its own configuration settings. This example file defines a single step.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Within the &lt;code&gt;steps&lt;/code&gt; key, you'll find additional configuration settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;name&lt;/code&gt;: Defines the name of the pipeline step, which is useful for seeing each step of the process in the Drone CI interface. You can set any identifiable name in this setting.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;image&lt;/code&gt;: Since this is a Docker pipeline, this defines the Docker image you want to use to execute the commands for the step. Drone CI pulls this image automatically and clones your code inside of the container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;commands&lt;/code&gt;: Defines an array of shell commands that get executed inside of the Docker container. If any command returns a non-zero status after execution, the pipeline step will fail.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;environment&lt;/code&gt;: Allows you to set up any environment variables inside of the Docker container. In this example, the file sets up an environment variable by using the &lt;code&gt;from_secret&lt;/code&gt; key. This key tells Drone CI to fetch the value from the Secrets section in the project's settings.&lt;/li&gt;
&lt;/ul&gt;
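
&lt;p&gt;Because the steps run serially, you can extend the same file with additional steps that run before or after the tests. As a hypothetical sketch (the &lt;code&gt;lint&lt;/code&gt; step and the &lt;code&gt;node&lt;/code&gt; image are examples, not part of this article's repository), a two-step pipeline would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;kind: pipeline
type: docker
name: default

steps:
# Hypothetical first step - fails the build on lint errors.
- name: lint
  image: node:14
  commands:
  - npm ci
  - npm run lint

# The TestCafe step runs only if the previous step succeeds.
- name: testcafe
  image: testcafe/testcafe
  commands:
  - /opt/testcafe/docker/testcafe-docker.sh chromium *_test.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;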

&lt;p&gt;This YAML file is all that's needed to run the TestCafe tests in Drone CI. Commit this file to the root of your code repository and push it to GitHub. Your Drone CI instance will receive a webhook from GitHub and kick off the build.&lt;/p&gt;

&lt;p&gt;Before wrapping up, I wanted to explain a few issues I had to deal with before getting the test suite to execute successfully.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pitfalls encountered when running TestCafe tests on Drone CI using Docker
&lt;/h2&gt;

&lt;p&gt;For these tests, I opted to use the official TestCafe Docker image, which &lt;a href="https://dev-tester.com/get-a-jump-start-on-your-testing-with-testcafe-and-docker/"&gt;I have written about before&lt;/a&gt;. This Docker image has everything needed to execute a TestCafe test suite, so you don't have to spend time setting up dependencies using a different image.&lt;/p&gt;

&lt;p&gt;When you use the Docker image outside of Drone CI, you simply need to specify a browser (either Chromium or Firefox, which are already set up in the image) and the test files. However, you can't run your tests like this in Drone CI. That's because any commands specified in the &lt;code&gt;commands&lt;/code&gt; section of the &lt;code&gt;.drone.yml&lt;/code&gt; file will override the ENTRYPOINT command specified in the Docker image.&lt;/p&gt;
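
&lt;p&gt;For reference, running the image directly outside of a CI pipeline looks something like this (a sketch based on TestCafe's Docker documentation; the mounted path is just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Mount the directory containing your tests and pass the browser
# and test files as arguments to the image's default entry point.
docker run -v $(pwd):/tests -it testcafe/testcafe chromium /tests/*_test.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;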

&lt;p&gt;The TestCafe image contains an entry point that's important for getting tests to run. The entry point executes a script that sets up a display server to allow the browsers to work inside the container and runs the TestCafe executable with a few arguments. Initially, I attempted to execute the test suite using the command &lt;code&gt;testcafe chromium *_test.js&lt;/code&gt; in &lt;code&gt;.drone.yml&lt;/code&gt;, but this overrode the entry point, and the tests couldn't run.&lt;/p&gt;

&lt;p&gt;To get around this, I'm specifying the same script used as the entry point defined in the image's &lt;a href="https://github.com/DevExpress/testcafe/blob/master/docker/Dockerfile"&gt;Dockerfile&lt;/a&gt;, along with the commands to execute the tests. Using the script sets up the Docker container correctly, allowing the tests to run successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GoA-Xn3---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/testcafe_drone_ci_success.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GoA-Xn3---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/testcafe_drone_ci_success.png" alt="Getting Drone CI Up and Running With TestCafe Quickly"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using the script for the entry point isn't ideal since its purpose is not immediately apparent. Any developer or tester looking at this file would have to find &lt;a href="https://github.com/DevExpress/testcafe/blob/master/docker/testcafe-docker.sh"&gt;the script in TestCafe's repository&lt;/a&gt; to figure out what it does. Also, if the maintainers of the TestCafe Docker image change this entry point, your builds will begin to fail, and you'll have to update the Drone CI configuration. But for now, this keeps our builds working.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This article barely scratched the surface of what Drone CI can do. You can run multiple pipelines in your project, like setting up different steps for unit and integration tests in the same build. You can also configure it to automatically deploy your code upon a successful build. The &lt;a href="https://docs.drone.io/"&gt;Drone CI documentation&lt;/a&gt; goes into more detail about all of its functionality.&lt;/p&gt;
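
&lt;p&gt;As one example, multiple pipelines live in the same &lt;code&gt;.drone.yml&lt;/code&gt; file as separate YAML documents, and you can chain them with &lt;code&gt;depends_on&lt;/code&gt;. A minimal sketch (the pipeline names and the deploy step here are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;kind: pipeline
type: docker
name: tests

steps:
- name: testcafe
  image: testcafe/testcafe
  commands:
  - /opt/testcafe/docker/testcafe-docker.sh chromium *_test.js

---
kind: pipeline
type: docker
name: deploy

# This pipeline runs only after the "tests" pipeline succeeds.
depends_on:
- tests

steps:
- name: deploy
  image: alpine
  commands:
  - echo "Replace this with your deployment commands"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;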

&lt;p&gt;In this article, I showed how simple it is to get an instance of Drone CI set up. You learned how to connect your GitHub account to the Drone CI server and set up a runner to poll and fetch new jobs for execution. You also saw a simple example of running end-to-end tests using TestCafe, along with some tricky areas you may need to navigate to get your builds working.&lt;/p&gt;

&lt;p&gt;Besides how well Drone CI worked for my use case, I was surprised at how quickly I managed to get this system working. Between setting up the server and runner and getting my tests to run, the entire process took me about an hour starting from scratch.&lt;/p&gt;

&lt;p&gt;If you're looking for a self-hosted continuous integration solution, I recommend taking Drone CI for a spin. It covers almost all continuous integration and continuous delivery needs your organization may have, regardless of which source code management provider or server architecture you use.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>devops</category>
      <category>docker</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Stop the QA Gatekeeping Now</title>
      <dc:creator>Dennis Martinez</dc:creator>
      <pubDate>Tue, 14 Jul 2020 11:00:00 +0000</pubDate>
      <link>https://dev.to/dennmart/stop-the-qa-gatekeeping-now-44o7</link>
      <guid>https://dev.to/dennmart/stop-the-qa-gatekeeping-now-44o7</guid>
      <description>&lt;p&gt;Recently on Twitter, I ran across a tweet where someone asked &lt;em&gt;"As a tester, when do you give the approval for release?"&lt;/em&gt; Few questions make me react in a less-than-positive way upon first reading them. This question did precisely that. I felt a bit agitated with this topic because it digs up some unpleasant memories from previous workplaces.&lt;/p&gt;

&lt;p&gt;The most frustrating projects I've worked on as a developer were those with this kind of "QA gatekeeping", where QA decided whether to approve a release or hold it back. Whenever I've been on teams that placed the responsibility for a project's deployment to production solely on the testing team, disputes inevitably happened. Sometimes it happened as the development cycle ended, while other times, it was a silent ticking time bomb that blew up weeks or months down the road.&lt;/p&gt;

&lt;p&gt;Online, I've seen many testers talk about their teams handling the responsibility of project releases, and every time it causes me endless frustration. I don't understand why requiring QA or testers to approve a release is still something that organizations do these days. I'm not diminishing the importance of QA in the release cycle, nor blaming testers for creating friction between different team members. When placed in these situations, everyone does their job the best they could and with good intentions.&lt;/p&gt;

&lt;p&gt;However, making testers the group responsible for approving a release puts unfair pressure on them. It also creates silos between product, development, and testing - whether intentional or not. In the end, you have constant finger-pointing, erosion of trust, and someone becoming the scapegoat if things go wrong. These issues serve no one in the organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Anti-patterns of QA gatekeeping
&lt;/h2&gt;

&lt;p&gt;In my experience, I've noticed two typical scenarios emerging in these kinds of projects that caused long-term issues to the teams and the organization as a whole.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario #1: QA holds up releases due to stringent bug classification
&lt;/h3&gt;

&lt;p&gt;Some testing teams are more lenient when it comes to classifying defects during the development cycle. The definition of quality seems to vary by team - what's a small issue for a QA group is sometimes a blocker for others. It's a tricky issue to balance since there's no "one size fits all" approach to determining the severity of a bug.&lt;/p&gt;

&lt;p&gt;The issue comes when a QA team uses strict bug classification to hold up a release, no questions asked. If the team logs bugs with a specific label, the development team is obligated to fix every single one before the code can be deployed. I understand the reasoning. After all, we shouldn't deploy code with a visible defect that hurts the product. But this system breaks down when bugs are classified incorrectly.&lt;/p&gt;

&lt;p&gt;I have one frustrating example that comes to mind immediately. I once had a pull request blocked from being merged and deployed because the QA team found what they labeled a high-priority defect in the feature. The defect? The spacing between an image and the text underneath it was off by &lt;em&gt;one pixel&lt;/em&gt; - and &lt;em&gt;only in Internet Explorer 11&lt;/em&gt;. It held up the release of the project until the issue got resolved.&lt;/p&gt;

&lt;p&gt;Of course, the QA team should log the bug as something they found during their testing. But to this day, I don't understand why they classified this issue as a blocker. The problem was so subtle that most people on the team could barely notice it even when it was pointed out. The spacing wasn't breaking any functionality. And according to our analytics, less than 1% of those using our product used Internet Explorer 11.&lt;/p&gt;

&lt;p&gt;This tiny issue caused a lot of back and forth between myself, the tester, and other stakeholders in the project. In the end, the bug was marked as &lt;em&gt;"won't fix"&lt;/em&gt;. Since the team was distributed across the world and working in different time zones, this back and forth ended up delaying the release. Admittedly, it also took plenty of time to repair the trust between everyone involved.&lt;/p&gt;

&lt;p&gt;This example shows that keeping a system where QA can hold up releases because of their bug classification can create situations that harm the overall project. Bugs should be labeled accordingly, but there needs to be some space for re-evaluating these classifications since not everyone sees things the same way.&lt;/p&gt;

&lt;h3&gt;Scenario #2: Bugs still slip to production, and QA gets chewed out for letting it happen&lt;/h3&gt;

&lt;p&gt;Another scenario I've seen with organizations that require QA to give the green light before deploying is the inevitable bug slipping through the cracks. In the typical release cycle, development spends some portion of their time fixing bugs and clearing out the QA backlog. When QA doesn't find any blockers, they give a thumbs up, and the project goes out to the customers.&lt;/p&gt;

&lt;p&gt;Still, nobody's perfect. Bugs will slip by no matter how much testing is done before a release or how much time developers spend in bug-fixing mode. In organizations with QA gatekeeping, guess who gets the blame when the product team receives a bug report from a user. It falls mainly on the testers, every single time.&lt;/p&gt;

&lt;p&gt;These issues happen the most when there's a massive time crunch to develop and release a product. Often, the team is overworked, scope creep gets out of control, and no one has a firm grasp on building quality work. When there's little time to develop a product, quality is almost always the first thing to fly out the window.&lt;/p&gt;

&lt;p&gt;I have been part of teams with the mindset of building stuff fast and not worrying about writing tests because "QA will handle that". I've never believed in that philosophy. As a developer, I've been scolded by team leads because I refused to push a feature I had been developing until I finished writing automated tests for it. They wanted me to submit my untested code and let QA test it for me.&lt;/p&gt;

&lt;p&gt;On those types of teams, there's a cycle of pushing a ton of work to QA for testing. Lacking the time or resources to fully dedicate to the project, the testers do the best job they can to keep deadlines from slipping. The risk of bugs getting deployed to production increases. Since the project passes through the testing team last, they'll always be the scapegoat, while the development team gets by relatively unscathed. It inevitably kills the team's morale.&lt;/p&gt;

&lt;h2&gt;How to stop QA gatekeeping now&lt;/h2&gt;

&lt;p&gt;If you're on a team with one (or both) of these issues, try to help your organization as soon as you can, before your project sinks further into a hole.&lt;/p&gt;

&lt;p&gt;The project teams that I've seen with the most overall success when it comes to delivering quality products on time had a few common traits.&lt;/p&gt;

&lt;h3&gt;They made testing a whole team effort&lt;/h3&gt;

&lt;p&gt;The most productive teams I've been a part of - the ones running projects with the fewest bugs or defects - made everyone responsible for testing. The bulk of the testing work still went to QA, because that's their expertise. But the rest of the project's stakeholders still did their fair share in their own way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers took the time to write automated tests for their code. At the very least, they covered their work with unit tests, but I've also seen a few development teams creating end-to-end tests for new features.&lt;/li&gt;
&lt;li&gt;Product managers and designers frequently checked on staging environments to do acceptance testing for new features and make sure the product looked and functioned as expected.&lt;/li&gt;
&lt;li&gt;The people responsible for DevOps set up proper monitoring and alerts, along with continuous integration systems that performed various types of testing when new code got committed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These teams didn't rely solely on QA to determine whether the project was fit for deployment or still needed more work. If any group thought something needed to be addressed before releasing to production, it was discussed between the groups. For instance, if QA found an issue, they would talk with product and development to decide whether to fix it now or defer it to a later time.&lt;/p&gt;

&lt;p&gt;This approach worked exceptionally well because it fostered discussion across different disciplines. Instead of everyone working in their bubble, they would come together to discuss potential problems before they created further delays. It provided additional context to the issues at hand and cleared up any ambiguity about the severity of a defect (like my off-by-one-pixel issue).&lt;/p&gt;

&lt;p&gt;If your organization operates in separate silos, it's best to bring it to leadership's attention and foster more unity across job functions. Testing is a cornerstone of excellent products, and everyone needs to work together to get there.&lt;/p&gt;

&lt;h3&gt;They tested early and often&lt;/h3&gt;

&lt;p&gt;Besides having everyone involved in the project do their part with testing, these productive teams also tested as much as they could, as early as possible. They didn't wait until a pre-determined spot in the project's schedule to begin testing - it was part of their regular routine.&lt;/p&gt;

&lt;p&gt;Making testing a part of everyone's work is easier said than done, but these teams took steps to make it dead-simple to bake in quality from the start by making testing part of the workflow. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DevOps and the developers set up systems that automatically generated testing environments when new code was pushed or a pull request was opened. This workflow allowed them to create staging servers for new features and helped non-technical folks quickly perform acceptance testing.&lt;/li&gt;
&lt;li&gt;Different tests were run at various points throughout the day automatically. When developers committed new code to a branch, it kicked off a process to run a few quick tests. When merging code into the main branch, it ran more thorough tests. At night, a full battery of tests ran and generated reports of the project's current state.&lt;/li&gt;
&lt;li&gt;The QA teams had free rein to run manual and exploratory testing alongside their other responsibilities. This type of testing wasn't a single explicit activity tacked onto the timeline. The entire team understood that QA was working alongside everyone else at all times.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With systems in place to run automated tests and create environments for new features before deploying, testing never had to wait. Team members didn't have to agonize over quality during their workday; whenever they did have to dig into testing, they had what they needed to get started quickly. It helped a lot with minimizing risk, especially at the end of a cycle or sprint.&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;Some organizations still practice "QA gatekeeping", where the QA team is the group responsible for the release of a project. This practice is harmful in many ways: it puts unfair pressure on testers and promotes a culture of division between teams working on the same project.&lt;/p&gt;

&lt;p&gt;I've seen two anti-patterns occur when organizations have this practice in place. One is that projects get delayed because QA blocks releases over small, sometimes insignificant issues. The other is that when bugs inevitably slip through to production, QA becomes the scapegoat because they're the ones who gave the go-ahead.&lt;/p&gt;

&lt;p&gt;The most productive teams I've observed share a few traits that help them avoid these pitfalls. These teams have everyone working on the project do their part with testing instead of placing the responsibility solely on QA. They also test early and often, setting up systems that make it easy for everyone to do their part in ensuring a quality product.&lt;/p&gt;

&lt;p&gt;Quality is something that works best when it's baked into the team's workflow, involving everyone who's a part of the project. If your organization practices some of the anti-patterns mentioned in this article, bring it to the attention of those who can help you make a change. QA needs to stop being the gatekeeper for your projects to thrive.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Are you involved in a testing team that serves as a gatekeeper? Has it helped your organization, or has it created problems? Share your story by leaving a comment!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>qa</category>
      <category>qualityassurance</category>
      <category>softwaretesting</category>
    </item>
    <item>
      <title>Now Is the Time for You to Super-Charge Your Skills</title>
      <dc:creator>Dennis Martinez</dc:creator>
      <pubDate>Tue, 07 Jul 2020 11:00:00 +0000</pubDate>
      <link>https://dev.to/dennmart/now-is-the-time-for-you-to-super-charge-your-skills-1jln</link>
      <guid>https://dev.to/dennmart/now-is-the-time-for-you-to-super-charge-your-skills-1jln</guid>
      <description>&lt;p&gt;Every morning when I check my social media feed, I've seen more and more messages like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Sn3H5EGC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/linkedin_covid_message.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Sn3H5EGC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/07/linkedin_covid_message.PNG" alt="LinkedIn message screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Due to the current pandemic affecting everyone globally, developers and testers have seen their jobs disappear overnight when companies have to lay off workers for their businesses to survive.&lt;/p&gt;

&lt;p&gt;Another trend I've noticed comes from those who still have their engineering and QA jobs but have received salary reductions and are starting to look elsewhere. They feel stressed and overworked because they're doing the work of those who got let go while making less money. A developer friend at a startup that cut its engineering department in half told me she's doing the work of at least three of the people the organization let go.&lt;/p&gt;

&lt;p&gt;The job market is slowly recovering in many places across the globe, but many people are still searching for work, unable to find something that fits their talents. With more and more people getting laid off from all types of jobs, it's becoming increasingly difficult to find a suitable position elsewhere, regardless of whether leaving was your choice or your former employer made it for you.&lt;/p&gt;

&lt;p&gt;Everybody's in a different spot in life right now, but in the bigger picture, we're all in this together. We're all in the same boat.&lt;/p&gt;

&lt;p&gt;Still, that doesn't mean you have to stay there.&lt;/p&gt;

&lt;h1&gt;Now is the time for you&lt;/h1&gt;

&lt;p&gt;Right now, life is distracting. Every day we're reminded of what's happening and how much everything changed in just a few months. Our attention is pulled in a thousand directions - often away from what would benefit us the most. I continuously see messages from people saying that their productivity has tanked since this entire situation erupted.&lt;/p&gt;

&lt;p&gt;Regardless of what's going on in your world, there's one thing that I believe could propel you to come out better than ever out of this current situation - &lt;strong&gt;&lt;em&gt;improving yourself and getting prepared for what's next for you.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At this exact moment, it's a perfect time to level up your skills. Yes, life is distracting at this moment. But it's also the best time to get things done because &lt;em&gt;everyone else is distracted too&lt;/em&gt;. If you place those distractions to the side and focus on adding something new to your skill set, you'll have little to no interruptions from others. Now is the time for you to do that thing you've thought of doing for so long.&lt;/p&gt;

&lt;h2&gt;Why upgrade your skills during this time?&lt;/h2&gt;

&lt;p&gt;Taking the time to upgrade your existing skills and add new ones for your career helps you stand out from a crowded field of developers and testers seeking the same positions as you. These days, it's especially true with a highly competitive IT and testing industry and more job seekers for fewer vacancies.&lt;/p&gt;

&lt;p&gt;It's tough both for new testers and developers trying to break into their first tech jobs and for engineers with years of professional experience under their belts. Many recruiters these days have to sift through hundreds of applications for a single role. In some cases, adding that one extra skill to your resume can be the difference between getting hired and having your resume shoved aside.&lt;/p&gt;

&lt;p&gt;It might feel like your career will never be the same. But there will come a time when things return to a sense of normalcy. When that time comes, you need to be ready to jump in head-first. It's up to you to demonstrate that whatever development or testing job comes up, you can fill those shoes with no problems.&lt;/p&gt;

&lt;p&gt;Even if you do have a job right now, it's still beneficial to improve your skill set, especially if you're seeking a salary increase or a promotion. The consensus among economists is that &lt;a href="https://www.pewresearch.org/fact-tank/2018/08/07/for-most-us-workers-real-wages-have-barely-budged-for-decades/"&gt;wages have stagnated&lt;/a&gt; &lt;a href="https://www.forbes.com/sites/stevedenning/2018/07/26/how-to-fix-stagnant-wages-dump-the-worlds-dumbest-idea/#27962c261abc"&gt;all over the world&lt;/a&gt;. Staying stuck in place limits your earning potential. You can't expect your organization to give you a substantial raise if you don't offer more than you do now, so it's also a great time to boost your abilities for your own benefit.&lt;/p&gt;

&lt;h2&gt;So what can you do?&lt;/h2&gt;

&lt;p&gt;Everyone's path for gaining new skills is different, but I've found that you can start with three steps to get you on the right track.&lt;/p&gt;

&lt;h3&gt;1. Focus on you - only you - and figure out what you want&lt;/h3&gt;

&lt;p&gt;It may seem obvious that knowing what to do is the first step, but it's surprising how many people falter in the beginning. It's not that you don't know what to do. Usually, it's that you have too many ideas on what to do. The choices become overwhelming, and the fear of choosing the wrong thing keeps you from moving forward. Choosing one goal early in the process helps guide you where you want to go. Here are a couple of ideas for developers and testers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you're a manual tester and want to learn automation, focus on learning the basics of programming or how to use an automation framework.&lt;/li&gt;
&lt;li&gt;If you're looking for a new job, find a gap in your abilities and fill it to expand your employment opportunities.&lt;/li&gt;
&lt;li&gt;If you have a job and want to improve your earning potential, find ways to increase your career value, like becoming ISTQB certified.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Remember that whatever you choose here isn't set in stone. If you feel like the choice you made won't benefit you, you can always choose something else. Just make sure you're not running away at the first sign of struggle because that will keep you in the same place you are now - which I'm guessing you don't want to be in.&lt;/p&gt;

&lt;p&gt;The other crucial element of this step is to focus on what &lt;strong&gt;you&lt;/strong&gt; want. It's easy to fall into the trap of paying attention only to what others expect of you, or what others would think of you. That line of thinking is one of the most dangerous traps you can fall into, and I'd bet it kills more people's dreams than anything else.&lt;/p&gt;

&lt;p&gt;That's not to say that you need to neglect everyone else around you. You still need input from people who might be affected by your decisions, like your spouse or close family members. But if there's something you feel strongly about doing, you need to focus on yourself first so that you can focus on everyone else better later.&lt;/p&gt;

&lt;h3&gt;2. Take one step right now&lt;/h3&gt;

&lt;p&gt;Whatever the choice you make, think about the one thing you can do &lt;em&gt;right now&lt;/em&gt; that would have the most significant and immediate impact on your goal. The path to every goal has a series of steps, and some of those steps you can take action on at this very moment.&lt;/p&gt;

&lt;p&gt;Acting quickly serves you in many ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It'll get you on your way towards your goal now, even if you're still worried about being able to do this thing.&lt;/li&gt;
&lt;li&gt;It'll motivate you to take the next step toward your target because the road ahead will seem less difficult to traverse.&lt;/li&gt;
&lt;li&gt;In some cases, it might give you a sign that you shouldn't do this now, so you can find something else without losing much.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your first step should be something relatively small that you can do in the next couple of minutes. For example, it could be buying a book or course that gets you started on learning that new skill you want. For job seekers, it could be sending your resume to a few companies you've had your eye on, or doing more research on what you need to find a better position.&lt;/p&gt;

&lt;p&gt;The key is to do something as soon as you can so you can get moving. If you don't start moving, it's really easy to become paralyzed due to inactivity. By taking one step forward, you break that pattern, and it makes the next one a little more attainable.&lt;/p&gt;

&lt;h3&gt;3. Work on your goal for 30 days&lt;/h3&gt;

&lt;p&gt;Taking one step is an excellent way to start gaining traction toward your goals. But one step alone won't take you far. You'll need more than a single action if you want to acquire new skills. Ideally, it'll be something every day, no matter how big or small.&lt;/p&gt;

&lt;p&gt;An excellent way to get moving and stay moving is to work on your goal every day for the next month. As developers and testers, many of us are familiar with sprints at work. Set a personal sprint for yourself: for the next 30 days, carve out some time for leveling up your skills.&lt;/p&gt;

&lt;p&gt;Here are some of the things you can do to improve your development and testing skills every day:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read a portion of a book or view part of a video course and take notes on what you learn.&lt;/li&gt;
&lt;li&gt;Build an application or set up a test automation framework to showcase your abilities to potential employers.&lt;/li&gt;
&lt;li&gt;Find an open-source project that you can contribute to with bug fixes or writing automated tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this point, be aware of your constraints. If you don't have much time to improve existing skills or obtain new ones, don't plan to work a few hours on it every day. You'll end up throwing your hands in the air and giving up quickly. A great strategy that has honestly changed my life is what's known as &lt;em&gt;mini habits&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Instead of thinking you have to spend hours working towards your goals daily, you should think of the &lt;em&gt;least&lt;/em&gt; you can do every day to move forward. If you can do more than the absolute minimum, that's fine, but do something every day. The secret sauce of this strategy is that it helps you avoid the frustration of feeling that you didn't do enough, and instead focuses on celebrating even the tiniest of wins that keep you coming back every single day.&lt;/p&gt;

&lt;p&gt;If this strategy sounds useful for you, I highly recommend reading the books &lt;a href="https://www.amazon.com/Mini-Habits-Smaller-Bigger-Results-ebook/dp/B00HGKNBDK"&gt;&lt;em&gt;Mini Habits: Smaller Habits, Bigger Results&lt;/em&gt;&lt;/a&gt;, and its follow-up, &lt;a href="https://www.amazon.com/Elastic-Habits-Create-Smarter-Adapt-ebook/dp/B08188WBGC"&gt;&lt;em&gt;Elastic Habits: How to Create Smarter Habits That Adapt to Your Day&lt;/em&gt;&lt;/a&gt;, both by Stephen Guise. They transformed the way I approach my work and learning.&lt;/p&gt;

&lt;h2&gt;My story and how this article came about&lt;/h2&gt;

&lt;p&gt;The reason why I'm writing this is that 2020 has been pretty rough when it comes to my career and my goals. Everything has changed quite a bit for me in the past few months due to the current pandemic and its impact across the world.&lt;/p&gt;

&lt;p&gt;In January 2020, I decided to leave the organization I had worked at for almost five years. During the months leading to that decision, I felt like the role I was in wasn't going to serve my career moving forward. I felt stuck and, despite looking for additional ways to provide value to the company, I didn't feel like any change would happen within the existing structure. It was time for a change.&lt;/p&gt;

&lt;p&gt;I didn't want to jump immediately into another full-time job for another organization. I planned to return to freelancing and consulting, which I had done successfully in 2014 and 2015. In early February, I let my network know that I would become available to help on a contract basis. Almost immediately, I lined up four projects for the upcoming months. I had a plan of action, and the outlook looked positive for my new path.&lt;/p&gt;

&lt;p&gt;In March, the COVID-19 pandemic exploded around the world, and all of those projects I had in my pipeline vanished just as quickly as they came.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"No problem,"&lt;/em&gt; I thought. &lt;em&gt;"I'll just start looking for other projects."&lt;/em&gt; But that was much easier said than done. It turns out that finding any kind of remote contract work became almost impossible.&lt;/p&gt;

&lt;p&gt;As more people got laid off or furloughed from their jobs, more people began seeking the same work I was hunting. Not only were they looking for those jobs, but they were willing to do it for much less than before they lost their previous jobs. I know the value and experience I can provide to those who hire me, and I'm not willing to slash my rates by over 75% just to land any job, as some responses indicated. I didn't want to be involved in a race to the bottom.&lt;/p&gt;

&lt;p&gt;Still, it was incredibly stressful to find myself out of work for as long as I did. At this point, I fell into a deep funk, quite honestly bordering on depression. I couldn't find any suitable work, and I felt paralyzed about what to do next. I had a ton of ideas floating in my head, and I didn't know where to begin.&lt;/p&gt;

&lt;p&gt;Around this point, I started taking my own advice. The first step was to focus on myself and what I wanted to do. Did I really want to spend entire days looking for scraps of work in an increasingly crowded field because I felt like that's what was expected of me?&lt;/p&gt;

&lt;p&gt;Deep down, I knew I didn't want to do that. I didn't want to spend so much time and energy competing with others who significantly undervalue their work. I felt it would take a while before things got back to normal for organizations looking to hire developers and testers. That's when I decided to focus entirely on me and do something I had wanted to do for a long time - write a book.&lt;/p&gt;

&lt;p&gt;I had the idea of writing a book about TestCafe for a while but never took the first step toward making it happen. When I decided to focus on myself and what I wanted to do, I started to learn what it took to write a book. I had never written a book before, and it's not as simple as writing a few pages and publishing them somewhere. There was a lot of research into what I would write about and how to go about it.&lt;/p&gt;

&lt;p&gt;I spent some time every day working on something to bring the book project to life. At first, it was researching TestCafe more deeply and deciding what to write. I needed to write the book in a way that made sense both for beginners and for people familiar with the framework. Next, it was figuring out what tools to use to write and publish the book. Then I began reading a lot about marketing to learn how to spread the word effectively.&lt;/p&gt;

&lt;p&gt;Every single day, I took some kind of action to move forward. It was exhausting, trying to balance the acquisition of new skills, making progress towards the book, keeping Dev Tester going with fresh new articles every week, and keeping my personal life in balance. However, keeping track of what I could do, even if it was a tiny win, steered me toward my goal.&lt;/p&gt;

&lt;p&gt;I'm proud of the results of the work I put in. &lt;a href="https://testingwithtestcafe.com/"&gt;I finished writing my book and will release it soon&lt;/a&gt;, which I hope helps many people who want to learn about TestCafe. I also learned a few new skills that I'll put into use not only for future work but for myself and new products that I'm building soon. This period taught me that I have something to offer and that I can do it independently.&lt;/p&gt;

&lt;p&gt;Three months ago, I was in a rather dark place with my career and didn't know what would come out of it. But deciding on what I wanted to focus on for myself and not for others, taking a few initial steps, and working on the goal every day - it honestly transformed my life.&lt;/p&gt;

&lt;p&gt;I'm not writing this story to boast or impress anyone. I want everyone reading this to know that now is the perfect time to work on yourself. When this crazy, turbulent time passes by, and the world gets back to normal - and it &lt;em&gt;will&lt;/em&gt; get back to normal, despite how things may seem - you'll be farther ahead than everyone else around you. The reason is that you took advantage of this time of crisis to do something about it.&lt;/p&gt;

&lt;p&gt;Start &lt;em&gt;now&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>learning</category>
      <category>watercooler</category>
      <category>motivation</category>
    </item>
    <item>
      <title>End-to-End Testing with TestCafe Book Excerpt: Intercepting HTTP Requests</title>
      <dc:creator>Dennis Martinez</dc:creator>
      <pubDate>Tue, 30 Jun 2020 11:00:00 +0000</pubDate>
      <link>https://dev.to/dennmart/end-to-end-testing-with-testcafe-book-excerpt-intercepting-http-requests-ae8</link>
      <guid>https://dev.to/dennmart/end-to-end-testing-with-testcafe-book-excerpt-intercepting-http-requests-ae8</guid>
      <description>&lt;p&gt;&lt;em&gt;This article is an excerpt from one of the chapters of my upcoming book, &lt;strong&gt;End-to-End Testing with TestCafe&lt;/strong&gt;. If you're interested in learning more about the book, visit &lt;a href="https://testingwithtestcafe.com"&gt;https://testingwithtestcafe.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;As web applications shift toward more interactive user experiences, testing gets more complicated with all the moving parts involved. Since end-to-end testing verifies how a system works as a whole, testers also need to ensure that communication with external services works as expected.&lt;/p&gt;

&lt;p&gt;Usually, these requests happen asynchronously in the background through JavaScript. The client-side portion of the application won't see what's happening until the external service returns a response, unless there's a progress indicator programmed into the interface. Even then, there's no guarantee about what gets returned to the client, making it difficult to perform any assertions on the page.&lt;/p&gt;

&lt;p&gt;Intercepting an HTTP request and its response gives you the ability to take control over what happens during asynchronous calls to other servers. Some of the primary uses for intercepting HTTP requests are:&lt;/p&gt;

&lt;h3&gt;Recording all requests and responses from a remote service&lt;/h3&gt;

&lt;p&gt;You can open up your preferred browser's developer tools during manual testing and see the different network requests made in the application. This information allows you to see everything related to the request and the response, like the headers sent to the server and the data returned to the client.&lt;/p&gt;

&lt;p&gt;However, during automated testing, you won't have a chance to open the developer tools to see what's going on. TestCafe allows you to record any HTTP requests made during the test run and verify that the data returned from the server is what you expect.&lt;/p&gt;
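&lt;p&gt;As a minimal sketch of how this looks in a TestCafe test file (the page URL, endpoint, and test names below are placeholders, not from the book), the &lt;code&gt;RequestLogger&lt;/code&gt; hook records matching requests so you can assert against them:&lt;/p&gt;

```js
import { RequestLogger } from 'testcafe';

// Record every request whose URL matches the filter, including response details.
const apiLogger = RequestLogger(/\/api\/users/, {
  logResponseHeaders: true,
  logResponseBody: true
});

fixture('User list')
  .page('https://example.com/users') // hypothetical page under test
  .requestHooks(apiLogger);

test('the users API responds successfully', async t => {
  // Passes once the logger has captured a request that returned a 200.
  await t.expect(apiLogger.contains(
    record => record.response.statusCode === 200
  )).ok();
});
```

&lt;p&gt;Run it like any other TestCafe test; the logger also exposes a &lt;code&gt;requests&lt;/code&gt; array if you need to dig into individual headers or bodies.&lt;/p&gt;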

&lt;h3&gt;Reliably managing responses from third-party services to ensure repeatable tests&lt;/h3&gt;

&lt;p&gt;As mentioned, responses coming from an HTTP request are usually out of our control as testers. Some systems provide a sandbox environment that gives you control over its responses. In most cases, though, your tests won't have the luxury of a specialized testing setup, and you'll have to use a real interface to the service.&lt;/p&gt;

&lt;p&gt;Depending on the service where you make a request, you may not always receive the same data every time, making it tricky to create robust validations in your tests. With TestCafe, you can manipulate a response from an HTTP request to create repeatable tests without worrying about the data returned from a service you don't manage.&lt;/p&gt;
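&lt;p&gt;As a rough example (the endpoint and payload here are hypothetical), TestCafe's &lt;code&gt;RequestMock&lt;/code&gt; hook pins the response for a given URL so every run of the test sees identical data:&lt;/p&gt;

```js
import { RequestMock } from 'testcafe';

// Always answer this endpoint with the same JSON payload,
// regardless of what the real third-party service would return.
const ratesMock = RequestMock()
  .onRequestTo('https://api.example.com/rates')
  .respond({ USD: 1.0, EUR: 0.85 }, 200, {
    'access-control-allow-origin': '*'
  });

fixture('Currency rates')
  .page('https://example.com/prices') // hypothetical page under test
  .requestHooks(ratesMock);

test('prices render with the mocked rates', async t => {
  // Assertions here can safely rely on EUR always being 0.85.
});
```

&lt;p&gt;Because the mock intercepts the request before it leaves the browser, the test stays repeatable even if the real service is flaky or returns different data on every call.&lt;/p&gt;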

&lt;h3&gt;Creating tests for services that are under development or difficult to replicate&lt;/h3&gt;

&lt;p&gt;Testing should take place early in the development process to bake in quality from the start. The problem is that when an application is under active development, you'll have rapid changes occurring every day. It's impossible to build a test suite if the service continually changes the way it responds.&lt;/p&gt;

&lt;p&gt;Another issue teams may face, particularly small bootstrapped ones, is not having the infrastructure available to set up complex systems for testing purposes. Sometimes an organization doesn't have the resources to replicate a large, complicated environment on a smaller scale.&lt;/p&gt;

&lt;p&gt;Fortunately, you don't have to wait for the team to finish developing or setting up a system before you can write tests with TestCafe. By creating a mock that simulates the interfaces you need to verify, you can build tests for environments that aren't readily available for use.&lt;/p&gt;

&lt;p&gt;Recording and manipulating external requests have many real-world uses that can save time and help build a reliable and stable test suite. Some examples are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It's not uncommon for developers to accidentally break an application because of a small change in the response for an HTTP request. You can include assertions to check that those requests and responses work as expected, and raise alerts early during testing if there's a change that can potentially break the application.&lt;/li&gt;
&lt;li&gt;If your application performs intensive calculations that take time to execute and complete, it can slow down your test suite or cause your tests to fail due to timeouts. You can bypass these demanding functions with a mock to keep your tests speedy.&lt;/li&gt;
&lt;li&gt;Testing doesn't always cover the happy path. You'll also need to test scenarios that don't occur under regular use, such as a network or database error. Typically, you can't trigger these errors on demand. However, you can mock an error response from an HTTP request to cover those scenarios during your test run.&lt;/li&gt;
&lt;li&gt;If you practice test-driven development (TDD), you don't need to wait for systems to get entirely built before creating new tests. Mocking these systems accelerates the process by allowing you to test complex interfaces before they exist.&lt;/li&gt;
&lt;li&gt;Sometimes you won't have access to some services because they contain sensitive information that you don't want to expose during testing. You can bypass any HTTP request made to these servers with a mock to keep your info safe inside of your tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that you understand the reasons you'd want to intercept HTTP requests in your tests, let's take a look at how you can do this in TestCafe. TestCafe has different hooks to grab HTTP requests, depending on whether you need to record or manipulate the responses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logging HTTP requests
&lt;/h2&gt;

&lt;p&gt;First, let's see how you can log any HTTP request that occurs in the application during testing. TestCafe has a class called &lt;code&gt;RequestLogger&lt;/code&gt; that checks any HTTP request that occurs while the test runs and records both the request the browser makes and the response it receives from the server.&lt;/p&gt;

&lt;p&gt;To log your HTTP requests, import the &lt;code&gt;RequestLogger&lt;/code&gt; constructor into your tests and use it to create an instance of the class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;RequestLogger&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;testcafe&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;RequestLogger&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;RequestLogger&lt;/code&gt; constructor accepts two optional parameters. The first optional parameter is a filter to tell the request logger which HTTP requests it should track. If a filter is not specified, the request logger tracks every HTTP request made during the test. Usually, this behavior isn't beneficial since it logs everything that loads on the page, like images and scripts. You'll want to be more specific about which requests to capture during your test run.&lt;/p&gt;

&lt;p&gt;You can filter which HTTP requests you want to record in different ways:&lt;/p&gt;

&lt;h3&gt;
  
  
  By exact URL
&lt;/h3&gt;

&lt;p&gt;You can pass a string to log any requests sent to a specific URL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Only logs requests made to this specific URL.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;RequestLogger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://teamyap.app/api/posts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Multiple URLs
&lt;/h3&gt;

&lt;p&gt;If you need to track requests to multiple URLs, you can use an array to specify more than one URL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Only logs requests made to the specific URLs&lt;/span&gt;
&lt;span class="c1"&gt;// defined in the array.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;RequestLogger&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://teamyap.app/api/posts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://teamyap.app/api/conversations&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Using regular expressions
&lt;/h3&gt;

&lt;p&gt;You can use regular expressions to match a URL by a pattern. Using regular expressions is useful when the request URL changes depending on the situation, like an ID associated with a resource in the app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Logs requests made to URLs matching this regular expression&lt;/span&gt;
&lt;span class="c1"&gt;// like "https://teamyap.app/api/posts/12345/comments"&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;RequestLogger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;api&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;posts&lt;/span&gt;&lt;span class="se"&gt;\/(\d&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;)\/&lt;/span&gt;&lt;span class="sr"&gt;comments/&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
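&lt;p&gt;A quick way to sanity-check a pattern like this is to run it against a sample URL in plain JavaScript before wiring it into the logger. The URL below is the hypothetical example from the comment above.&lt;/p&gt;

```javascript
// The same pattern as in the logger example above.
const pattern = /\/api\/posts\/(\d+)\/comments/;

// A sample URL the pattern should match.
const url = "https://teamyap.app/api/posts/12345/comments";
console.log(pattern.test(url)); // true

// The capture group pulls the post ID out of the matched URL.
const match = url.match(pattern);
console.log(match[1]); // "12345"
```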



&lt;h3&gt;
  
  
  Filtering AJAX requests or by request method
&lt;/h3&gt;

&lt;p&gt;If you use a string, an array of strings, or a regular expression, the request logger records all requests made to any matching URL, regardless of the request method (like GET or POST) or whether it's an asynchronous (AJAX) request. If you want to be more specific, you can use an object with the following properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;url&lt;/code&gt; - The URL you want to log requests from.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;method&lt;/code&gt; - The HTTP method you want to record (GET, POST, PUT, PATCH, or DELETE).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;isAjax&lt;/code&gt; - A boolean flag to only record asynchronous HTTP requests.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Only logs asynchronous POST requests made to&lt;/span&gt;
&lt;span class="c1"&gt;// the specific URL.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;RequestLogger&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://teamyap.app/api/posts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;post&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;isAjax&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  More control using a predicate function
&lt;/h3&gt;

&lt;p&gt;If you need to fine-tune the request logger even further, you can use a predicate function. The function takes a request parameter that contains different properties that you can use to match the exact HTTP requests you want to filter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;url&lt;/code&gt; - The URL you want to log requests from.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;method&lt;/code&gt; - The HTTP method you want to record.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;isAjax&lt;/code&gt; - A boolean flag to only record asynchronous HTTP requests.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;body&lt;/code&gt; - A string containing the body of the request.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;headers&lt;/code&gt; - An object containing the request headers in key-value form.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;userAgent&lt;/code&gt; - A string identifying the user agent making the request.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can run conditional statements on one or more of these properties inside the predicate function. The request logger only logs requests where all the conditions are true.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Only logs asynchronous PATCH requests made to&lt;/span&gt;
&lt;span class="c1"&gt;// the specific URL that have a header to indicate&lt;/span&gt;
&lt;span class="c1"&gt;// this is a JSON request.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;RequestLogger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://teamyap.app/api/posts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
        &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;patch&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
        &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isAjax&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
        &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
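&lt;p&gt;To see how such a predicate behaves, here's a plain-JavaScript sketch (outside TestCafe) that applies the same conditions to two sample request objects. The sample objects are made up for illustration; inside TestCafe, the logger builds them for you.&lt;/p&gt;

```javascript
// The same conditions as the predicate above, as a standalone function.
const predicate = (request) =>
  request.url === "https://teamyap.app/api/posts" &&
  request.method === "patch" &&
  request.isAjax &&
  request.headers["Content-Type"] === "application/json";

// Two hypothetical request objects, for illustration only.
const sampleRequests = [
  {
    url: "https://teamyap.app/api/posts",
    method: "patch",
    isAjax: true,
    headers: { "Content-Type": "application/json" },
  },
  {
    url: "https://teamyap.app/api/posts",
    method: "get", // wrong method, so the predicate rejects it
    isAjax: true,
    headers: { "Content-Type": "application/json" },
  },
];

const matched = sampleRequests.filter(predicate);
console.log(matched.length); // 1 - only the PATCH request matches
```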



&lt;p&gt;The second optional parameter for a request logger is an object that lets you configure which information you want the logger to capture. By default, the request logger only returns basic details like the request and response timestamps, the request URL and method, and the response status code. If you need more information like the headers and body, you can set it in this optional object with the following boolean properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;logRequestHeaders&lt;/code&gt; - Lets you specify if you want the request logger to log the headers made by the HTTP request.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;logRequestBody&lt;/code&gt; - Lets you specify if you want the request logger to log the body of the HTTP request.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;stringifyRequestBody&lt;/code&gt; - If you set &lt;code&gt;logRequestBody&lt;/code&gt; to true, the request body gets recorded as a Node.js Buffer object by default. This option converts the request body to a string instead. If you set this option to true without also setting &lt;code&gt;logRequestBody&lt;/code&gt; to true, TestCafe will throw an error.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;logResponseHeaders&lt;/code&gt; - Lets you specify if you want the request logger to log the headers of the server response.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;logResponseBody&lt;/code&gt; - Lets you specify if you want the request logger to log the body of the server response.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;stringifyResponseBody&lt;/code&gt; - If you set &lt;code&gt;logResponseBody&lt;/code&gt; to true, the response body gets recorded as a Node.js Buffer object by default. This option converts the response body to a string instead. If you set this option to true without also setting &lt;code&gt;logResponseBody&lt;/code&gt; to true, TestCafe will throw an error.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Logs requests made to the specific URL, including&lt;/span&gt;
&lt;span class="c1"&gt;// the response header and body.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;RequestLogger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://teamyap.app/api/posts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;logResponseHeaders&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;logResponseBody&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;stringifyResponseBody&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After creating a request logger instance, you can access additional properties and methods that let you manage any captured requests in your tests. These allow you to look at all requests and responses, run assertions, and clear the logger instance.&lt;/p&gt;

&lt;p&gt;If you want to view all the requests and responses the logger captures in a test, the &lt;code&gt;requests&lt;/code&gt; property returns an array of objects containing details about every intercepted request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Somewhere inside of your tests after running a few actions.&lt;/span&gt;
&lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The objects inside of the &lt;code&gt;requests&lt;/code&gt; array contain information about the request and the response from the server. Each object will also include the headers and body for the request and response if you set additional options like &lt;code&gt;logRequestHeaders&lt;/code&gt; or &lt;code&gt;logResponseBody&lt;/code&gt;. Here's an example of a few logged requests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;yqw_dSpIl&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;testRunId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;p0P1LuJtg&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;userAgent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Chrome 83.0.4103.116 / macOS 10.15.5&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1592980791729&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://teamyap.app/api/posts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;get&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;response&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1592980791879&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;K79qg9fRi&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;testRunId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;p0P1LuJtg&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;userAgent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Chrome 83.0.4103.116 / macOS 10.15.5&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1592980793340&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://teamyap.app/api/posts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;post&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;response&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;201&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1592980793505&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;{"id":1,"body":"Can someone leave a comment?"}}&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can use the &lt;code&gt;requests&lt;/code&gt; property to run assertions, like verifying the number of requests by checking the length of the array. However, the request logger has additional methods - &lt;code&gt;contains&lt;/code&gt; and &lt;code&gt;count&lt;/code&gt; - to help you validate specific requests, like checking if a request happened or if an exact number of requests occurred during testing.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;contains&lt;/code&gt; method lets you run an assertion by specifying a predicate function, similar to the predicate you can pass as the first parameter when creating an instance of a request logger. If the conditions of the function match a logged request, it returns true. Otherwise, it returns false.&lt;/p&gt;

&lt;p&gt;As an example, let's say you have a logger that captured the requests shown earlier for the &lt;code&gt;requests&lt;/code&gt; property. If you want to assert that the logger captured one POST request that received a status code of 201, you can run the following assertion:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;contains&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;post&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
      &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;statusCode&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;201&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;})).&lt;/span&gt;&lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;count&lt;/code&gt; method works similarly to &lt;code&gt;contains&lt;/code&gt;. You pass a predicate function to define the conditions for matching captured requests in the logger. The method then returns the number of requests that match those conditions.&lt;/p&gt;

&lt;p&gt;You can use the &lt;code&gt;count&lt;/code&gt; method in assertions to verify that the logger captured a specific number of requests. For instance, using the same example requests as above, if you want to confirm that the test intercepted one GET request to the specified URL, you can use the following assertion:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://teamyap.app/api/posts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
        &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;get&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;})).&lt;/span&gt;&lt;span class="nx"&gt;eql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
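&lt;p&gt;As a plain-JavaScript sanity check (outside TestCafe), here's how that &lt;code&gt;count&lt;/code&gt; predicate evaluates the two sample entries from the &lt;code&gt;requests&lt;/code&gt; output shown earlier, trimmed to the fields the predicate inspects:&lt;/p&gt;

```javascript
// The two entries mirror the sample `requests` output shown earlier.
const requests = [
  { request: { url: "https://teamyap.app/api/posts", method: "get" },
    response: { statusCode: 200 } },
  { request: { url: "https://teamyap.app/api/posts", method: "post" },
    response: { statusCode: 201 } },
];

// The same conditions as the count predicate above.
const count = requests.filter((request) => {
  return request.request.url === "https://teamyap.app/api/posts" &&
         request.request.method === "get";
}).length;

console.log(count); // 1 - only the first entry is a GET request
```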






&lt;h4&gt;
  
  
  Note
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;contains&lt;/code&gt; and &lt;code&gt;count&lt;/code&gt; methods both return a Promise object. If you use either of these methods in an assertion, TestCafe uses the Smart Assertion Query Mechanism, as discussed in Chapter 10.&lt;/p&gt;




&lt;p&gt;Finally, if you need to clear the intercepted requests at any point in your tests, you can use the &lt;code&gt;clear&lt;/code&gt; method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Somewhere inside of your tests after running a few actions.&lt;/span&gt;
&lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;clear&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You usually won't need to clear requests captured by the request logger. You can fine-tune which requests to catch when instantiating the request logger object, and methods like &lt;code&gt;contains&lt;/code&gt; and &lt;code&gt;count&lt;/code&gt; help you refine your assertions. However, some applications make unnecessary and redundant requests that make it challenging to run assertions, so the &lt;code&gt;clear&lt;/code&gt; method is useful in those scenarios.&lt;/p&gt;
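&lt;p&gt;One detail the examples above assume: a logger only captures traffic once you attach it to a fixture or test with the &lt;code&gt;requestHooks&lt;/code&gt; method. Here's a minimal sketch of a complete test file using the logger; the page URL is the hypothetical app from the earlier examples, and the file needs the TestCafe runner to execute.&lt;/p&gt;

```javascript
import { RequestLogger } from "testcafe";

// Hypothetical endpoint and page, matching the earlier examples.
const logger = RequestLogger("https://teamyap.app/api/posts", {
    logResponseBody: true,
    stringifyResponseBody: true
});

fixture("Posts API")
    .page("https://teamyap.app")
    .requestHooks(logger);

test("captures the posts request", async t => {
    // Perform whatever action triggers the request, then assert on the logger.
    await t.expect(logger.contains(record => record.response.statusCode === 200)).ok();
});
```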

&lt;h2&gt;
  
  
  Mocking HTTP requests
&lt;/h2&gt;

&lt;p&gt;Besides logging HTTP requests and verifying what it captures, TestCafe also lets you alter the responses for these requests. The &lt;code&gt;RequestMock&lt;/code&gt; class sets up a request mocker object, which you can then use to manipulate requests as needed for your tests. To set up a request mocker, import the &lt;code&gt;RequestMock&lt;/code&gt; constructor into your tests. Once imported, you can use it to create an instance of the class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;RequestMock&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;testcafe&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;RequestMock&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;However, the object won't do much by itself. Instances of &lt;code&gt;RequestMock&lt;/code&gt; have two required methods that you must chain together to create a mock: &lt;code&gt;onRequestTo&lt;/code&gt; and &lt;code&gt;respond&lt;/code&gt;. Together, these methods form a request mocker that lets you manage how you want any of the application's HTTP requests to respond during testing.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;onRequestTo&lt;/code&gt; method allows you to specify which HTTP request you want to intercept to manage its response during the test run. The argument required by the method is the same as the optional filtering argument used when creating a &lt;code&gt;RequestLogger&lt;/code&gt; instance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A string containing the exact URL.&lt;/li&gt;
&lt;li&gt;An array of URLs.&lt;/li&gt;
&lt;li&gt;A regular expression.&lt;/li&gt;
&lt;li&gt;An object that lets you specify the URL, method, and if it's an AJAX request.&lt;/li&gt;
&lt;li&gt;A predicate function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;respond&lt;/code&gt; method allows you to specify what you want to use as the response during testing. With this method, you can manage what the browser receives: the body of the response, its HTTP status code, and the response headers. The method accepts three optional arguments.&lt;/p&gt;

&lt;p&gt;The first optional argument is the mocked body of the response. You can return a string to simulate an HTML response, an object or array for a JSON response, or a function to customize the response body even further. Most applications use the response body to update the page, so setting the body lets you control how it reacts to a request. If not specified, the mock will return an empty HTML response.&lt;/p&gt;

&lt;p&gt;The second optional argument is the numeric HTTP status code of the response. For instance, you can have the request return 200 for a successful response, 404 for a "Page Not Found" response, or 500 to simulate an internal server error. Many applications check the status code of asynchronous requests to determine whether they succeeded, so this option may be necessary for your tests. By default, the request mocker returns a status code of 200.&lt;/p&gt;

&lt;p&gt;The third optional argument is an object that sets custom headers on the response. Some applications require specific headers to work correctly, and you can use this argument to set those headers as needed. When this option is empty, TestCafe sets a content-type header according to the first argument:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the mocked body of the response is an array or object, the value of the content-type header will be &lt;code&gt;application/json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If the mocked body of the response is a string, the value of the content-type header will be &lt;code&gt;text/html; charset=utf-8&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here are a few different examples of setting up a request mocker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Mock any call to the specified URL and return an&lt;/span&gt;
&lt;span class="c1"&gt;// object to simulate a JSON response.&lt;/span&gt;
&lt;span class="nx"&gt;RequestMock&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onRequestTo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://teamyap.app/api/posts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;respond&lt;/span&gt;&lt;span class="p"&gt;([{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Can someone leave a comment?&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}]);&lt;/span&gt;

&lt;span class="c1"&gt;// Mock any call to URLs that match the regular expression&lt;/span&gt;
&lt;span class="c1"&gt;// pattern, and return an HTML response with a 404 status code.&lt;/span&gt;
&lt;span class="nx"&gt;RequestMock&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onRequestTo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;api&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;posts&lt;/span&gt;&lt;span class="se"&gt;\/(\d&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;)\/&lt;/span&gt;&lt;span class="sr"&gt;comments/&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;respond&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;div&amp;gt;Not found&amp;lt;/div&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Mock any call to URLs that satisfy the conditions of the&lt;/span&gt;
&lt;span class="c1"&gt;// predicate function, and return an empty HTML response with&lt;/span&gt;
&lt;span class="c1"&gt;// a 201 status code and a custom header.&lt;/span&gt;
&lt;span class="nx"&gt;RequestMock&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onRequestTo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://teamyap.app/api/posts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
            &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;post&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
            &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isAjax&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;respond&lt;/span&gt;&lt;span class="p"&gt;(,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Length&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;100&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can also chain multiple &lt;code&gt;onRequestTo&lt;/code&gt; and &lt;code&gt;respond&lt;/code&gt; methods to a single &lt;code&gt;RequestMock&lt;/code&gt; instance to mock more than one request, each with a different response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Mocks different URLs and returns a different response&lt;/span&gt;
&lt;span class="c1"&gt;// for each chained method.&lt;/span&gt;
&lt;span class="nx"&gt;RequestMock&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onRequestTo&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://teamyap.app/api/posts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;post&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;isAjax&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;respond&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;201&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onRequestTo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;api&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;posts&lt;/span&gt;&lt;span class="se"&gt;\/(\d&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;)\/&lt;/span&gt;&lt;span class="sr"&gt;comments/&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;respond&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;div&amp;gt;Not found&amp;lt;/div&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;






&lt;h4&gt;
  
  
  Note
&lt;/h4&gt;

&lt;p&gt;One of the most common uses for setting custom headers when mocking an asynchronous request to an external service is dealing with the Cross-Origin Resource Sharing mechanism, also known as &lt;em&gt;CORS&lt;/em&gt;. &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS"&gt;CORS&lt;/a&gt; is a security mechanism enforced by your web browser that allows an application on one domain to specify who can access its resources.&lt;/p&gt;

&lt;p&gt;On a basic level, CORS works by performing an initial request to the API before your desired API request, known as a &lt;em&gt;preflight request&lt;/em&gt;. The preflight request verifies whether the origin making the request is allowed to make a cross-domain request. If it is, the API returns a successful response with a few headers that tell the browser it's okay to make further API requests.&lt;/p&gt;

&lt;p&gt;When you want to mock an API request that goes to a different domain, the browser still goes through its standard CORS check, even with a request mocker object. That means you need to set the appropriate headers in your mocks to make the browser think that the CORS check passed.&lt;/p&gt;

&lt;p&gt;The headers that the browser needs to perform a successful CORS check vary, depending on the configuration of the application under test. If you need to set these headers, ask the application developers which headers are required to make cross-origin HTTP requests during testing.&lt;/p&gt;
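&lt;p&gt;As a rough sketch, here's how you might set a CORS header in a request mocker. The permissive header value below is an assumption for illustration only; your application under test may require different headers and values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Mock a cross-domain API call, returning a JSON response along
// with the header the browser checks during its CORS verification.
// The wildcard "*" value is a placeholder for testing purposes.
RequestMock()
    .onRequestTo("https://teamyap.app/api/posts")
    .respond([{ id: 1, body: "Can someone leave a comment?" }], 200, {
        "access-control-allow-origin": "*"
    });
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;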




&lt;h2&gt;
  
  
  Attaching to tests and fixtures
&lt;/h2&gt;

&lt;p&gt;Creating a request logger or mocker instance does nothing on its own. You need to tell TestCafe to use these hooks in a specific test or fixture to log and mock HTTP requests using the objects you create in your test suite.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;fixture&lt;/code&gt; and &lt;code&gt;test&lt;/code&gt; functions have a method called &lt;code&gt;requestHooks&lt;/code&gt; where you can attach either an instance of &lt;code&gt;RequestLogger&lt;/code&gt; or &lt;code&gt;RequestMock&lt;/code&gt;. Passing a request logger or mocker instance to this method automatically sets it up to intercept HTTP requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;RequestLogger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://teamyap.app/api/posts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;RequestMock&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onRequestTo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://teamyap.app/api/posts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;respond&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;201&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Attaches the request logger to the fixture and logs&lt;/span&gt;
&lt;span class="c1"&gt;// HTTP requests for all tests under the fixture.&lt;/span&gt;
&lt;span class="nx"&gt;fixture&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;My test fixture&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;requestHooks&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Attaches the request mocker and intercepts and&lt;/span&gt;
&lt;span class="c1"&gt;// mocks HTTP requests for this test only.&lt;/span&gt;
&lt;span class="nx"&gt;test&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;requestHooks&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mock&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Test to mock HTTP requests&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Your test code goes here&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you need to attach more than one request hook in a fixture or test, you can pass them in an array or define them as multiple parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Both of these examples attach the request logger and&lt;/span&gt;
&lt;span class="c1"&gt;// mocker to the fixture.&lt;/span&gt;
&lt;span class="nx"&gt;fixture&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;My test fixture&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;requestHooks&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;mock&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;

&lt;span class="nx"&gt;fixture&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;My test fixture&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;requestHooks&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;mock&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once a hook is attached to a fixture or test, TestCafe will intercept HTTP requests as defined in the request logger or mocker objects as soon as you run your tests. You can use a request logger object inside the tests to validate any captured requests, and request mockers will automatically simulate any responses matching the hook.&lt;/p&gt;
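&lt;p&gt;For example, once a logger is attached through &lt;code&gt;requestHooks&lt;/code&gt;, you can use the logger's &lt;code&gt;contains&lt;/code&gt; method to assert on its captured requests inside a test. This is a minimal sketch; the test body assumes some preceding action triggers the API call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;const logger = RequestLogger("https://teamyap.app/api/posts");

fixture("My test fixture")
    .requestHooks(logger);

test("Test to validate logged requests", async t =&amp;gt; {
    // Run actions that trigger the API call, then verify the
    // logger captured a request that completed successfully.
    await t
        .expect(logger.contains(record =&amp;gt; record.response.statusCode === 200))
        .ok();
});
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;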

&lt;p&gt;In some cases, you may not want a request hook to intercept HTTP requests from the start of your test run. For instance, there might be scenarios where you want to allow a request to occur without any interference from a request mocker, and only after a few actions begin catching those requests and mocking the response.&lt;/p&gt;

&lt;p&gt;You aren't limited to setting a request logger or mocker from the start of your tests. The test controller object has two methods to give you more control over request hooks so you can attach and detach hooks in the middle of a test as needed. The &lt;code&gt;t.addRequestHooks&lt;/code&gt; method attaches a request hook at any point in your test, and the &lt;code&gt;t.removeRequestHooks&lt;/code&gt; method detaches the hook when you no longer want to intercept the requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Test to mock HTTP requests&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Run a few actions without intercepting HTTP requests.&lt;/span&gt;

    &lt;span class="c1"&gt;// Attach a request mocker to begin intercepting requests.&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;addRequestHooks&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mock&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Run more actions&lt;/span&gt;

    &lt;span class="c1"&gt;// Detach the mocker to stop intercepting requests.&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;removeRequestHooks&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mock&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;






&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vlLhMtrj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/static/testing_with_testcafe_book_cover_small.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vlLhMtrj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/static/testing_with_testcafe_book_cover_small.png" alt="End-to-End Testing with TestCafe"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you found this article useful, you can pre-order the End-to-End Testing with TestCafe book at &lt;a href="https://testingwithtestcafe.com"&gt;https://testingwithtestcafe.com&lt;/a&gt; and &lt;strong&gt;receive $10 off the book's original price&lt;/strong&gt; when pre-ordering before the expected release date (on or before July 15, 2020).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you sign up to the mailing list on the book's website, you'll &lt;strong&gt;receive the first three chapters of the book for free&lt;/strong&gt;. In addition to the book sample, you'll get an exclusive discount to pre-order the book.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>automation</category>
      <category>testcafe</category>
      <category>qa</category>
    </item>
    <item>
      <title>End-to-End Testing with TestCafe Book Excerpt: Reporters</title>
      <dc:creator>Dennis Martinez</dc:creator>
      <pubDate>Tue, 23 Jun 2020 11:00:00 +0000</pubDate>
      <link>https://dev.to/dennmart/end-to-end-testing-with-testcafe-book-excerpt-reporters-3de8</link>
      <guid>https://dev.to/dennmart/end-to-end-testing-with-testcafe-book-excerpt-reporters-3de8</guid>
      <description>&lt;p&gt;&lt;em&gt;This article is an excerpt from one of the chapters of my upcoming book, &lt;strong&gt;End-to-End Testing with TestCafe&lt;/strong&gt;. If you're interested in learning more about the book, visit &lt;a href="https://testingwithtestcafe.com"&gt;https://testingwithtestcafe.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Earlier in this book, way back in Chapter 2, we briefly touched upon reporters in TestCafe. Reporters are used for displaying the results of your test run using different kinds of formats. So far, you've only seen the default reporter when running your tests, but TestCafe has other reporters available to use if you need to show your test results in a different format.&lt;/p&gt;

&lt;p&gt;TestCafe ships with five different reporters you can begin to use immediately. Each reporter displays the outcome of your test execution in various ways, giving you some options on how you need to manage the results. You can use them for showing the results of each test as they run, including detailed error information if an assertion fails. You can also use the output of a reporter for an external service to process.&lt;/p&gt;

&lt;h2&gt;
  
  
  TestCafe's built-in reporters
&lt;/h2&gt;

&lt;p&gt;Below are the built-in reporters included with TestCafe, created and maintained by the TestCafe team.&lt;/p&gt;

&lt;h3&gt;
  
  
  spec
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;spec&lt;/code&gt; reporter is the default used by TestCafe. This reporter groups your tests by fixture and displays each test scenario's full name under its fixture. At the end of the test run, it shows the number of tests executed, the result of each test (passed, skipped, or failed), and the total time the test execution took. It does not display how long each test took to execute.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dejMPAfB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/06/testcafe_reporter_spec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dejMPAfB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/06/testcafe_reporter_spec.png" alt="End-to-End Testing with TestCafe Book Excerpt: Reporters"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Configuration examples
&lt;/h4&gt;

&lt;p&gt;The following examples use the &lt;code&gt;spec&lt;/code&gt; reporter when running your tests using the configured browser and test files.&lt;/p&gt;

&lt;h6&gt;
  
  
  Command line setting:
&lt;/h6&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;testcafe chrome &lt;span class="k"&gt;*&lt;/span&gt;_test.js &lt;span class="nt"&gt;-r&lt;/span&gt; spec
testcafe chrome &lt;span class="k"&gt;*&lt;/span&gt;_test.js &lt;span class="nt"&gt;--reporter&lt;/span&gt; spec
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h6&gt;
  
  
  Configuration file setting:
&lt;/h6&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"reporter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"spec"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  list
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;list&lt;/code&gt; reporter displays both the fixture name and test scenario name as a list, one per line. The output is similar to the &lt;code&gt;spec&lt;/code&gt; reporter's, with the main difference being that it doesn't group the tests by their fixtures. At the end of the test run, it shows the number of tests executed, the result of each test (passed, skipped, or failed), and the total time the test execution took. It does not display how long each test took to execute.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---6DTs0sg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/06/testcafe_reporter_list.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---6DTs0sg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/06/testcafe_reporter_list.png" alt="End-to-End Testing with TestCafe Book Excerpt: Reporters"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Configuration examples
&lt;/h4&gt;

&lt;p&gt;The following examples use the &lt;code&gt;list&lt;/code&gt; reporter when running your tests using the configured browser and test files.&lt;/p&gt;

&lt;h6&gt;
  
  
  Command line setting:
&lt;/h6&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;testcafe chrome &lt;span class="k"&gt;*&lt;/span&gt;_test.js &lt;span class="nt"&gt;-r&lt;/span&gt; list
testcafe chrome &lt;span class="k"&gt;*&lt;/span&gt;_test.js &lt;span class="nt"&gt;--reporter&lt;/span&gt; list
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h6&gt;
  
  
  Configuration file setting:
&lt;/h6&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"reporter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"list"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  minimal
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;minimal&lt;/code&gt; reporter only uses a few symbols to display the result of your tests, without showing any fixture or test scenario names. Passing tests are shown as a dot, failing tests as an exclamation point, and skipped tests as a dash. At the end of the test run, it only shows the number of tests executed and the result of each test (passed, skipped, or failed). It does not display how long each test took to run or the total time the test execution took.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1lK-gyZB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/06/testcafe_reporter_minimal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1lK-gyZB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/2020/06/testcafe_reporter_minimal.png" alt="End-to-End Testing with TestCafe Book Excerpt: Reporters"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Configuration examples
&lt;/h4&gt;

&lt;p&gt;The following examples use the &lt;code&gt;minimal&lt;/code&gt; reporter when running your tests using the configured browser and test files.&lt;/p&gt;

&lt;h6&gt;
  
  
  Command line setting:
&lt;/h6&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;testcafe chrome &lt;span class="k"&gt;*&lt;/span&gt;_test.js &lt;span class="nt"&gt;-r&lt;/span&gt; minimal
testcafe chrome &lt;span class="k"&gt;*&lt;/span&gt;_test.js &lt;span class="nt"&gt;--reporter&lt;/span&gt; minimal
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h6&gt;
  
  
  Configuration file setting:
&lt;/h6&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"reporter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"minimal"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  xunit
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;xunit&lt;/code&gt; reporter returns the results of your test run in XML, using the standard xUnit format (more widely known as the &lt;a href="https://help.catchsoftware.com/display/ET/JUnit+Format"&gt;JUnit format&lt;/a&gt;). This reporter is useful when executing your tests in a continuous integration system such as Jenkins or CircleCI. Usually, continuous integration systems can collect, process, and format these XML files into a readable document as part of your test run history.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;?xml version="1.0" encoding="UTF-8" ?&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;testsuite&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"TestCafe Tests: Chrome 83.0.4103.106 / macOS 10.15.5"&lt;/span&gt; &lt;span class="na"&gt;tests=&lt;/span&gt;&lt;span class="s"&gt;"9"&lt;/span&gt; &lt;span class="na"&gt;failures=&lt;/span&gt;&lt;span class="s"&gt;"0"&lt;/span&gt; &lt;span class="na"&gt;skipped=&lt;/span&gt;&lt;span class="s"&gt;"0"&lt;/span&gt; &lt;span class="na"&gt;errors=&lt;/span&gt;&lt;span class="s"&gt;"0"&lt;/span&gt; &lt;span class="na"&gt;time=&lt;/span&gt;&lt;span class="s"&gt;"34.554"&lt;/span&gt; &lt;span class="na"&gt;timestamp=&lt;/span&gt;&lt;span class="s"&gt;"Thu, 18 Jun 2020 05:44:13 GMT"&lt;/span&gt; &lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;testcase&lt;/span&gt; &lt;span class="na"&gt;classname=&lt;/span&gt;&lt;span class="s"&gt;"TeamYap Administrator sections"&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"TeamYap admin can see admin sections on the sidebar"&lt;/span&gt; &lt;span class="na"&gt;time=&lt;/span&gt;&lt;span class="s"&gt;"5.082"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/testcase&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;testcase&lt;/span&gt; &lt;span class="na"&gt;classname=&lt;/span&gt;&lt;span class="s"&gt;"TeamYap Administrator sections"&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"TeamYap user can&amp;amp;#39;t see admin sections on the sidebar"&lt;/span&gt; &lt;span class="na"&gt;time=&lt;/span&gt;&lt;span class="s"&gt;"4.138"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/testcase&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;testcase&lt;/span&gt; &lt;span class="na"&gt;classname=&lt;/span&gt;&lt;span class="s"&gt;"TeamYap Administrator sections"&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"TeamYap admin can access organization settings"&lt;/span&gt; &lt;span class="na"&gt;time=&lt;/span&gt;&lt;span class="s"&gt;"1.813"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/testcase&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;testcase&lt;/span&gt; &lt;span class="na"&gt;classname=&lt;/span&gt;&lt;span class="s"&gt;"TeamYap Administrator sections"&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"TeamYap admin can add and sort profile questions"&lt;/span&gt; &lt;span class="na"&gt;time=&lt;/span&gt;&lt;span class="s"&gt;"6.337"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/testcase&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;testcase&lt;/span&gt; &lt;span class="na"&gt;classname=&lt;/span&gt;&lt;span class="s"&gt;"TeamYap Feed"&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"Logged-in user can create new feed post"&lt;/span&gt; &lt;span class="na"&gt;time=&lt;/span&gt;&lt;span class="s"&gt;"3.091"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/testcase&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;testcase&lt;/span&gt; &lt;span class="na"&gt;classname=&lt;/span&gt;&lt;span class="s"&gt;"TeamYap Feed"&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"Logged-in user can comment on feed post"&lt;/span&gt; &lt;span class="na"&gt;time=&lt;/span&gt;&lt;span class="s"&gt;"4.045"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/testcase&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;testcase&lt;/span&gt; &lt;span class="na"&gt;classname=&lt;/span&gt;&lt;span class="s"&gt;"TeamYap Login"&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"User with valid account can log in"&lt;/span&gt; &lt;span class="na"&gt;time=&lt;/span&gt;&lt;span class="s"&gt;"2.899"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/testcase&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;testcase&lt;/span&gt; &lt;span class="na"&gt;classname=&lt;/span&gt;&lt;span class="s"&gt;"TeamYap Responsive Test"&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"The application hides the sidebar when resizing viewport"&lt;/span&gt; &lt;span class="na"&gt;time=&lt;/span&gt;&lt;span class="s"&gt;"2.919"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/testcase&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;testcase&lt;/span&gt; &lt;span class="na"&gt;classname=&lt;/span&gt;&lt;span class="s"&gt;"TeamYap Settings"&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"User can update and delete their profile picture"&lt;/span&gt; &lt;span class="na"&gt;time=&lt;/span&gt;&lt;span class="s"&gt;"4.158"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/testcase&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/testsuite&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  Configuration examples
&lt;/h4&gt;

&lt;p&gt;The following examples show how to use the &lt;code&gt;xunit&lt;/code&gt; reporter when running your tests with your configured browser and test files.&lt;/p&gt;

&lt;h6&gt;
  
  
  Command line setting:
&lt;/h6&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;testcafe chrome &lt;span class="k"&gt;*&lt;/span&gt;_test.js &lt;span class="nt"&gt;-r&lt;/span&gt; xunit
testcafe chrome &lt;span class="k"&gt;*&lt;/span&gt;_test.js &lt;span class="nt"&gt;--reporter&lt;/span&gt; xunit
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h6&gt;
  
  
  Configuration file setting:
&lt;/h6&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"reporter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"xunit"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  json
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;json&lt;/code&gt; reporter returns the results of your test run in a simple JSON format. Like the &lt;code&gt;xunit&lt;/code&gt; reporter, it works well when you want to process your test results in another service. Since most build systems accept the xUnit / JUnit XML format as a standard, you may not need the &lt;code&gt;json&lt;/code&gt; reporter in those environments. For custom services, however, JSON is a flexible alternative. It also contains additional information about your test run, like metadata and screenshot paths.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"startTime"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2020-06-18T05:47:40.385Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"endTime"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2020-06-18T05:48:15.520Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"userAgents"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"Chrome 83.0.4103.106 / macOS 10.15.5"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"passed"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"total"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"skipped"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"fixtures"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TeamYap Login"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/home/dennmart/src/end_to_end_testing_with_testcafe/login_test.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"meta"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"tests"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"User with valid account can log in"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"meta"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"errs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"durationMs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3331&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"screenshotPath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"skipped"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TeamYap Feed"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/home/testcafe/end_to_end_testing_with_testcafe/feed_test.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"meta"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"tests"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Logged-in user can create new feed post"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"meta"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"errs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"durationMs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3096&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"screenshotPath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"skipped"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Logged-in user can comment on feed post"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"meta"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"errs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"durationMs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4504&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"screenshotPath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"skipped"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Shortened&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;brevity...&lt;/span&gt;&lt;span class="w"&gt;

  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"warnings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
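As a quick illustration of processing this output in a custom service, the sketch below walks the report's `fixtures` array and totals up the results in plain Node.js. The report object is embedded inline for brevity (mirroring the structure shown above); in a real script you would load it from the saved file with `JSON.parse(fs.readFileSync(path, 'utf8'))`.

```javascript
// A minimal sketch of consuming the json reporter's output.
// The object below mirrors the structure of the report shown above.
const report = {
  passed: 9,
  total: 9,
  skipped: 0,
  fixtures: [
    {
      name: 'TeamYap Login',
      tests: [
        { name: 'User with valid account can log in', errs: [], durationMs: 3331, skipped: false }
      ]
    },
    {
      name: 'TeamYap Feed',
      tests: [
        { name: 'Logged-in user can create new feed post', errs: [], durationMs: 3096, skipped: false },
        { name: 'Logged-in user can comment on feed post', errs: [], durationMs: 4504, skipped: false }
      ]
    }
  ]
};

// Flatten the fixtures into one list of tests, tagging each with its fixture name.
const allTests = report.fixtures.flatMap(fixture =>
  fixture.tests.map(test => ({ fixture: fixture.name, ...test }))
);

// Any test with entries in its errs array failed.
const failures = allTests.filter(test => test.errs.length > 0);
const totalMs = allTests.reduce((sum, test) => sum + test.durationMs, 0);

console.log(`${report.passed}/${report.total} passed, ${failures.length} failed`);
console.log(`Total test time: ${(totalMs / 1000).toFixed(1)}s`);
```

From here, the flattened list is easy to forward to whatever custom service you use, since each entry carries the fixture name, duration, and error details together.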



&lt;h4&gt;
  
  
  Configuration examples
&lt;/h4&gt;

&lt;p&gt;The following examples show how to use the &lt;code&gt;json&lt;/code&gt; reporter when running your tests with your configured browser and test files.&lt;/p&gt;

&lt;h6&gt;
  
  
  Command line setting:
&lt;/h6&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;testcafe chrome &lt;span class="k"&gt;*&lt;/span&gt;_test.js &lt;span class="nt"&gt;-r&lt;/span&gt; json
testcafe chrome &lt;span class="k"&gt;*&lt;/span&gt;_test.js &lt;span class="nt"&gt;--reporter&lt;/span&gt; json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h6&gt;
  
  
  Configuration file setting:
&lt;/h6&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"reporter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"json"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Other ways of using reporters
&lt;/h2&gt;

&lt;p&gt;When running your tests, every reporter writes its results to the terminal as standard output (&lt;code&gt;stdout&lt;/code&gt;) by default. During test suite development, you'll want to see the results as your tests execute, so standard output in your terminal works well. But if you need to process the results in a separate service, you'll usually need to store them in a file. TestCafe allows you to save the test results from any reporter to a file by specifying a file path along with the configured reporter.&lt;/p&gt;

&lt;p&gt;In the command line, you can add the file path after the reporter type, separated by a colon. The following command line example shows how to save the results of a test run in a JSON file under the &lt;code&gt;reports&lt;/code&gt; sub-directory in your current project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;testcafe chrome &lt;span class="k"&gt;*&lt;/span&gt;_test.js &lt;span class="nt"&gt;-r&lt;/span&gt; json:reports/test_results.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you want to configure the same setting in your &lt;code&gt;.testcaferc.json&lt;/code&gt; configuration file, the &lt;code&gt;reporter&lt;/code&gt; key needs to be an object with a &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;output&lt;/code&gt; property for setting the reporter type and file path, respectively.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"reporter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"json"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"output"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"reports/test_results.json"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Also, you're not limited to a single reporter when executing your tests. You can combine multiple reporters in a single test run. The only restriction to keep in mind is that only one reporter can output its results to the terminal; the rest must save their results to a file. If you attempt to output the results of more than one reporter to &lt;code&gt;stdout&lt;/code&gt;, TestCafe will throw an error.&lt;/p&gt;

&lt;p&gt;In the command line, you can specify multiple reporters by separating each reporter type with a comma, keeping in mind that only one reporter can display its results in the terminal. The following command line example shows how to use the &lt;code&gt;minimal&lt;/code&gt; and &lt;code&gt;xunit&lt;/code&gt; reporters in a single test run.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;testcafe chrome &lt;span class="k"&gt;*&lt;/span&gt;_test.js &lt;span class="nt"&gt;-r&lt;/span&gt; minimal,xunit:reports/results.xml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you use the configuration file to set up multiple reporters, the &lt;code&gt;reporter&lt;/code&gt; key needs to be an array of objects, each using the &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;output&lt;/code&gt; properties as required.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"reporter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"minimal"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"xunit"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"output"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"reports/results.xml"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If these reporters don't serve your needs or you want to extend TestCafe's functionality even further, you can find reporters created by the TestCafe community by searching for the term &lt;code&gt;testcafe-reporter&lt;/code&gt; at &lt;a href="https://www.npmjs.com/"&gt;https://www.npmjs.com/&lt;/a&gt;. You'll discover reporters that can change the way the test results look or submit the results to external test management tools like TestRail and Jira for managing your test cases automatically.&lt;/p&gt;
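If none of the community reporters fit either, you can write your own: a reporter plugin is a Node module exporting a factory function that returns an object implementing TestCafe's reporter hooks (`reportTaskStart`, `reportFixtureStart`, `reportTestDone`, and `reportTaskDone`). The sketch below shows that shape; the counting logic and manual hook calls are purely illustrative, not from any published plugin, and real plugins also use the text helpers TestCafe binds to `this` (such as `this.write()` and `this.newline()`).

```javascript
// Illustrative sketch of a TestCafe reporter plugin's structure: a factory
// function returning an object with the reporter hook methods TestCafe calls
// during a test run. The tallying here is just example logic.
function createExampleReporter () {
  return {
    noColors: true,
    failedCount: 0,

    reportTaskStart (startTime, userAgents, testCount) {
      this.testCount = testCount; // total tests in this run
    },

    reportFixtureStart (name, path, meta) {
      this.currentFixture = name; // track which fixture is running
    },

    reportTestDone (name, testRunInfo, meta) {
      // A test failed if TestCafe recorded any errors for it.
      if (testRunInfo.errs.length > 0) this.failedCount++;
    },

    reportTaskDone (endTime, passedCount, warnings, result) {
      this.passedCount = passedCount;
    }
  };
}

// Feeding the hooks by hand, roughly the way TestCafe would during a run:
const reporter = createExampleReporter();
reporter.reportTaskStart(new Date(), ['Chrome'], 2);
reporter.reportFixtureStart('TeamYap Login', 'login_test.js', {});
reporter.reportTestDone('User with valid account can log in', { errs: [] }, {});
reporter.reportTestDone('User with invalid password sees an error', { errs: [new Error('Assertion failed')] }, {});
reporter.reportTaskDone(new Date(), 1, [], {});
console.log(`${reporter.passedCount}/${reporter.testCount} passed`);
```

Publishing a module like this under the `testcafe-reporter-<name>` naming convention is what makes it discoverable through the npm search mentioned above.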




&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vlLhMtrj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/static/testing_with_testcafe_book_cover_small.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vlLhMtrj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-tester.com/content/images/static/testing_with_testcafe_book_cover_small.png" alt="End-to-End Testing with TestCafe"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you found this article useful, visit &lt;a href="https://testingwithtestcafe.com"&gt;https://testingwithtestcafe.com&lt;/a&gt;. If you sign up to the mailing list, you'll &lt;strong&gt;receive the first three chapters of the book for free&lt;/strong&gt;. You'll also receive exclusive updates and be among the first to know when the book is available for purchase for a discount.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>automation</category>
      <category>testcafe</category>
      <category>qa</category>
    </item>
  </channel>
</rss>
