<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Simme</title>
    <description>The latest articles on DEV Community by Simme (@simme).</description>
    <link>https://dev.to/simme</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F229201%2F6780f4cf-8285-47dc-ad49-4765290d7dc8.jpeg</url>
      <title>DEV Community: Simme</title>
      <link>https://dev.to/simme</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/simme"/>
    <language>en</language>
    <item>
      <title>Keyoxide Proof</title>
      <dc:creator>Simme</dc:creator>
      <pubDate>Sun, 21 Jan 2024 12:29:17 +0000</pubDate>
      <link>https://dev.to/simme/keyoxide-proof-m6i</link>
      <guid>https://dev.to/simme/keyoxide-proof-m6i</guid>
      <description>&lt;p&gt;My Keyoxide Proof is aspe:keyoxide.org:LHBS3ZA7NH55JKYJFMMKZTQ3DI.&lt;/p&gt;

&lt;p&gt;With this you should be able to verify the validity of my identity on the &lt;a href="//keyoxide.org"&gt;Keyoxide platform&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>keyoxide</category>
      <category>proof</category>
    </item>
    <item>
      <title>Observability is becoming mission critical, but who watches the watchmen?</title>
      <dc:creator>Simme</dc:creator>
      <pubDate>Wed, 14 Sep 2022 22:18:24 +0000</pubDate>
      <link>https://dev.to/simme/observability-is-becoming-mission-critical-but-who-watches-the-watchmen-5b44</link>
      <guid>https://dev.to/simme/observability-is-becoming-mission-critical-but-who-watches-the-watchmen-5b44</guid>
      <description>&lt;p&gt;&lt;em&gt;Before we get started, I just want to get this out of the way: I work at Canonical, and more specifically, I run the observability product team there, currently doing lots of cool stuff &lt;a href="https://charmhub.io/topics/canonical-observability-stack" rel="noopener noreferrer"&gt;around observability in Juju on both K8s and machines&lt;/a&gt;. In this piece, I'm actively trying to stay neutral, but it is nonetheless information worth disclosing. I'm also hiring, so if you're also super excited about building world-class observability solutions, &lt;a href="https://canonical.com/careers/2166631" rel="noopener noreferrer"&gt;don't be shy - apply!&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Over the last couple of years, there has been quite a lot of development in the area of lowering the barrier to entry for observability. There are now quite a few reasonably mature options out there that let you set up a good monitoring stack either through a few clicks or with a few one-liners in the terminal.&lt;/p&gt;

&lt;p&gt;In the managed open-source space, the most successful one so far is probably &lt;a href="https://grafana.com/products/cloud/" rel="noopener noreferrer"&gt;Grafana Cloud&lt;/a&gt;, but there is definitely no shortage of closed-source vendors providing APM solutions where all you need to do to get started is drop one or more agents into your cluster or onto your machines. Even in the case of self-hosted open-source, there are quite a few options available. &lt;a href="https://observatorium.io/" rel="noopener noreferrer"&gt;Observatorium&lt;/a&gt;, &lt;a href="https://opstrace.com/" rel="noopener noreferrer"&gt;OpsTrace&lt;/a&gt; and &lt;a href="https://charmhub.io/topics/canonical-observability-stack" rel="noopener noreferrer"&gt;COS&lt;/a&gt; all provide different degrees of turn-key, out-of-the-box experience, even if the most popular option here remains rolling it yourself, picking the tools you think are best for the job.&lt;/p&gt;

&lt;p&gt;With the increasing interest in observability as a practice, and the decreasing barrier to entry, a lot of organisations will, if they haven't already, find that observability becomes more and more critical as their practices improve, to the point where I would argue that it is no longer icing on the cake that makes the work of SREs easier, but mission-critical for their entire business.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who watches the watchmen?
&lt;/h2&gt;

&lt;p&gt;As this transition in value happens, a new question is starting to gain in importance: who watches the watchmen? Or to put it in words that speak less of my geeky obsession with comics, and more of the topic at hand: what observes the observability stack? How will we be made aware if it is starting to have issues?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eezek78jfp8ch84vkhy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eezek78jfp8ch84vkhy.png" alt="Who watches the watchmen?"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A former colleague of mine used to say that "you only get one chance at making a good impression, but for an observability solution, this is especially true". And I truly think they were (are) right! I mean, let's be honest: if you've been burnt by your production monitoring even once, that solution will have a &lt;strong&gt;REALLY&lt;/strong&gt; hard, if not even impossible, time convincing you to trust it again. &lt;/p&gt;

&lt;p&gt;Never getting an expected alert might very well mean your critical business services end up broken without you knowing. To really twist the knife, also imagine that you've not been alerted that the stack's alerting capabilities are broken, or that no telemetry is being collected anymore. While the value of an observability stack is known, most of us don't really put that much effort into making sure our stack itself stays healthy. What happens, for instance, if our log ingesters suddenly start to choke due to a spike in error logs? If our alerting tool starts to crash loop? Would we even notice? I'll go out on a limb and make an educated guess that for many of us, maybe even the vast majority, the answer would be no.&lt;/p&gt;

&lt;p&gt;What I'm trying to say here is that while stability of course is important, the solution does not need to be fault-free - it can't be, really, just as no software can be. What it does need to be, however, is capable of letting you know when it's starting to misbehave, so you can take proper action early.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the failure modes
&lt;/h2&gt;

&lt;p&gt;Before we go into analysing likely failure modes, I just want to make one thing clear: the visualisation tool, like Grafana, is &lt;strong&gt;not&lt;/strong&gt; something I consider to be a critical part of the stack.  While useful, as long as the rest of our tooling works, we'd always be able to spin up a new visualisation tool somewhere else and connect our datasources to it.  &lt;/p&gt;

&lt;p&gt;To keep it short: as long as our alerting continues to work, and the telemetry signals get collected - we're good.  We should of course monitor our visualisation tool as well, but from a comparative point of view, it's by far the least important one. Instead, let's focus on two really critical, fairly common failure modes:&lt;/p&gt;

&lt;h3&gt;
  
  
  Alerts not firing
&lt;/h3&gt;

&lt;p&gt;Let's say we're running a stack where &lt;a href="https://prometheus.io/docs/alerting/latest/alertmanager/" rel="noopener noreferrer"&gt;Alertmanager&lt;/a&gt; is responsible for alerting. If this Alertmanager (or these Alertmanagers) stops working, there will no longer be anything in place to alert us about the fact that it's down (duh). Some would probably argue that this is why you have something like Grafana in place, with dedicated monitoring screens on the wall displaying the state of your solution in real time. I don't know about you, but I personally forget to look at those screens as soon as I get caught up in something. I also want to be able to get lunch, go to the bathroom every now and then, or even refill my cup of coffee.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rztss1e1it5xzpseh1x.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rztss1e1it5xzpseh1x.jpeg" alt="keep calm and alert on missing alerts"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The solution to this is actually just as simple as it is elegant! We need to set &lt;strong&gt;something&lt;/strong&gt; up, somewhere else, preferably as far away as possible from the stack itself, that continuously receives an always-triggering alert from Alertmanager. In the absence of such an alert, this something will notify you that Alertmanager isn't checking in as expected.&lt;/p&gt;

&lt;p&gt;As for any specific tool or service to help you with this, it's totally up to you. Popular solutions include &lt;a href="https://deadmanssnitch.com/" rel="noopener noreferrer"&gt;Dead Man's Snitch&lt;/a&gt; , &lt;a href="https://cronitor.io/" rel="noopener noreferrer"&gt;Cronitor&lt;/a&gt; and &lt;a href="https://healthchecks.io/" rel="noopener noreferrer"&gt;Healthchecks.io&lt;/a&gt; with the last one &lt;a href="https://github.com/healthchecks/healthchecks" rel="noopener noreferrer"&gt;being available as open-source&lt;/a&gt; in addition to their managed offering. But in reality, you could very well hack something together yourself that would do the job just fine. The important part here is that it needs to serve as a dead man's switch, immediately firing and alerting if your alerting tool fails to check in.&lt;/p&gt;
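
&lt;p&gt;As a rough sketch, and assuming a Prometheus-style rule file, such an always-firing alert could look something like this (the rule and label names are made up for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch of a dead man's switch heartbeat rule
groups:
  - name: meta
    rules:
      - alert: Watchdog
        expr: vector(1)   # always evaluates to a value, so the alert fires continuously
        labels:
          severity: none
        annotations:
          summary: "Alerting pipeline heartbeat - route this to the external dead man's switch"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You would then route this alert to the external service through a dedicated receiver in Alertmanager; if the heartbeat stops arriving, the external service is the one that pages you.&lt;/p&gt;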

&lt;h3&gt;
  
  
  Telemetry missing
&lt;/h3&gt;

&lt;p&gt;We can of course monitor the CPU and memory consumption of our ingesters to give us an early warning of when things are about to go south. We may also monitor and alert on the ingestion rate, using for instance the &lt;code&gt;prometheus_remote_storage_succeeded_samples_total&lt;/code&gt; metric. This metric, however, leaves us a bit vulnerable: overloaded ingesters will naturally also be prevented from ingesting metrics about themselves and their own performance.&lt;/p&gt;

&lt;p&gt;Just as for the previous failure mode, this one will also require us to alert in the &lt;strong&gt;absence&lt;/strong&gt; of something expected. In this case, rather than the absence of an alert, we want to alert on the absence of telemetry being ingested. PromQL and LogQL both have facilities for this, using &lt;a href="https://charmhub.io/topics/canonical-observability-stack" rel="noopener noreferrer"&gt;&lt;code&gt;absent&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://charmhub.io/topics/canonical-observability-stack" rel="noopener noreferrer"&gt;&lt;code&gt;absent_over_time&lt;/code&gt;&lt;/a&gt;. This allows us to set up an alert rule that tracks the absence of a metric over a certain time range, and when there are no longer any new data points within that range, the alert will trigger. As for the alerting expression, we could use the ingestion rate metric above, or something even simpler like the &lt;code&gt;up&lt;/code&gt; metric, wrapping it in an &lt;code&gt;absent&lt;/code&gt; function. Anything will do really, as long as it is being ingested regularly.&lt;/p&gt;
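
&lt;p&gt;To make that concrete, here's a minimal sketch of what such a rule could look like in PromQL. The job label is hypothetical, and the lookback window is something you'd tune to your scrape and push intervals:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;groups:
  - name: telemetry-freshness
    rules:
      - alert: TelemetryMissing
        # Fires when no 'up' samples exist for the (hypothetical) job in the last 10 minutes
        expr: absent_over_time(up{job="my-ingester"}[10m])
        labels:
          severity: critical
        annotations:
          summary: "No telemetry ingested for job my-ingester in the last 10 minutes"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;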

&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;This is by no means an exhaustive list of failure modes for an observability stack. It pinpoints two fairly common and fairly critical scenarios that are easily guarded against.&lt;/p&gt;

&lt;p&gt;As your understanding of your observability stack deepens, you'll be able to identify more possible failure modes and, using the telemetry provided by each component of your stack, guard against them. The very same telemetry is not only useful for observing the behaviour of your stack, but also for observing the behaviour of your incident response team. But that will be a topic for some other time.&lt;/p&gt;

</description>
      <category>observability</category>
      <category>monitoring</category>
      <category>sre</category>
      <category>reliability</category>
    </item>
    <item>
      <title>Error Economics - How to avoid breaking the budget</title>
      <dc:creator>Simme</dc:creator>
      <pubDate>Mon, 23 Aug 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/simme/error-economics-how-to-avoid-breaking-the-budget-3p0g</link>
      <guid>https://dev.to/simme/error-economics-how-to-avoid-breaking-the-budget-3p0g</guid>
      <description>&lt;p&gt;At &lt;a href="https://www.sloconf.com/"&gt;SLOConf 2021&lt;/a&gt; I talked about how we may use error budgets to add pass/fail criterias to reliability tests we run as part of our CI pipelines.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/9Z06PxppYOM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;As Site Reliability Engineers, one of our primary goals is to reduce manual labor, or toil, to a minimum while at the same time keeping the systems we manage as reliable and available as possible. To be able to do this in a safe way, it's really important that we're able to easily inspect the state of the system.&lt;/p&gt;

&lt;p&gt;To measure whether we're successful in this endeavour, we establish service level agreements (SLA), service level indicators (SLI) and service level objectives (SLO). Traditional monitoring is really helpful in doing this, but it won't allow you to take action until the issue is already present, likely already affecting your users, in prod.&lt;/p&gt;

&lt;p&gt;To be able to take action proactively, we may use something like a load generator or reliability testing tool to simulate load on our system, measuring how it behaves even before we've released anything to production. While we'll never be able to compensate fully for the fact that it won't be running in production, we can still simulate production-like load, possibly even while injecting real-world turbulence into the system, giving us a pretty good picture of what we can expect in production as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do we measure success for site reliability?
&lt;/h2&gt;

&lt;p&gt;When we start out creating these service level artifacts, it's usually tempting to over-engineer them, trying to take every edge case into account. My recommendation is that you try to avoid this to the extent possible.&lt;/p&gt;

&lt;p&gt;Instead, aim for a simple set of indicators and objectives that are generic enough for you to use them for multiple systems. You may then expand on them and make them more specific as your understanding of the systems you manage increases over time. Doing this is likely to save you a lot of time, as we otherwise tend to come up with unrealistic or irrelevant measurements or requirements, mainly due to our lack of experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Service Level Indicators
&lt;/h2&gt;

&lt;p&gt;Service level indicators are quantitative measures of a system's health. To make it easy to remember, we may think of this as what we are measuring. If we, for instance, try to come up with some SLIs for a typical web application, we are likely to end up with things like request duration, uptime, and error rates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating SLIs and SLOs
&lt;/h2&gt;

&lt;p&gt;What should be included? In most cases, we only want to include valid requests. A good formula to follow when crafting SLIs is available in the &lt;a href="https://sre.google/"&gt;Google SRE docs&lt;/a&gt;. Those state that an SLI equals the number of good events, divided by the number of valid events, times a hundred, expressed as a percentage.&lt;/p&gt;
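
&lt;p&gt;To make the formula concrete, here's a small calculation with made-up numbers:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SLI = good events / valid events * 100
    = 9,970 successful requests / 10,000 valid requests * 100
    = 99.7%

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;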

&lt;p&gt;As an example: if a user decides to send us a request that is not within the defined constraints of the service, we should of course handle it gracefully, letting the user know the request is unsupported. However, we shouldn't be held responsible for the request not being processed properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Service Level Objectives
&lt;/h2&gt;

&lt;p&gt;Service level objectives, on the other hand, are the targets we set for our SLIs. Think of them as what the measurements should show for things to be OK. For instance, if our SLI is based on request duration and shows what percentage of all requests are below 500ms, our SLO would express how large that percentage needs to be for our service to be considered within the limits.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an error budget?
&lt;/h2&gt;

&lt;p&gt;An error budget is the remainder of the SLI once the SLO has been applied. For instance, if our SLO is 99.9%, that would mean our error budget is the remaining 0.1% up to 100%. To not breach our SLO, we then need to be able to fit all events that do not adhere to the criteria we set up into that 0.1%. This includes outages, service degradations and even planned maintenance windows.&lt;/p&gt;

&lt;p&gt;What I'm trying to say is that while it might feel tempting to go for four nines, or even three, as your SLO (99.99%, 99.9%), this has an astronomical impact on the engineering effort needed. For a downtime/unavailability SLI, a three-nines SLO basically means that you can afford as little as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Daily: 1m 26s&lt;/li&gt;
&lt;li&gt;Weekly: 10m 4s&lt;/li&gt;
&lt;li&gt;Monthly: 43m 49s&lt;/li&gt;
&lt;li&gt;Quarterly: 2h 11m 29s&lt;/li&gt;
&lt;li&gt;Yearly: 8h 45m 56s&lt;/li&gt;
&lt;/ul&gt;
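
&lt;p&gt;These numbers fall straight out of taking 0.1% of each time window. Roughly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;allowed downtime = (1 - 0.999) * time window

monthly: 0.001 * 30.44 days * 24 h * 60 min ≈ 43.8 min  (about 43m 49s)
yearly:  0.001 * 365.25 days * 24 h         ≈ 8.77 h    (about 8h 46m)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;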

&lt;p&gt;For comparing "nines", navigate to &lt;a href="https://uptime.is"&gt;uptime.is&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In my experience, very few systems are critical enough to motivate this level of availability. With an SLO like this, even with rolling restarts and zero downtime deploys, we can't really afford to make any mistakes at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  Burning the budget
&lt;/h2&gt;

&lt;p&gt;When would it be acceptable to burn the budget on purpose? I like to use the following sentence as a rule of thumb:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It is only acceptable to burn error budget on purpose if the goal of the activity causing the burn is to reduce the burn-rate going forward.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Setting expectations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Picking our SLIs
&lt;/h3&gt;

&lt;p&gt;In this demo, we'll be testing a made-up online food ordering service called Hipster Pizza. As service level indicators, we'll be using the response time of requests and the HTTP response status success rate.&lt;/p&gt;

&lt;p&gt;&lt;a href="///blog/static/b61c2d22b2b553d2a050bfefec97a066/37e03/hipster-pizza.jpg"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3bmt6Qgc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/b61c2d22b2b553d2a050bfefec97a066/37e03/hipster-pizza.jpg" alt="hipster pizza" title="hipster pizza"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Picking our SLOs
&lt;/h3&gt;

&lt;p&gt;What would be reasonable SLOs for these SLIs? First, we have to ask ourselves whether we already have commitments to our customers or users in the form of SLAs. If we do, we at the very least need to stay within those.&lt;/p&gt;

&lt;p&gt;However, it's also good to agree on our internal ambitions. And usually, these ambitions turn out to be far less forgiving than whatever we dare to promise our users.&lt;/p&gt;

&lt;p&gt;In this example, we'll use the following SLOs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;95% of all valid requests will have a response time below 300ms&lt;/li&gt;
&lt;li&gt;99.9% of all valid requests will reply with a successful HTTP Response status.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means that the error budget for response time is 5%, while the error budget for HTTP success is 0.1%.&lt;/p&gt;

&lt;h2&gt;
  
  
  Measuring
&lt;/h2&gt;

&lt;p&gt;To know whether we are able to stay within budget, we need to measure this in production. And we also need to assign a time window to our SLOs. For instance, that the SLO is measured on a month-to-month basis, or a sliding 7-day window.&lt;/p&gt;

&lt;p&gt;We also need to continuously test this somehow, to find out whether a certain change introduces a regression that prevents us from hitting our target. This is where k6, or load generators in general, come in.&lt;/p&gt;

&lt;p&gt;Most of the time we only speak about monitoring our SLOs. I would like to propose that we take this one step further. With the traditional approach of monitoring, we're not really equipped to react prior to consuming the budget, especially with the extremely tight budgets we had a look at earlier. Instead we're only going to be alerted once we're already approaching SLO game over.&lt;/p&gt;

&lt;p&gt;Don't get me wrong here, I still believe we need, and should, monitor our production SLOs, but we should also complement this with some kind of indicative testing, allowing us to take action before the budget breach has occurred. Possibly even stopping the release altogether until the issue has been resolved.&lt;/p&gt;

&lt;p&gt;By running a test that simulates the traffic and behavior of users in production, we're able to extrapolate the effect a change would have over time and use that as an indicator of how the change would affect production SLOs.&lt;/p&gt;

&lt;p&gt;Before we get into that, however, we also need to talk a bit about scheduled downtime, or maintenance windows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Accounting for scheduled downtime
&lt;/h2&gt;

&lt;p&gt;In a real-world production system, maintenance activities like these likely occur all the time. In some cases, they are possible without requiring any downtime whatsoever, but for the vast majority, some downtime every now and then is unavoidable, even with rolling restarts, canaries, feature flags and blue-green deployments in place.&lt;/p&gt;

&lt;p&gt;We should put some time into identifying what activities we do that actually require downtime, and account for that in our test. If our SLOs are measured on a month-to-month basis, and we usually have 10 minutes of downtime every workday, we also need to deduct a corresponding amount from our error budget.&lt;/p&gt;

&lt;p&gt;For a month with 31 days, 22 of them being workdays, 10 minutes of downtime every workday would mean we have a planned downtime of 220 minutes per 744 hours, or 0.0049%.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;220/(744*60) = 0,0049%

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll now adjust the SLOs we use in our test accordingly, prior to calculating the error budget. Heavily simplified, and not taking usage volume spread and such into account, this would in our case mean that the actual error budgets for our test are 0.0951% and 4.9951%.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;By using these calculated error budgets, we may then express them as thresholds in our tests, and use them as pass/fail criteria for whether our build was successful or not. And once we have those in our CI workflow, we'll also be able to increase our confidence in product iterations not breaking the error budget.&lt;/p&gt;

&lt;p&gt;Let's have a look at how this could look in k6. k6 is available for free and as open-source. Hooking it up with your pre-existing CI pipelines is usually done without any additional cost or significant time investment.&lt;/p&gt;

&lt;p&gt;If you're using some other load testing tool that also supports setting runtime thresholds, this will likely work just as well there. For this demo, we're going to use this small test script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import http from 'k6/http'

export const options = {
  vus: 60,
  duration: '30s'
}

export default function() {
  const res = http.get('https://test-api.k6.io')
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So what does this script actually do? For a duration of 30 seconds, it will run 60 virtual users in parallel, all visiting the page &lt;a href="https://test-api.k6.io"&gt;https://test-api.k6.io&lt;/a&gt; as many times as possible. In a real-world scenario, this test would most likely be a lot more extensive, and try to mimic a user's interaction with the service we're defining our SLO for.&lt;/p&gt;

&lt;p&gt;Let's run our test and have a look at the stats it returns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http_req_duration..............: avg=132.05ms min=101.44ms med=127.55ms max=284.2ms p(90)=156.19ms p(95)=165.75ms
http_req_failed................: 0.00% ✓ 0 ✗ 6576

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we already get all the information we need to be able to make out whether we fulfil our SLOs. Let's also define some thresholds to automatically detect whether our test busted our error budget or not.&lt;/p&gt;

&lt;p&gt;Let's set those as our thresholds in our k6 script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  export const options = {
    thresholds: {
      http_req_duration: ['p(95.0049)&amp;lt;300'], // 95% below 300ms, accounting for planned downtime
      http_req_failed: ['rate&amp;lt;0.00951'] // 99.99049% successful, accounting for planned downtime
    }
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! By using your SLOs and SLIs as pass/fail thresholds in your CI workflow you'll be able to increase your confidence in product iterations not breaking the error budget.&lt;/p&gt;

</description>
      <category>reliability</category>
      <category>testing</category>
      <category>performance</category>
      <category>sre</category>
    </item>
    <item>
      <title>Running distributed k6 tests on Kubernetes</title>
      <dc:creator>Simme</dc:creator>
      <pubDate>Thu, 11 Feb 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/k6/running-distributed-k6-tests-on-kubernetes-1fp7</link>
      <guid>https://dev.to/k6/running-distributed-k6-tests-on-kubernetes-1fp7</guid>
      <description>&lt;blockquote&gt;
&lt;h3&gt;
  
  
  📖What you will learn
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;What the operator pattern is and when it is useful&lt;/li&gt;
&lt;li&gt;Deploying the k6 operator in your kubernetes cluster&lt;/li&gt;
&lt;li&gt;Running a distributed k6 test in your own cluster&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  ⚠️ Experimental
&lt;/h4&gt;

&lt;p&gt;The project used in this article is experimental and changes a lot between commits. Use at your own discretion.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="/blog/static/49d58b70df40a0fa1aa75dd1f6d1f670/acdd1/operator.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fk6.io%2Fblog%2Fstatic%2F49d58b70df40a0fa1aa75dd1f6d1f670%2F7842b%2Foperator.png" title="operator" alt="operator"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;One of the questions we often get in the forum is how to run distributed k6 tests on your own infrastructure. While we believe that &lt;a href="https://k6.io/docs/testing-guides/running-large-tests" rel="noopener noreferrer"&gt;running large load tests&lt;/a&gt; is possible even when running on a single node, we do appreciate that this is something some of our users might want to do.&lt;/p&gt;

&lt;p&gt;There are at least a couple of reasons why you would want to do this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You run everything else in Kubernetes and would like k6 to be executed in the same fashion as all your other infrastructure components. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You have access to a couple of high-end nodes and want to pool their resources into a large-scale stress test.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You have access to multiple low-end or highly utilized nodes and need to pool their resources to be able to reach your target VU count or Requests per Second (RPS).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To be able to follow along in this guide, you’ll need access to a Kubernetes cluster, with enough privileges to apply objects.&lt;/p&gt;

&lt;p&gt;You’ll also need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-sigs/kustomize/" rel="noopener noreferrer"&gt;Kustomize&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noopener noreferrer"&gt;Kubectl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.gnu.org/software/make/" rel="noopener noreferrer"&gt;Make&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Kubernetes Operator pattern
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="noopener noreferrer"&gt;operator pattern&lt;/a&gt; is a way of extending Kubernetes so that you may use &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="noopener noreferrer"&gt;custom resources&lt;/a&gt; to manage applications running in the cluster. The pattern aims to automate the tasks that a human operator would usually do, like provisioning new application components, changing the configuration, or resolving problems that occur.&lt;/p&gt;

&lt;p&gt;This is accomplished using custom resources which, for the scope of this article, could be compared to the traditional service requests that you would file to your system operator to get changes applied to the environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="/blog/static/8bc25b5fb3de365092d17de6121c3280/d9c41/pattern.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fk6.io%2Fblog%2Fstatic%2F8bc25b5fb3de365092d17de6121c3280%2F7842b%2Fpattern.png" title="operator pattern" alt="operator pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The operator will listen for changes to, or creation of, K6 custom resource objects. Once a change is detected, it will react by modifying the cluster state, spinning up k6 test jobs as needed. It will then use the parallelism argument to figure out how to split the workload between the jobs using &lt;a href="https://k6.io/docs/using-k6/options#execution-segment" rel="noopener noreferrer"&gt;execution segments&lt;/a&gt;.&lt;/p&gt;
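
&lt;p&gt;As a rough illustration of what that splitting amounts to, with &lt;code&gt;parallelism: 4&lt;/code&gt; each runner ends up executing roughly the equivalent of the following k6 options. This is just a sketch of the mechanism; the actual invocation is handled by the operator for you.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: how a 4-way split maps onto k6 execution segments
k6 run --execution-segment "0:1/4"   --execution-segment-sequence "0,1/4,2/4,3/4,1" test.js   # runner 1
k6 run --execution-segment "1/4:2/4" --execution-segment-sequence "0,1/4,2/4,3/4,1" test.js   # runner 2
k6 run --execution-segment "2/4:3/4" --execution-segment-sequence "0,1/4,2/4,3/4,1" test.js   # runner 3
k6 run --execution-segment "3/4:1"   --execution-segment-sequence "0,1/4,2/4,3/4,1" test.js   # runner 4

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;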

&lt;h2&gt;
  
  
  Using the k6 operator to run a distributed load test in your Kubernetes cluster
&lt;/h2&gt;

&lt;p&gt;We'll now go through the steps required to deploy, run, and clean up after the k6 operator.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloning the repository
&lt;/h3&gt;

&lt;p&gt;Before we get started, we need to clone the operator repository from GitHub and navigate to the repository root:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ git clone https://github.com/k6io/operator &amp;amp;&amp;amp; cd operator



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Deploying the operator
&lt;/h3&gt;

&lt;p&gt;Deploying the operator is done by running the command below, with kubectl configured to use the context of the cluster that you want to deploy it to.&lt;/p&gt;

&lt;p&gt;First, make sure you are using the right context:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl config get-contexts

CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* harley harley harley
          jean jean jean
          ripley ripley ripley



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then deploy the operator bundle using make. This will also apply the roles, namespaces, bindings and services needed to run the operator.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ make deploy

/Users/simme/.go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cd config/manager &amp;amp;&amp;amp; /Users/simme/.go/bin/kustomize edit set image controller=ghcr.io/k6io/operator:latest
/Users/simme/.go/bin/kustomize build config/default | kubectl apply -f -
namespace/k6-operator-system created
customresourcedefinition.apiextensions.k8s.io/k6s.k6.io created
role.rbac.authorization.k8s.io/k6-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/k6-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/k6-operator-proxy-role created
clusterrole.rbac.authorization.k8s.io/k6-operator-metrics-reader created
rolebinding.rbac.authorization.k8s.io/k6-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/k6-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/k6-operator-proxy-rolebinding created
service/k6-operator-controller-manager-metrics-service created
deployment.apps/k6-operator-controller-manager created



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Writing our test script
&lt;/h3&gt;

&lt;p&gt;Once that is done, we need to create a config map containing the test script. For the operator to pick up our script, we need to name the file &lt;code&gt;test.js&lt;/code&gt;. For this article, we’ll be using the test script below:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import http from 'k6/http';
import { check } from 'k6';

export let options = {
  stages: [
    { target: 200, duration: '30s' },
    { target: 0, duration: '30s' },
  ],
};

export default function () {
  const result = http.get('https://test-api.k6.io/public/crocodiles/');
  check(result, {
    'http response status code is 200': result.status === 200,
  });
}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Before we continue, we'll run the script once locally to make sure it works:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ k6 run test.js



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you’ve never written a k6 test before, we recommend that you start by reading &lt;a href="https://k6.io/docs/getting-started/running-k6" rel="noopener noreferrer"&gt;this getting started article from the documentation&lt;/a&gt;, just to get a feel for how it works.&lt;/p&gt;

&lt;p&gt;Let’s walk through this script and make sure we understand what is happening: We’ve set up two stages that will run for 30 seconds each. The first one will ramp up linearly to 200 VUs over 30 seconds. The second one will ramp down to 0 again over 30 seconds.&lt;/p&gt;

&lt;p&gt;In this case the operator will tell each test runner to run only a portion of the total VUs. For instance, if the script calls for 40 VUs, and &lt;code&gt;parallelism&lt;/code&gt; is set to 4, the test runners would have 10 VUs each.&lt;/p&gt;

&lt;p&gt;Each VU will then loop over the default function as many times as possible during the execution. It will execute an HTTP GET request against the URL we’ve configured, and check that the response has HTTP status 200. In a real test, we'd probably throw in a sleep here to emulate the think time of the user, but as the purpose of this article is to run a distributed test with as much throughput as possible, I've deliberately skipped it.&lt;/p&gt;
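
&lt;p&gt;If you do want think time, a sketch of the default function with a sleep added could look like this (one second is an arbitrary choice):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import http from 'k6/http';
import { check, sleep } from 'k6';

export default function () {
  const result = http.get('https://test-api.k6.io/public/crocodiles/');
  check(result, {
    'http response status code is 200': result.status === 200,
  });
  sleep(1); // emulate the user's think time between iterations
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;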

&lt;h3&gt;
  
  
  Deploying our test script
&lt;/h3&gt;

&lt;p&gt;Once the test script is done, we have to deploy it to the Kubernetes cluster. We’ll use a &lt;code&gt;ConfigMap&lt;/code&gt; to accomplish this. The name of the map can be whatever you like, but for this demo we'll go with &lt;code&gt;crocodile-stress-test&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you want more than one test script available in your cluster, you just repeat this process for each one, giving the maps different names.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl create configmap crocodile-stress-test --from-file /path/to/our/test.js

configmap/crocodile-stress-test created



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;h4&gt;
  
  
  ⚠️ Namespaces
&lt;/h4&gt;

&lt;p&gt;For this to work, the k6 custom resource and the config map need to be deployed in the same namespace.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s have a look at the result:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl describe configmap crocodile-stress-test

Name: crocodile-stress-test
Namespace: default
Labels: &amp;lt;none&amp;gt;
Annotations: &amp;lt;none&amp;gt;

Data
====
test.js:
----
import http from 'k6/http';
import { check } from 'k6';

export let options = {
  stages: [
    { target: 200, duration: '30s' },
    { target: 0, duration: '30s' },
  ],
};

export default function () {
  const result = http.get('https://test-api.k6.io/public/crocodiles/');
  check(result, {
    'http response status code is 200': result.status === 200,
  });
}

Events: &amp;lt;none&amp;gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The config map contains the content of our test file, labelled as test.js. The operator will later search through our config map for this key, and use its content as the test script.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating our custom resource (CR)
&lt;/h3&gt;

&lt;p&gt;To communicate with the operator, we’ll use a custom resource called &lt;code&gt;K6&lt;/code&gt;. Custom resources behave just like native Kubernetes objects, while being fully customizable. In this case, the data of the custom resource contains all the information necessary for the k6 operator to be able to start a distributed load test:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample
spec:
  parallelism: 4
  script: crocodile-stress-test



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For Kubernetes to know what to do with this custom resource, we first need to specify what API Version we want to use to interpret its content, in this case &lt;code&gt;k6.io/v1alpha1&lt;/code&gt;. We’ll then set the kind to K6, and give our resource a name.&lt;/p&gt;

&lt;p&gt;As the specification for our custom resource, we now have the option to use a couple of different properties:&lt;/p&gt;

&lt;h4&gt;
  
  
  Parallelism
&lt;/h4&gt;

&lt;p&gt;Configures how many k6 test runner jobs the operator should spawn.&lt;/p&gt;

&lt;h4&gt;
  
  
  Script
&lt;/h4&gt;

&lt;p&gt;The name of the config map containing our &lt;code&gt;test.js&lt;/code&gt; file.&lt;/p&gt;

&lt;h4&gt;
  
  
  Separate
&lt;/h4&gt;

&lt;p&gt;Whether each k6 job should be scheduled on its own, separate node. The default value for this property is &lt;code&gt;false&lt;/code&gt;, allowing each node to run multiple jobs.&lt;/p&gt;

&lt;h4&gt;
  
  
  Arguments
&lt;/h4&gt;

&lt;p&gt;Allows you to pass arguments to each k6 job, just as you would from the CLI. For instance &lt;code&gt;--tag testId=crocodile-stress-test-1&lt;/code&gt;, &lt;code&gt;--out cloud&lt;/code&gt;, or &lt;code&gt;--no-connection-reuse&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploying our Custom Resource
&lt;/h3&gt;

&lt;p&gt;We will now deploy our custom resource using kubectl, and by that, start the test:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl apply -f /path/to/our/k6/custom-resource.yml

k6.k6.io/k6-sample created



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once we do this, the k6 operator will pick up the changes and start the execution of the test. This looks somewhat along the lines of what is shown in this diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="/blog/static/8c12a4c120f2f4feed3d7284df4be089/14945/pattern-k6.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fk6.io%2Fblog%2Fstatic%2F8c12a4c120f2f4feed3d7284df4be089%2F14945%2Fpattern-k6.png" title="k6 pattern" alt="k6 pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s make sure everything went as expected:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl get k6 

NAME AGE
k6-sample 2s

$ kubectl get jobs

NAME COMPLETIONS DURATION AGE
k6-sample-1 0/1 12s 12s
k6-sample-2 0/1 12s 12s
k6-sample-3 0/1 12s 12s
k6-sample-4 0/1 12s 12s

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
k6-sample-3-s7hdk 1/1 Running 0 20s
k6-sample-4-thnpw 1/1 Running 0 20s
k6-sample-2-f9bbj 1/1 Running 0 20s
k6-sample-1-f7ktl 1/1 Running 0 20s



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The pods have now been created and put in a paused state until the operator has made sure they’re all ready to execute the test. Once that’s the case, the operator deploys another job, &lt;code&gt;k6-sample-starter&lt;/code&gt;, which is responsible for making sure all our runners start execution at the same time.&lt;/p&gt;

&lt;p&gt;Let’s wait a couple of seconds and then list our pods again:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl get pods

NAME READY STATUS RESTARTS AGE
k6-sample-3-s7hdk 1/1 Running 0 76s
k6-sample-4-thnpw 1/1 Running 0 76s
k6-sample-2-f9bbj 1/1 Running 0 76s
k6-sample-1-f7ktl 1/1 Running 0 76s
k6-sample-starter-scw59 0/1 Completed 0 56s



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;All right! The starter has completed and our tests are hopefully running. To make sure, we may check the logs of one of the jobs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl logs k6-sample-1-f7ktl

[...]

Run [100%] paused
default [0%]

Run [100%] paused
default [0%]

running (0m00.7s), 02/50 VUs, 0 complete and 0 interrupted iterations
default [1%] 02/50 VUs 0m00.7s/1m00.0s

running (0m01.7s), 03/50 VUs, 13 complete and 0 interrupted iterations
default [3%] 03/50 VUs 0m01.7s/1m00.0s

running (0m02.7s), 05/50 VUs, 41 complete and 0 interrupted iterations
default [4%] 05/50 VUs 0m02.7s/1m00.0s

[...]



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And with that, our test is running! 🎉 After a couple of minutes, we’re now able to list the jobs again to verify they’ve all completed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl get jobs

NAME COMPLETIONS DURATION AGE
k6-sample-starter 1/1 8s 6m2s
k6-sample-3 1/1 96s 6m22s
k6-sample-2 1/1 96s 6m22s
k6-sample-1 1/1 97s 6m22s
k6-sample-4 1/1 97s 6m22s



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Cleaning up
&lt;/h3&gt;

&lt;p&gt;To clean up after a test run, we delete all resources using the same yaml file we used to deploy it:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl delete -f /path/to/our/k6/custom-resource.yml

k6.k6.io "k6-sample" deleted



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This deletes all the resources created by the operator as well, as shown below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl get jobs
No resources found in default namespace.

$ kubectl get pods
No resources found in default namespace.



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;h4&gt;
  
  
  ⚠️ Deleting the operator
&lt;/h4&gt;

&lt;p&gt;If you for some reason would like to delete the operator altogether, just run &lt;code&gt;make delete&lt;/code&gt; from the root of the project.&lt;/p&gt;

&lt;p&gt;The idea behind the operator, however, is that you let it remain in your cluster between test executions, only applying and deleting the actual K6 custom resources used to run the tests.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Things to consider
&lt;/h2&gt;

&lt;p&gt;While the operator makes running distributed load tests a lot easier, it still comes with a couple of drawbacks or gotchas that you need to be aware of and plan for. For instance, the lack of metric aggregation.&lt;/p&gt;

&lt;p&gt;We’ll go through in detail how to set up the monitoring and visualisation of these test runs in a future article, but for now, here’s a list of things you might want to consider:&lt;/p&gt;

&lt;h3&gt;
  
  
  Metrics will not be automatically aggregated
&lt;/h3&gt;

&lt;p&gt;Metrics generated by running distributed k6 tests using the operator won’t be aggregated, which means that each test runner will produce its own results and end-of-test summary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To be able to aggregate your metrics and analyse them together, you’ll either need to:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1) Set up some kind of monitoring or visualisation software and configure your K6 custom resource to make your jobs output there (sketched below).&lt;/p&gt;

&lt;p&gt;2) Use &lt;a href="https://github.com/elastic/logstash" rel="noopener noreferrer"&gt;logstash&lt;/a&gt;, &lt;a href="https://github.com/fluent/fluentd" rel="noopener noreferrer"&gt;fluentd&lt;/a&gt;, splunk, or similar to parse and aggregate the logs yourself.&lt;/p&gt;
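
&lt;p&gt;For the first option, one way to do it is to pass an output flag to the runners through the &lt;code&gt;arguments&lt;/code&gt; property of the custom resource. The InfluxDB endpoint below is purely hypothetical; any output backend supported by k6 would work the same way.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample
spec:
  parallelism: 4
  script: crocodile-stress-test
  # Hypothetical endpoint - point this at whatever metrics backend you actually use
  arguments: --out influxdb=http://influxdb.monitoring:8086/k6

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;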

&lt;h3&gt;
  
  
  Thresholds are not evaluated across jobs at runtime
&lt;/h3&gt;

&lt;p&gt;As the metrics are not aggregated at runtime, your thresholds won’t be evaluated using aggregation either. Currently, the best way to solve this is by setting up alerts for crossed thresholds in your monitoring or visualisation software instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  Overpopulated nodes might create bottlenecks
&lt;/h3&gt;

&lt;p&gt;You want to make sure your k6 jobs have enough CPU and memory resources to actually perform your test. Using parallelism alone might not be sufficient. If you run into this issue, experiment with using the &lt;code&gt;separate&lt;/code&gt; property, as sketched below.&lt;/p&gt;
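
&lt;p&gt;A sketch of what that could look like, reusing the spec from the custom resource we created earlier:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  parallelism: 4
  script: crocodile-stress-test
  # Schedule each k6 job on its own node (the default is false)
  separate: true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;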

&lt;h3&gt;
  
  
  Experimental
&lt;/h3&gt;

&lt;p&gt;As mentioned in the beginning of the article, the operator &lt;em&gt;is&lt;/em&gt; experimental, and as such it might change a lot from commit to commit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Total cost of ownership
&lt;/h3&gt;

&lt;p&gt;The k6 operator significantly simplifies the process of running distributed load tests in your own cluster. However, there still is a maintenance burden associated with self-hosting. If you'd rather skip that, as well as the other drawbacks listed above, and instead get straight to load testing, you might want to have a look at the &lt;a href="https://k6.io/cloud" rel="noopener noreferrer"&gt;k6 cloud offering&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  See also
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/k6io/operator" rel="noopener noreferrer"&gt;The k6 operator project on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h4&gt;
  
  
  🙏🏼 Thank you for reading!
&lt;/h4&gt;

&lt;p&gt;If you enjoyed this article and would like to read others like it in the future, it would definitely make us happy campers if you hit the ❤️ or 🦄 buttons.&lt;/p&gt;

&lt;p&gt;To not miss out on any of our future content, make sure to press the follow button.&lt;/p&gt;

&lt;p&gt;Want to get in touch with us? Hit us up either in the comments below or &lt;a href="https://twitter.com/k6_io" rel="noopener noreferrer"&gt;on Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>performance</category>
      <category>cloud</category>
      <category>testing</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Performance testing gRPC services</title>
      <dc:creator>Simme</dc:creator>
      <pubDate>Thu, 12 Nov 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/k6/performance-testing-grpc-services-93c</link>
      <guid>https://dev.to/k6/performance-testing-grpc-services-93c</guid>
      <description>&lt;blockquote&gt;
&lt;h3&gt;
  
  
  🎉 New in v0.29.0
&lt;/h3&gt;

&lt;p&gt;v0.29.0 contained a lot of interesting features. Have a look at the &lt;a href="https://github.com/loadimpact/k6/releases/tag/v0.29.0"&gt;release notes&lt;/a&gt; for details!&lt;/p&gt;
&lt;/blockquote&gt;



&lt;blockquote&gt;
&lt;h3&gt;
  
  
  📖What you will learn
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;What gRPC is.&lt;/li&gt;
&lt;li&gt;How gRPC differs from JSON-based REST.&lt;/li&gt;
&lt;li&gt;Creating and executing your first gRPC performance test using k6.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Outline
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;What is gRPC&lt;/li&gt;
&lt;li&gt;API Types&lt;/li&gt;
&lt;li&gt;The proto definition&lt;/li&gt;
&lt;li&gt;Getting started&lt;/li&gt;
&lt;li&gt;Summary&lt;/li&gt;
&lt;li&gt;See also&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is gRPC
&lt;/h2&gt;

&lt;p&gt;gRPC is a lightweight, open-source RPC framework. It was originally developed by Google, with 1.0 being released in August 2016. Since then, it's gained a lot of attention as well as wide adoption.&lt;/p&gt;

&lt;p&gt;In comparison to JSON, which is transmitted as human-readable text, gRPC is binary, making it both faster to transmit and more compact. In the benchmarks we've seen between gRPC and JSON-based REST, gRPC has proved to be a lot faster than its more traditional counterpart.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://auth0.com/blog/beating-json-performance-with-protobuf/"&gt;A benchmark by Auth0&lt;/a&gt; reported up to 6 times higher performance, while other benchmarks, like&lt;a href="https://dev.to/plutov/benchmarking-grpc-and-rest-in-go-565"&gt;this one by Alex Pliutau&lt;/a&gt; or&lt;a href="https://medium.com/@EmperorRXF/evaluating-performance-of-rest-vs-grpc-1b8bdf0b22da"&gt;this one by Ruwan Fernando&lt;/a&gt;, suggests improvements of up to 10 times.&lt;/p&gt;

&lt;p&gt;For chatty, distributed systems, these improvements accumulate quickly, making the difference not only noticeable in benchmarks, but also by the end-user.&lt;/p&gt;

&lt;h2&gt;
  
  
  API types
&lt;/h2&gt;

&lt;p&gt;gRPC supports four different types of RPCs: unary, server streaming, client streaming, and bi-directional streaming. In reality, the messages are multiplexed over the same connection, but in the spirit of keeping things simple and approachable, this is not illustrated in the gRPC service model diagrams below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unary
&lt;/h3&gt;

&lt;p&gt;Unary calls work the same way as a regular function call: a single request is sent to the server which in turn replies with a single response.&lt;/p&gt;

&lt;p&gt;&lt;a href="///blog/static/c48fc5ca8336d9cd8b98618c4b9d86ec/7842b/grpc-unary.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uC2nSXnm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/c48fc5ca8336d9cd8b98618c4b9d86ec/7842b/grpc-unary.png" alt="unary call" title="unary call"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Server Streaming
&lt;/h3&gt;

&lt;p&gt;In server streaming mode, the client sends a single request to the server, which in turn replies with multiple responses.&lt;/p&gt;

&lt;p&gt;&lt;a href="///blog/static/fbc08c79e035da81aca99e4bc220f95e/7842b/grpc-server.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CMzS0hiF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/fbc08c79e035da81aca99e4bc220f95e/7842b/grpc-server.png" alt="server streaming" title="server streaming"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Client Streaming
&lt;/h3&gt;

&lt;p&gt;The client streaming mode is the opposite of the server streaming mode. The client sends multiple requests to the server, which in turn replies with a single response.&lt;/p&gt;

&lt;p&gt;&lt;a href="///blog/static/ebdd5a0dedb7bdae80e59ceba69c6b75/7842b/grpc-client.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IVqETxsj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/ebdd5a0dedb7bdae80e59ceba69c6b75/7842b/grpc-client.png" alt="client streaming" title="client streaming"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Bi-directional streaming
&lt;/h3&gt;

&lt;p&gt;In bi-directional streaming mode, both the client and the server may send multiple messages.&lt;/p&gt;

&lt;p&gt;&lt;a href="///blog/static/094a57810030c8df50d05a8c87a1dee1/7842b/grpc-bidirectional.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xI0n0guP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/094a57810030c8df50d05a8c87a1dee1/7842b/grpc-bidirectional.png" alt="bi-directional streaming" title="bi-directional streaming"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The &lt;code&gt;.proto&lt;/code&gt; definition
&lt;/h2&gt;

&lt;p&gt;The messages and services used for gRPC are described in &lt;code&gt;.proto&lt;/code&gt; files, containing &lt;a href="https://en.wikipedia.org/wiki/Protocol_Buffers"&gt;Protocol buffers&lt;/a&gt;, or protobuf, definitions.&lt;/p&gt;

&lt;p&gt;The definition file is then used to generate code which can be used by both senders and receivers as a contract for communicating through these messages and services. As the binary format used by gRPC lacks any self-describing properties, this is the only way for senders and receivers to know how to interpret the messages.&lt;/p&gt;

&lt;p&gt;Throughout this article, we'll use the &lt;code&gt;hello.proto&lt;/code&gt; definition available for download on the &lt;a href="https://grpcbin.test.k6.io/"&gt;k6 grpcbin website&lt;/a&gt;. For details on how to build your own gRPC proto definition, see &lt;a href="https://grpc.io/docs/what-is-grpc/core-concepts/"&gt;this excellent article from the official gRPC docs&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight protobuf"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ./definitions/hello.proto&lt;/span&gt;

&lt;span class="c1"&gt;// based on https://grpc.io/docs/guides/concepts.html&lt;/span&gt;

&lt;span class="na"&gt;syntax&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"proto2"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nn"&gt;hello&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;service&lt;/span&gt; &lt;span class="n"&gt;HelloService&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;rpc&lt;/span&gt; &lt;span class="n"&gt;SayHello&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;HelloRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;returns&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;HelloResponse&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;rpc&lt;/span&gt; &lt;span class="n"&gt;LotsOfReplies&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;HelloRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;returns&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stream&lt;/span&gt; &lt;span class="n"&gt;HelloResponse&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;rpc&lt;/span&gt; &lt;span class="n"&gt;LotsOfGreetings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stream&lt;/span&gt; &lt;span class="n"&gt;HelloRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;returns&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;HelloResponse&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;rpc&lt;/span&gt; &lt;span class="n"&gt;BidiHello&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stream&lt;/span&gt; &lt;span class="n"&gt;HelloRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;returns&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stream&lt;/span&gt; &lt;span class="n"&gt;HelloResponse&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;message&lt;/span&gt; &lt;span class="nc"&gt;HelloRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;optional&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="na"&gt;greeting&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;message&lt;/span&gt; &lt;span class="nc"&gt;HelloResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;required&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="na"&gt;reply&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;With k6 v0.29.0, we're happy to introduce a native client for gRPC communication. In this early release, we've settled for providing a solid experience for unary calls. If any of the other modes would be particularly useful for you, we'd love to hear about your use case so we can move it up our backlog.&lt;/p&gt;

&lt;p&gt;The current API for working with gRPC in k6 using the native client is as follows:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://k6.io/blog/javascript-api/k6-net-grpc/client/client-load-importpaths----protofiles"&gt;Client.load(importPaths, ...protoFiles)&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Loads and parses the given protocol buffer definitions to be made available for RPC requests.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://k6.io/blog/javascript-api/k6-net-grpc/client/client-connect-address-params"&gt;Client.connect(address [,params])&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Opens a connection to the given gRPC server.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://k6.io/blog/javascript-api/k6-net-grpc/client/client-invoke-url-request-params"&gt;Client.invoke(url, request [,params])&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Makes a unary RPC for the given service/method and returns a &lt;a href="https://k6.io/blog/javascript-api/k6-net-grpc/response"&gt;Response&lt;/a&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://k6.io/blog/javascript-api/k6-net-grpc/client/client-close"&gt;Client.close()&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Closes the connection to the gRPC service.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Creating the test
&lt;/h3&gt;

&lt;p&gt;The gRPC module is a separate package, available from your test script as &lt;code&gt;k6/net/grpc&lt;/code&gt;. Before we can use it, we first have to create an instance of the client. Instantiating the client, as well as the &lt;code&gt;.load&lt;/code&gt; operation, is only available during test initialization, i.e. directly in the global scope.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;grpc&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;k6/net/grpc&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;grpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we'll load a &lt;code&gt;.proto&lt;/code&gt; definition applicable for the system under test. For the purpose of this article, we'll use &lt;a href="https://grpcbin.test.k6.io/"&gt;k6 grpcbin&lt;/a&gt;. Feel free to change this to whatever you please, but keep in mind that you will also need an appropriate &lt;code&gt;.proto&lt;/code&gt; definition for the server you're testing. The &lt;code&gt;.load()&lt;/code&gt; function takes an array of paths to search for proto files as its first argument, followed by the names of the proto files to load.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;grpc&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;k6/net/grpc&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;grpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;load&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;definitions&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hello.proto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once that is done, we'll go ahead and write our actual test.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;grpc&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;k6/net/grpc&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;grpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;load&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;definitions&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hello.proto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;grpcbin.test.k6.io:9001&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// plaintext: false&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;greeting&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Bert&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hello.HelloService/SayHello&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nx"&gt;check&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;status is OK&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;grpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;StatusOK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

  &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So let's walk through this script to make sure we understand what's happening. First, we use the &lt;code&gt;.connect()&lt;/code&gt; function to connect to our system under test. By default, the client sets &lt;code&gt;plaintext&lt;/code&gt; to false, allowing only encrypted connections. If you, for any reason, need to connect to a server that lacks SSL/TLS, just flip this setting to &lt;code&gt;true&lt;/code&gt;.&lt;/p&gt;
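
&lt;p&gt;As a minimal sketch of that setting (the address below is just a hypothetical placeholder for an insecure test server of your own), connecting without TLS could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Sketch: connect to a server that lacks SSL/TLS by flipping plaintext to true.
// 'my-insecure-host:9000' is a hypothetical address; replace it with your own server.
client.connect('my-insecure-host:9000', {
  plaintext: true,
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;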

&lt;p&gt;We then continue by creating the object we want to send to the remote procedure we're invoking. In the case of &lt;code&gt;SayHello&lt;/code&gt;, it allows us to specify who the greeting should address using the &lt;code&gt;greeting&lt;/code&gt; parameter.&lt;/p&gt;

&lt;p&gt;Next, we invoke the remote procedure, using the syntax &lt;code&gt;&amp;lt;package&amp;gt;.&amp;lt;service&amp;gt;/&amp;lt;procedure&amp;gt;&lt;/code&gt;, as described in our proto file. This call is made synchronously, with a default timeout of 60000 ms (60 seconds). To change the timeout, add the key &lt;code&gt;timeout&lt;/code&gt; to the config object of &lt;code&gt;.connect()&lt;/code&gt; with the duration as the value, for instance &lt;code&gt;'2s'&lt;/code&gt; for 2 seconds.&lt;/p&gt;
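
&lt;p&gt;As a small sketch of the approach just described, lowering the timeout and invoking the procedure could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Sketch: lower the timeout from the default 60 seconds to 2 seconds
// by adding the 'timeout' key to the .connect() config object, as described above.
client.connect('grpcbin.test.k6.io:9001', {
  timeout: '2s',
});

// The procedure is still addressed as &amp;lt;package&amp;gt;.&amp;lt;service&amp;gt;/&amp;lt;procedure&amp;gt;.
const response = client.invoke('hello.HelloService/SayHello', { greeting: 'Bert' });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;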

&lt;p&gt;Once we've received a response from the server, we make sure the procedure executed successfully. The grpc module includes constants for this comparison, which are listed &lt;a href="https://k6.io/docs/javascript-api/k6-net-grpc/constants"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Comparing the response status with &lt;code&gt;grpc.StatusOK&lt;/code&gt;, the gRPC equivalent of an HTTP/1.1 &lt;code&gt;200 OK&lt;/code&gt;, ensures the call completed successfully.&lt;/p&gt;

&lt;p&gt;We'll then log the message in the response, close the client connection, and sleep for a second.&lt;/p&gt;

&lt;h3&gt;
  
  
  Executing the test
&lt;/h3&gt;

&lt;p&gt;The test can be executed just like any other test, although you need to make sure you're on at least version &lt;code&gt;v0.29.0&lt;/code&gt; to have access to the gRPC module. To check this, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;k6 version
k6 v0.29.0 &lt;span class="o"&gt;((&lt;/span&gt;devel&lt;span class="o"&gt;)&lt;/span&gt;, go1.15.3, darwin/amd64&lt;span class="o"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Anything less than &lt;code&gt;v0.29.0&lt;/code&gt; here will require you to first update k6. Instructions on how to do that can be found &lt;a href="https://k6.io/docs/getting-started/installation"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once that's out of the way, let's run our test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ k6 run test.js

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: /Users/simme/code/grpc/test.js
     output: -

  scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
           * default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)

INFO[0000] {"reply":"hello Bert"} source=console

running (00m01.4s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs 00m01.4s/10m0s 1/1 iters, 1 per VU

    ✓ status is OK

    checks...............: 100.00% ✓ 1 ✗ 0
    data_received........: 3.0 kB 2.1 kB/s
    data_sent............: 731 B 522 B/s
    grpc_req_duration....: avg=48.44ms min=48.44ms med=48.44ms max=48.44ms p(90)=48.44ms p(95)=48.44ms
    iteration_duration...: avg=1.37s min=1.37s med=1.37s max=1.37s p(90)=1.37s p(95)=1.37s
    iterations...........: 1 0.714536/s
    vus..................: 1 min=1 max=1
    vus_max..............: 1 min=1 max=1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the output, we can now tell that our script is working and that the server indeed responds with a greeting addressed to who, or what, we supplied in our request body. We can also see that our &lt;code&gt;check&lt;/code&gt; was successful, meaning the server responded with the &lt;code&gt;OK&lt;/code&gt; status.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this article, we've gone through some of the fundamentals of gRPC and how it works. We've also had a look at the gRPC client introduced in k6 v0.29.0. Last, but not least, we've created a working test script demonstrating this functionality.&lt;/p&gt;

&lt;p&gt;And that concludes this gRPC load testing tutorial. Thank you for reading!&lt;/p&gt;

&lt;h2&gt;
  
  
  See Also
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://k6.io/docs/javascript-api/k6-net-grpc"&gt;k6 gRPC Module API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://grpcbin.test.k6.io/"&gt;k6 gRPCBin: A simple request/response service for gRPC&lt;/a&gt;, similar to &lt;a href="https://httpbin.test.k6.io/"&gt;k6 httpbin&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://grpc.io/"&gt;The official website of the gRPC project&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/grpc/grpc/tree/master/examples"&gt;Official examples from the gRPC repo on GitHub&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>performance</category>
      <category>testing</category>
      <category>grpc</category>
      <category>api</category>
    </item>
    <item>
      <title>Setting up a Kubernetes lab cluster</title>
      <dc:creator>Simme</dc:creator>
      <pubDate>Thu, 17 Sep 2020 20:07:28 +0000</pubDate>
      <link>https://dev.to/simme/setting-up-a-kubernetes-lab-cluster-5ep5</link>
      <guid>https://dev.to/simme/setting-up-a-kubernetes-lab-cluster-5ep5</guid>
      <description>&lt;p&gt;&lt;em&gt;This article assumes some experience with containers in general, using for instance docker, as well as a basic understanding of what Kubernetes is and what kind of problems it aims to solve.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;k3s&lt;/code&gt; is a lightweight Kubernetes distribution. It’s been validated by the CNCF as a true Kubernetes distribution, making it a viable alternative to running the full, upstream Kubernetes project, especially for labs or dev environments.&lt;/p&gt;

&lt;p&gt;So what are the perks of running k3s over k8s? Well, k8s is designed with very high availability and scalability in mind.&lt;/p&gt;

&lt;p&gt;After all, it’s built by Google to support workloads at their scale of operation. If you read some of the best practices on how to design your clusters, you’ll soon realize that it takes &lt;strong&gt;a lot&lt;/strong&gt; of machines just to get started.&lt;/p&gt;

&lt;p&gt;Some common best practices include&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;separating the worker nodes from the control plane leaders&lt;/li&gt;
&lt;li&gt;scaling out etcd to multiple nodes - or even its own cluster, as well as&lt;/li&gt;
&lt;li&gt;separating ingress nodes from regular worker nodes to make sure they stay snappy for incoming requests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While all of this is sound advice, setting up such an architecture could easily have you end up with 2-3 leader instances, 2-3 etcd servers, and 1-2 ingress servers. That’s 5 servers in the best case, but more likely 6 or 7. This might feel like overkill for a lab or dev environment, and it’s certainly not feasible for most local labs.&lt;/p&gt;

&lt;p&gt;This is where k3s really shines!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It consists of one binary of less than 50MB!&lt;/li&gt;
&lt;li&gt;As it is a CNCF-certified distribution, it is fully compliant with the complete, upstream k8s.&lt;/li&gt;
&lt;li&gt;Instead of using &lt;code&gt;etcd&lt;/code&gt;, it uses an embedded &lt;code&gt;SQLite&lt;/code&gt; database, which is fully sufficient for most local or lab use cases, although not suitable for HA environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Creating our virtual machines
&lt;/h2&gt;

&lt;p&gt;As I’m running on macOS, the first thing we need to do is to create a Linux VM capable of running k3s. This can be done easily using &lt;a href="https://multipass.run/"&gt;multipass&lt;/a&gt;, a command-line tool for fast and simple orchestration of Ubuntu VMs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lx1gPHYM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://simme.dev/images/multi-pass.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lx1gPHYM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://simme.dev/images/multi-pass.gif" alt="multipass" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s create a leader and two nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ multipass launch \
    --name k3s \
    --cpus 4 \
    --mem 4g \
    --disk 20g

Launched: k3s

$ multipass launch \
    --name k3s-node1 \
    --cpus 1 \
    --mem 1024M \
    --disk 3G

Launched: k3s-node1

$ multipass launch \
    --name k3s-node2 \
    --cpus 1 \
    --mem 1024M \
    --disk 3G

Launched: k3s-node2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Installing k3s
&lt;/h3&gt;

&lt;p&gt;Installation is simple, and if we were installing k3s directly on our machine, we’d get away with just piping the install script to &lt;code&gt;sh&lt;/code&gt; like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -sfL https://get.k3s.io | sh -

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, as we’re going to be running k3s in multiple virtual machines using multipass, we need to take a somewhat different approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ multipass exec k3s -- bash -c \
    "curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -"

[INFO] Finding release for channel stable
[INFO] Using v1.18.8+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.8+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.8+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s

$ export K3S_LEAD_URL="https://$(multipass info k3s | grep "IPv4" | awk -F' ' '{print $2}'):6443"
$ export \
    K3S_TOKEN="$(multipass exec k3s -- /bin/bash -c "sudo cat /var/lib/rancher/k3s/server/node-token")"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that, our leader is created and the leader URL as well as the node token has been exported to variables. Let’s set up our nodes and add them to the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ multipass exec \
    k3s-node1 \
    -- /bin/bash -c \
    "curl -sfL https://get.k3s.io | K3S_TOKEN=${K3S_TOKEN} K3S_URL=${K3S_LEAD_URL} sh -"

[INFO] Finding release for channel stable
[INFO] Using v1.18.8+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.8+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.8+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service
  → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent


$ multipass exec \
    k3s-node2 \
    -- /bin/bash -c \
    "curl -sfL https://get.k3s.io | K3S_TOKEN=${K3S_TOKEN} K3S_URL=${K3S_LEAD_URL} sh -"

[INFO] Finding release for channel stable
[INFO] Using v1.18.8+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.8+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.8+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service
  → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As the last step, we need to copy the &lt;code&gt;kubeconfig&lt;/code&gt; file created during setup to our local machine so that we’ll be able to access the cluster from our local machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ multipass exec k3s \
    -- /bin/bash -c \
    "sudo cat /etc/rancher/k3s/k3s.yaml" \
    &amp;gt; k3s.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The kubeconfig file will contain a lot of references to localhost, so we also need to switch those to our leader IP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sed -i '' "s/127.0.0.1/$(multipass info k3s | grep IPv4 | awk '{print $2}')/" k3s.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s install kubectl locally and move our config file to the right location. If you’re not using macOS, you will need to switch this to the appropriate command for your platform.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ brew install kubectl
$ cp k3s.yaml ~/.kube/config

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s try a couple of commands to make sure everything works.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes

NAME        STATUS   ROLES    AGE   VERSION
k3s-node2   Ready    &amp;lt;none&amp;gt;   18m   v1.18.8+k3s1
k3s         Ready    master   28m   v1.18.8+k3s1
k3s-node1   Ready    &amp;lt;none&amp;gt;   26m   v1.18.8+k3s1

$ kubectl get pods --all-namespaces

NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   metrics-server-7566d596c8-2n65k          1/1     Running     0          29m
kube-system   local-path-provisioner-6d59f47c7-z4625   1/1     Running     0          29m
kube-system   helm-install-traefik-j26f7               0/1     Completed   0          29m
kube-system   coredns-7944c66d8d-gc2r4                 1/1     Running     0          29m
kube-system   svclb-traefik-f7lrp                      2/2     Running     0          28m
kube-system   traefik-758cd5fc85-9zmpq                 1/1     Running     0          28m
kube-system   svclb-traefik-rrpz4                      2/2     Running     0          27m
kube-system   svclb-traefik-cg6s8                      2/2     Running     0          18m

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your output looks something like this, then awesome! Our lab environment is now ready, and we should be able to use it just as we would any Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we’ve gone through how to deploy a local Kubernetes lab cluster on our local computer using the incredibly lightweight k3s. We’ve also added two nodes to our new cluster.&lt;/p&gt;

&lt;p&gt;Next time, we’ll explore how to do a deployment, how to set up &lt;code&gt;metallb&lt;/code&gt; for external IP provisioning, as well as how to use load balancing services to allow access to our deployed pods.&lt;/p&gt;




&lt;h2&gt;
  
  
  Thank you for reading! 🙏🏼
&lt;/h2&gt;

&lt;p&gt;If you liked this article and would like to read more like it, make sure to press &lt;code&gt;Follow&lt;/code&gt; as well as the 💜 button.&lt;/p&gt;

</description>
      <category>k3s</category>
      <category>kubernetes</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>In response to COVID-19, the k6 team now offers free access to k6 Cloud to non-profits fighting the pandemic</title>
      <dc:creator>Simme</dc:creator>
      <pubDate>Tue, 31 Mar 2020 15:30:00 +0000</pubDate>
      <link>https://dev.to/k6/in-response-to-covid-19-the-k6-team-now-offers-free-access-to-k6-cloud-to-non-profits-fighting-the-pandemic-39gn</link>
      <guid>https://dev.to/k6/in-response-to-covid-19-the-k6-team-now-offers-free-access-to-k6-cloud-to-non-profits-fighting-the-pandemic-39gn</guid>
      <description>&lt;p&gt;The novel Coronavirus is continuing to spread around the world and has now reached most corners of the globe with devastating health, social and economic effects as a consequence. We hope that you and your loved ones are safe and in good health.&lt;/p&gt;

&lt;p&gt;The pandemic will come to an end, and when it does all of us will have been affected. Many will lose a loved one. Many will lose a job or see a business go under. It is our collective actions now that will determine how many. It is a time to show solidarity, as we fight together against Covid-19.&lt;/p&gt;

&lt;p&gt;As a globally distributed team, we see the effects this pandemic is having on all of our countries, so as passionate technologists we want to offer a &lt;strong&gt;free subscription to our k6 Cloud load testing service to all non-profit organizations and projects&lt;/strong&gt; working on apps or websites to help their communities, countries and the world to get through this pandemic.&lt;/p&gt;

&lt;p&gt;If you are working for an organization or project around Covid-19, or know someone who is, reach out to our &lt;a href="https://k6.io/contact"&gt;support team&lt;/a&gt; (or &lt;a href="mailto:support@k6.io"&gt;support@k6.io&lt;/a&gt;) with details of the project, and we’ll help get a subscription set up equivalent to our &lt;a href="https://k6.io/pricing"&gt;Pro plan&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As a business, we’re fortunate to be in an industry that can continue to operate (mostly) as usual. Most of our team is normally working remotely, and all of us have been remote for the past couple of weeks, to be on the safe side.&lt;/p&gt;

&lt;p&gt;Stay safe.&lt;/p&gt;

&lt;p&gt;The k6 Team&lt;/p&gt;

</description>
      <category>covid19</category>
      <category>webdev</category>
      <category>testing</category>
      <category>news</category>
    </item>
    <item>
      <title>Performance Testing with Generated Data using k6 and Faker</title>
      <dc:creator>Simme</dc:creator>
      <pubDate>Tue, 17 Mar 2020 15:14:17 +0000</pubDate>
      <link>https://dev.to/k6/performance-testing-with-generated-data-using-k6-and-faker-2e</link>
      <guid>https://dev.to/k6/performance-testing-with-generated-data-using-k6-and-faker-2e</guid>
      <description>&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A lot of the time, while performance testing, it might not be a huge issue if the data you submit as part of your tests only varies slightly. In some cases, however, you might find yourself in a position where you'd like to keep not only the user interactions but also the data as realistic as possible. How do we accomplish this without having to maintain long data tables? In this article, we'll explore how we can use fakerjs and k6 to perform load tests using realistic generated data.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is k6?
&lt;/h3&gt;

&lt;p&gt;k6 is an open-source performance testing tool written and maintained by the team at &lt;a href="https://k6.io/" rel="noopener noreferrer"&gt;k6&lt;/a&gt;. One of the main goals of the project is to provide users with a developer-centered, code-first approach to performance testing. &lt;/p&gt;

&lt;blockquote&gt;
&lt;h3&gt;
  
  
  🤓 Completely new to k6?
&lt;/h3&gt;

&lt;p&gt;Then it might be a good idea to start out with this &lt;a href="https://dev.to/mostafa/beginner-s-guide-to-load-testing-with-k6-1od2"&gt;Beginners guide to k6&lt;/a&gt;, written by &lt;a href="https://dev.to/mostafa"&gt;Mostafa Moradian&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  What is Faker?
&lt;/h3&gt;

&lt;p&gt;Faker is a tool used for generating realistic data. It's available for a lot of different languages - &lt;a href="https://github.com/joke2k/faker" rel="noopener noreferrer"&gt;python&lt;/a&gt;, &lt;a href="https://github.com/faker-ruby/faker" rel="noopener noreferrer"&gt;ruby&lt;/a&gt;, &lt;a href="https://github.com/fzaninotto/Faker" rel="noopener noreferrer"&gt;php&lt;/a&gt; and &lt;a href="https://github.com/DiUS/java-faker" rel="noopener noreferrer"&gt;java&lt;/a&gt; to name a few.&lt;/p&gt;

&lt;p&gt;In this particular case, we'll use the JavaScript implementation, fakerjs, as it allows us to generate the data from within our test script, rather than before execution.&lt;/p&gt;
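
&lt;p&gt;As a small illustration of what that generated data can look like (just a sketch; the bundling setup that makes the import work in k6 is covered further down), generating a realistic subscriber might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Illustration only: generating realistic-looking data with fakerjs.
// The webpack/babel bundling described later is what makes this import usable from k6.
import faker from 'faker';

const subscriber = {
  name: faker.name.findName(),   // e.g. "Jane Doe"
  email: faker.internet.email(), // e.g. "Jane.Doe@example.com"
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;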

&lt;h2&gt;
  
  
  Goals
&lt;/h2&gt;

&lt;p&gt;Historically, performance testing has to a large extent been performed by running your test and then manually analyzing the results to spot performance degradations or deviations. k6 uses a different approach, utilizing goal-oriented performance thresholds to create pass/fail tollgates. Let's formulate a scenario (or use case, if you prefer) for this test and what it tries to measure.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Acme Corp Scenario
&lt;/h3&gt;

&lt;p&gt;Acme Corp is about to release a submission form, allowing users to sign up for their newsletter. As they plan to release this form during Black Friday, they want to make sure that it can withstand the pressure of a lot of simultaneous registrations. After all, they are a company in the business of making everything, so they expect a surge of traffic Friday morning.&lt;/p&gt;

&lt;h3&gt;
  
  
  Our test goals
&lt;/h3&gt;

&lt;p&gt;While we could very well set up complex custom thresholds, it's usually more than enough to stick with the basics. In this case, we'll measure the number of requests where we don't receive an HTTP OK (200) status code in the response, as well as the total duration of each request. &lt;/p&gt;

&lt;p&gt;We'll also perform the test with 300 virtual users, which will all perform these requests simultaneously.&lt;/p&gt;

&lt;h4&gt;
  
  
  Configuration
&lt;/h4&gt;

&lt;p&gt;In k6, we express this as:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;


&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;formFailRate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Rate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form fetches&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;submitFailRate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Rate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form submits&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="na"&gt;vus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;thresholds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form submits&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rate&amp;lt;0.1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form fetches&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rate&amp;lt;0.1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http_req_duration&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;p(95)&amp;lt;400&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  What does this mean?
&lt;/h4&gt;

&lt;p&gt;So, let's go through what we've done here. With 300 virtual users trying to fetch and submit the subscription form every second, we've set up the following performance goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Less than 10% are allowed to fail in retrieving the form&lt;/li&gt;
&lt;li&gt;Less than 10% are allowed to fail in submitting the form data&lt;/li&gt;
&lt;li&gt;Only 5% or less are permitted to have a request duration longer than 400ms&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  The actual test
&lt;/h2&gt;

&lt;p&gt;Now, let's get on to the actual test code. The test code, which is executed by each VU once for each iteration, is put inside an anonymous function. We then expose this function as a default export.&lt;/p&gt;
&lt;h3&gt;
  
  
  The sleep test 😴
&lt;/h3&gt;

&lt;p&gt;To make sure our environment is working, I usually start by setting up a test that does nothing except sleep for a second, and executing it once.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;sleep&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;k6&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Which, when run, produces output similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmck8bbbkeyhe659a7k4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmck8bbbkeyhe659a7k4i.png" alt="Running a k6 script with just sleep"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding our thresholds
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;sleep&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;k6&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Rate&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;k6/metrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;formFailRate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Rate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form fetches&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;submitFailRate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Rate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form submits&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// ...&lt;/span&gt;
  &lt;span class="na"&gt;vus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;10s&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;thresholds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form submits&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rate&amp;lt;0.1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form fetches&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rate&amp;lt;0.1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http_req_duration&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;p(95)&amp;lt;400&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;formFailRate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;submitFailRate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice the two new lines in the default function? For each iteration, we're now adding data points to our &lt;a href="https://k6.io/docs/using-k6/thresholds" rel="noopener noreferrer"&gt;threshold&lt;/a&gt; metrics, telling them that our requests did not fail. We'll hook these up to do something meaningful as we proceed. We also added a duration to make the script run for more than one iteration.&lt;/p&gt;

&lt;p&gt;For now, running the script should give you the following output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftofe0dhv86f4pcfesjnk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftofe0dhv86f4pcfesjnk.png" alt="Running a k6 script with sleep and rates"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yay, it passes! Two green checks!&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding requests
&lt;/h3&gt;

&lt;p&gt;To be able to measure anything useful, we also need to add some actual requests. In this example, we'll use &lt;a href="https://httpbin.test.loadimpact.com/" rel="noopener noreferrer"&gt;https://httpbin.test.loadimpact.com/&lt;/a&gt; as our API, which is our mirror of the popular tool &lt;a href="https://httpbin.org/" rel="noopener noreferrer"&gt;HTTPBin&lt;/a&gt;. Feel free to use whatever HTTP Request sink you prefer!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;sleep&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;k6&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Rate&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;k6/metrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;k6/http&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;baseUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://httpbin.test.loadimpact.com/anything&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;urls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;form&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;baseUrl&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/form`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;baseUrl&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/form/subscribe`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;


&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;formFailRate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Rate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form fetches&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;submitFailRate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Rate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form submits&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;vus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;10s&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;thresholds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form submits&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rate&amp;lt;0.1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form fetches&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rate&amp;lt;0.1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http_req_duration&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;p(95)&amp;lt;400&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getForm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;formResult&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;urls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;form&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;formFailRate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;formResult&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;submitForm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;submitResult&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;urls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{});&lt;/span&gt;
  &lt;span class="nx"&gt;submitFailRate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;submitResult&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;getForm&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nf"&gt;submitForm&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And once again:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzhtf4sdrgpqzrnxxq569.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzhtf4sdrgpqzrnxxq569.png" alt="Running k6 with requests"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The output now also includes metrics around our HTTP requests, as well as a little green check next to the duration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding Bundling and Transpiling
&lt;/h3&gt;

&lt;p&gt;Now that we've got our script to work, it's almost time to add faker. Before we do that, we need to make sure that k6 can use the faker library.&lt;/p&gt;

&lt;p&gt;As k6 does not run in a Node.js environment, but rather in a goja VM, it needs a little help. Thankfully, it's not that complex. We'll use webpack and babel to achieve this, but any bundler compatible with babel would likely work.&lt;/p&gt;

&lt;p&gt;Let's start by initializing an npm package and add all the dependencies we'll need:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;yarn init &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; yarn add &lt;span class="se"&gt;\&lt;/span&gt;
    @babel/core &lt;span class="se"&gt;\&lt;/span&gt;
    @babel/preset-env &lt;span class="se"&gt;\&lt;/span&gt;
    babel-loader &lt;span class="se"&gt;\&lt;/span&gt;
    core-js &lt;span class="se"&gt;\&lt;/span&gt;
    webpack &lt;span class="se"&gt;\&lt;/span&gt;
    webpack-cli


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We'll then create our webpack config. The details of webpack and babel are outside the scope of this article, but there are plenty of great resources out there on how it works.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="c1"&gt;// webpack.config.js&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;production&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./src/index.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;__dirname&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/dist&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;test.[name].js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;libraryTarget&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;commonjs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;module&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="sr"&gt;js$/&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;use&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;babel-loader&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;colors&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;web&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;externals&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sr"&gt;/k6&lt;/span&gt;&lt;span class="se"&gt;(\/&lt;/span&gt;&lt;span class="sr"&gt;.*&lt;/span&gt;&lt;span class="se"&gt;)?&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;devtool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;source-map&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;and the &lt;code&gt;.babelrc&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"presets"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"@babel/preset-env"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"useBuiltIns"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"usage"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"corejs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We'll also modify our &lt;code&gt;package.json&lt;/code&gt; so that we can launch our tests using yarn:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="err"&gt;

&lt;/span&gt;{
  "name": "k6-faker",
  "scripts": {
&lt;span class="gi"&gt;+   "pretest": "webpack",
+   "test": "k6 run ./dist/test.main.js"
&lt;/span&gt;  },
  ...
}
&lt;span class="err"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;h3&gt;
  
  
  🧠  Did you know?
&lt;/h3&gt;

&lt;p&gt;Prefixing a script name with &lt;code&gt;pre&lt;/code&gt; or &lt;code&gt;post&lt;/code&gt; results in that script running before or after the script you're invoking. In this case, the &lt;code&gt;pretest&lt;/code&gt; script ensures that every time we run our test, webpack first creates a fresh bundle from the source code. Sweet, huh? 👍🏻&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Enter Faker!
&lt;/h2&gt;

&lt;p&gt;Let's get right into it then! The first step is to add faker to our dependencies:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;yarn add faker


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Faker has quite an extensive library of data that it's able to generate, ranging from company details to catchphrases and profile pictures. While these are all handy to have, we'll only use a tiny subset of what faker has to offer. Our object follows this structure:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'jane&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;doe'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;title:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'intergalactic&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;empress'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;company:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Worldeaters&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Inc'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;email:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'jane@doe.example'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;country:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'N/A'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
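
&lt;p&gt;For reference, the structure above only scratches the surface: faker ships with many more generators. A tiny, hedged sketch of a few of them, based on the classic faker API (exact method names may vary between faker versions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
// faker-sampler.js - illustrative only, not part of the test itself
import * as faker from 'faker/locale/en_US';

// A few of the other generators faker exposes:
console.log(faker.company.catchPhrase()); // marketing-style catchphrase
console.log(faker.internet.userName());   // plausible-looking username
console.log(faker.image.avatar());        // URL to a random profile picture
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;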

&lt;p&gt;We'll now go ahead and create a small service that we can use to generate these subscribers:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="c1"&gt;// subscriber.js&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;faker&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;faker/locale/en_US&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;generateSubscriber&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`SUBSCRIPTION_TEST - &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;faker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;firstName&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;faker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lastName&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;faker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;jobTitle&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;company&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;faker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;company&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;companyName&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;faker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;internet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;email&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;country&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;faker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;address&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;country&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;h3&gt;
  
  
  👿 Possible performance issues ahead!
&lt;/h3&gt;

&lt;p&gt;Every dependency we add tends to balloon the memory consumption to some extent, and the cost is multiplied when the script scales up to 300 concurrent VUs. Because of this, it's crucial that we only import the locale(s) we are actually using in our test case.&lt;/p&gt;

&lt;p&gt;While putting together the example repository for this article, I noticed that using faker adds about 2.3MB of memory per VU, which for 300 VUs resulted in a total memory footprint of around 1.5GB.&lt;/p&gt;

&lt;p&gt;You can read more about JavaScript performance in k6 and how to tune it &lt;a href="https://k6.io/docs/using-k6/javascript-compatibility-mode" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
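
&lt;p&gt;To make that locale advice concrete, here is a minimal sketch contrasting the two import styles. The first form pulls in every locale faker ships with; the second only loads the one we actually use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
// Avoid: imports faker with every bundled locale, inflating each VU's memory footprint.
// import * as faker from 'faker';

// Prefer: only the en_US locale used by this test.
import * as faker from 'faker/locale/en_US';
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;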

&lt;p&gt;You might have noticed that we prepend the name of the generated user with &lt;code&gt;SUBSCRIPTION_TEST&lt;/code&gt;. Adding a unique identifier to your test data is something I find convenient, as it lets me quickly filter out all the dummy data I've created as part of a test. While optional, this is usually a good idea, especially if you test against an environment that you can't easily prune.&lt;/p&gt;
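
&lt;p&gt;As a purely hypothetical sketch of why that marker is handy, a cleanup script could filter on the prefix. The &lt;code&gt;fetchSubscribers&lt;/code&gt; and &lt;code&gt;deleteSubscriber&lt;/code&gt; helpers below are made up for illustration; a real environment would use its own API:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
// cleanup.js - hypothetical sketch; fetchSubscribers/deleteSubscriber are stand-ins

const TEST_MARKER = 'SUBSCRIPTION_TEST';

async function pruneTestSubscribers() {
  // Hypothetical call returning all subscribers in the target environment
  const subscribers = await fetchSubscribers();

  // Keep only the entries created by our load test
  const testData = subscribers.filter(function (subscriber) {
    return subscriber.name.startsWith(TEST_MARKER);
  });

  // Hypothetical delete call for each piece of dummy data
  await Promise.all(testData.map(function (subscriber) {
    return deleteSubscriber(subscriber.id);
  }));
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;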




&lt;h2&gt;
  
  
  Final assembly
&lt;/h2&gt;

&lt;p&gt;Now, let's put it all together!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="c1"&gt;// index.js&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;sleep&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;k6&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;k6/http&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Rate&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;k6/metrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;generateSubscriber&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./subscriber&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;baseUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://httpbin.test.loadimpact.com/anything&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;urls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;form&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;baseUrl&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/form`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;baseUrl&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/form/subscribe`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;formFailRate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Rate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form fetches&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;submitFailRate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Rate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form submits&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;vus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;10s&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;thresholds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form submits&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rate&amp;lt;0.1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;failed form fetches&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rate&amp;lt;0.1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http_req_duration&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;p(95)&amp;lt;400&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getForm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;formResult&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;urls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;form&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;formFailRate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;formResult&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;submitForm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;person&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generateSubscriber&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;    
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;person&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;submitResult&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;urls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;submitFailRate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;submitResult&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;getForm&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nf"&gt;submitForm&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="c1"&gt;// subscriber.js&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;faker&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;faker/locale/en_US&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;generateSubscriber&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`SUBSCRIPTION_TEST - &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;faker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;firstName&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;faker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lastName&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;faker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;jobTitle&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;company&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;faker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;company&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;companyName&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;faker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;internet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;email&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;country&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;faker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;address&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;country&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And with that, we're ready to go:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fg89wqlleq9867qw8ce7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fg89wqlleq9867qw8ce7r.png" alt="Running k6 with faker, thresholds and http requests"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Closing thoughts
&lt;/h2&gt;

&lt;p&gt;While the flexibility you get by combining the JavaScript engine used in k6 with webpack and babel is nearly endless, it's essential to keep track of the memory consumption and performance of the actual test. After all, getting misleading results because our load generator ran out of resources is not particularly helpful. &lt;/p&gt;

&lt;p&gt;All the code from this article is available as an example repository on&lt;br&gt;
&lt;a href="https://github.com/k6io/example-data-generation" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, which I try to keep up to date with new versions of k6 and faker.&lt;/p&gt;

&lt;p&gt;I'd love to hear your thoughts, so please hit me up with questions and comments in the field below. 👇🏼&lt;/p&gt;

</description>
      <category>testing</category>
      <category>performance</category>
      <category>tutorial</category>
      <category>javascript</category>
    </item>
    <item>
      <title>What is the favorite thing you keep on your desk?</title>
      <dc:creator>Simme</dc:creator>
      <pubDate>Sat, 14 Mar 2020 14:24:44 +0000</pubDate>
      <link>https://dev.to/simme/what-is-the-favorite-thing-you-keep-on-your-desk-2fjl</link>
      <guid>https://dev.to/simme/what-is-the-favorite-thing-you-keep-on-your-desk-2fjl</guid>
      <description>&lt;p&gt;Keeping figurines, rubber ducks, alarm clocks and other kinds of decoration on ones desk seems to be quite common these days.&lt;/p&gt;

&lt;p&gt;It's not hard to get that you'd like to pimp out the place where you spend roughly eight hours every day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I'm curious as to what you keep on your desks, and more specifically: which one is your favorite, and why?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My favorite? A little vault boy bobblehead that I bought at a carnival. Why? He's happy, always seems to dig the music I play (he's a bobblehead, after all) and is always prepared to take on whatever life throws at him. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WnnBR8y2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/geyvy8kc9tctmsstr0s7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WnnBR8y2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/geyvy8kc9tctmsstr0s7.jpg" alt="Vault Boy figurine"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>workstations</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>Open source load testing tool review 2020</title>
      <dc:creator>Simme</dc:creator>
      <pubDate>Wed, 04 Mar 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/k6/open-source-load-testing-tool-review-2020-5466</link>
      <guid>https://dev.to/k6/open-source-load-testing-tool-review-2020-5466</guid>
      <description>&lt;blockquote&gt;
&lt;h3&gt;
  
  
  This article is a guest post, written by &lt;a href="https://twitter.com/RagnarLonn"&gt;Ragnar Lönn&lt;/a&gt;.
&lt;/h3&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Comparing the best open source load testing tools since 2017!
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;It&lt;/strong&gt; has been almost three years since we published our &lt;a href="https://dev.to/blog/ref-open-source-load-testing-tool-review"&gt;first comparison &amp;amp; benchmark articles&lt;/a&gt;, which have become very popular, and we thought an update seemed overdue, as some tools have changed a lot in the past couple of years. For this update, we decided to put everything into one huge article - making it more of a guide for those trying to choose a tool.&lt;/p&gt;

&lt;p&gt;First, a disclaimer: &lt;em&gt;I, the author, have tried to be impartial, but given that I helped create one of the tools in the review (k6), I am bound to have some bias towards that tool. Feel free to read between the lines and be suspicious of any positive things I write about k6 ;)&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  About the review
&lt;/h3&gt;

&lt;p&gt;The list of tools we look at hasn’t changed much. We have left out &lt;a href="http://grinder.sourceforge.net"&gt;The Grinder&lt;/a&gt; from the review because, despite being a competent tool that we like, it doesn’t seem to be actively developed anymore, which makes it more troublesome to install (it requires old Java versions), and it also doesn’t seem to have many users out there. A colleague working with k6 suggested we add a tool built in Rust and thought &lt;a href="https://github.com/fcsonline/drill"&gt;Drill&lt;/a&gt; seemed a good choice, so we added that to the review. Here is the full list of tools tested, along with the versions we tested:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apachebench 2.3&lt;/li&gt;
&lt;li&gt;Artillery 1.6.0&lt;/li&gt;
&lt;li&gt;Drill 0.5.0 (new)&lt;/li&gt;
&lt;li&gt;Gatling 3.3.1&lt;/li&gt;
&lt;li&gt;Hey 0.1.2&lt;/li&gt;
&lt;li&gt;Jmeter 5.2.1&lt;/li&gt;
&lt;li&gt;k6 0.26.0&lt;/li&gt;
&lt;li&gt;Locust 0.13.5&lt;/li&gt;
&lt;li&gt;Siege 4.0.4&lt;/li&gt;
&lt;li&gt;Tsung 1.7.0&lt;/li&gt;
&lt;li&gt;Vegeta 12.7.0&lt;/li&gt;
&lt;li&gt;Wrk 4.1.0&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  So what did we test then?
&lt;/h3&gt;

&lt;p&gt;Basically, this review centers around two things:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Tool performance&lt;/strong&gt;&lt;br&gt;
        How efficient is the tool at generating traffic and how accurate are its measurements?&lt;br&gt;&lt;br&gt;
&lt;strong&gt;2. Developer UX&lt;/strong&gt;&lt;br&gt;
        How easy and convenient is the tool to use, for a developer like myself?    &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automating&lt;/strong&gt; load tests is becoming more and more of a focus for developers who do load testing, and while there wasn’t time to properly integrate each tool into a CI test suite, the author tried to figure out how well suited each tool is to automated testing by downloading, installing and running each one from the command line and via scripted execution.&lt;/p&gt;

&lt;p&gt;The review contains hard numbers for things like tool performance, but also a lot of very subjective opinions from the author on various aspects, or behaviour, of the tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;All clear?&lt;/strong&gt; Let’s do it! The rest of the article is written in first-person format to make it hopefully more engaging (or at least you’ll know who to blame when you disagree with something).&lt;/p&gt;
&lt;h3&gt;
  
  
  Chapters
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Chapter 1: History and status&lt;/strong&gt;&lt;br&gt;
        Where did the tools come from and which ones are actively developed/maintained?&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Chapter 2: Usability review&lt;/strong&gt;&lt;br&gt;
        What functionality do they have and how easy are they to use for a developer?&lt;br&gt;&lt;br&gt;
The top non-scriptable tools&lt;br&gt;&lt;br&gt;
The top scriptable tools&lt;br&gt;&lt;br&gt;
The rest of the tools  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chapter 3: Performance review and benchmarks&lt;/strong&gt;&lt;br&gt;
        How efficient are the tools at generating traffic and how accurate are their measurements?&lt;br&gt;&lt;br&gt;
Max traffic generation capability&lt;br&gt;&lt;br&gt;
Memory usage&lt;br&gt;&lt;br&gt;
Measurement accuracy  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;End summary&lt;/strong&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Chapter 1: History and status
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Tool overview
&lt;/h3&gt;

&lt;p&gt;Here is a table with some basic information about the tools in the review.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Tool&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Apachebench&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Artillery&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Drill&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;Gatling&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Created by&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Apache foundation&lt;/td&gt;
&lt;td&gt;Shoreditch Ops LTD&lt;/td&gt;
&lt;td&gt;Ferran Basora&lt;/td&gt;
&lt;td&gt;Gatling Corp&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;License&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;MPL2&lt;/td&gt;
&lt;td&gt;GPL3&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Written in&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;td&gt;NodeJS&lt;/td&gt;
&lt;td&gt;Rust&lt;/td&gt;
&lt;td&gt;Scala&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scriptable&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes: JS&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes: Scala&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Multithreaded&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Distributed load generation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No (Premium)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No (Premium)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Website&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://httpd.apache.org/docs/2.4/programs/ab.html"&gt;httpd.apache.org&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://artillery.io/"&gt;artillery.io&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/fcsonline/drill"&gt;github.com/fcsonline&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://gatling.io/"&gt;gatling.io&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Source code&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/support/"&gt;svn.apache.org&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/artilleryio/artillery"&gt;github.com/artilleryio&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/fcsonline/drill"&gt;github.com/fcsonline&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/gatling/gatling"&gt;github.com/gatling&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Tool&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Hey&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;JMeter&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;k6&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Locust&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Created by&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Jaana B Dogan&lt;/td&gt;
&lt;td&gt;Apache foundation&lt;/td&gt;
&lt;td&gt;Load Impact&lt;/td&gt;
&lt;td&gt;Jonathan Heyman&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;License&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;AGPL3&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Written in&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Go&lt;/td&gt;
&lt;td&gt;Java&lt;/td&gt;
&lt;td&gt;Go&lt;/td&gt;
&lt;td&gt;Python&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scriptable&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Limited (XML)&lt;/td&gt;
&lt;td&gt;Yes: JS&lt;/td&gt;
&lt;td&gt;Yes: Python&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Multithreaded&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Distributed load generation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No (Premium)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Website&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/rakyll/hey"&gt;github.com/rakyll&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://jmeter.apache.org/"&gt;jmeter.apache.org&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://k6.io"&gt;k6.io&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://locust.io/"&gt;locust.io&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Source code&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/rakyll/hey"&gt;github.com/rakyll&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/apache/jmeter"&gt;github.com/apache&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/loadimpact/k6"&gt;loadimpact@github&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/locustio/locust"&gt;locustio@github&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Tool&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Siege&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Tsung&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Vegeta&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Wrk&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Created by&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Jeff Fulmer&lt;/td&gt;
&lt;td&gt;Nicolas Niclausse&lt;/td&gt;
&lt;td&gt;Tomás Senart&lt;/td&gt;
&lt;td&gt;Will Glozer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;License&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;GPL3&lt;/td&gt;
&lt;td&gt;GPL2&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;td&gt;Apache 2.0 modified&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Written in&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;td&gt;Erlang&lt;/td&gt;
&lt;td&gt;Go&lt;/td&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scriptable&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Limited (XML)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes: Lua&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Multithreaded&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Distributed load generation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Website&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.joedog.org/siege-home/"&gt;joedog.org&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="http://tsung.erlang-projects.org/"&gt;erland-projects.org&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/tsenart/vegeta"&gt;tsenart@github&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/wg/wrk"&gt;wg@github&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Source code&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/JoeDog/siege"&gt;JoeDog@github&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/processone/tsung"&gt;processone@github&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/tsenart/vegeta"&gt;tsenart@github&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/wg/wrk"&gt;wg@github&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h3&gt;
  
  
  Development status
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;OK,&lt;/strong&gt; so which tools are being actively developed today, early 2020?&lt;/p&gt;

&lt;p&gt;I looked at the software repositories of the different tools and counted commits and releases since late 2017, when I did the last tool review. Apachebench doesn't have its own repo but is part of Apache httpd, so I skipped it here; Apachebench is fairly dead development-wise anyway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FyT2O83M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/02ce8518fe122cd24465ccc1785f6dd7/04ff3/open-source-load-testing-tools-project-activity.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FyT2O83M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/02ce8518fe122cd24465ccc1785f6dd7/04ff3/open-source-load-testing-tools-project-activity.png" alt="A chart with the project activity of the best open-source load testing tools" title="A chart with the project activity of the best open-source load testing tools"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's positive to see that several of the projects seem to be moving fast! Jmeter could do with more frequent releases perhaps? Locust seems to have picked up speed the past year, as it had only 100 commits and one release in 2018, but in 2019 it had 300 commits and 10 releases. And looking at the sheer number of commits, Gatling, Jmeter and k6 seem to be moving very fast.&lt;/p&gt;

&lt;p&gt;Looking at Artillery gives me the feeling that the open source version gets a lot less attention than the premium version. Reading the Artillery Pro &lt;a href="https://artillery.io/docs/pro/changelog/"&gt;Changelog&lt;/a&gt; (there seems to be no changelog for Artillery open source), it looks as if Artillery Pro has gotten a lot of new features in the past two years, but when checking commit messages in the GitHub repo of the open source Artillery, I see what looks mostly like occasional bug fixes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apachebench&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This old-timer was created as part of the tool suite for the &lt;a href="https://httpd.apache.org"&gt;Apache httpd&lt;/a&gt; webserver. It's been around since the late 90's and was apparently an offshoot of a similar tool created by Zeus Technology, to test the Zeus web server (an old competitor to Apache's and Microsofts web servers). Not much is happening with Apachebench these days, development-wise, but due to it being available to all who install the tool suite for Apache httpd, it is very accessible and most likely used by many, many people to run quick-and-dirty performance tests against e.g. a newly installed HTTP server. It might also be used in quite a few automated test suites.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Artillery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Shoreditch Ops LTD in London created Artillery. These guys are a bit anonymous, but I seem to remember them being some kind of startup that pivoted into load testing either before or after Artillery became popular out there. Of course, I also remember other things that never happened, so who knows. Anyway, the project seems to have started sometime in 2015 and was named "Minigun" before it got its current name.&lt;/p&gt;

&lt;p&gt;Artillery is written in JavaScript and uses &lt;a href="https://nodejs.org/en/"&gt;NodeJS&lt;/a&gt; as its engine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Drill&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Drill is the newest of the bunch. It appeared in 2018 and is the only tool written in &lt;a href="https://www.rust-lang.org"&gt;Rust&lt;/a&gt;. Apparently, the author - Ferran Basora - wrote it as a side project in order to learn Rust.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gatling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Gatling was first released in 2012 by a bunch of former consultants in Paris, France, who wanted to build a load testing tool that was better for test automation. In 2015 Gatling Corp was founded and the next year the premium SaaS product "Gatling Frontline" was released by Gatling Corp. On their web site they say they have seen over 3 million downloads to date - I'm assuming this is downloads of the OSS version.&lt;/p&gt;

&lt;p&gt;Gatling is written in &lt;a href="https://www.scala-lang.org"&gt;Scala&lt;/a&gt;, which is a bit weird of course, but it seems to work quite well anyway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hey&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hey used to be named Boom, after a &lt;a href="https://github.com/tarekziade/boom"&gt;Python load testing tool&lt;/a&gt; of that name, but the author apparently got tired of the confusion that caused, so she changed it. The new name keeps making me think "horse food" when I hear it, so I'm still confused, but the tool is quite ok. It's written in the fantastic &lt;a href="https://golang.org"&gt;Go&lt;/a&gt; language, and is fairly close to Apachebench in terms of functionality. The author stated that one aim when she wrote the tool was to replace Apachebench.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jmeter&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the old giant of the bunch. It also comes from the Apache software foundation, is a big, old Java app and has a ton of functionality, plus it is still being actively developed. The last two years it has seen more commits to its codebase than &lt;em&gt;any&lt;/em&gt; other tool in the review. I suspect that Jmeter is slowly losing market share to newer tools, like Gatling, but given how long it's been around and how much momentum it still has, it's a sure bet that it'll be here a long time yet. There are so many integrations, add-ons etc for Jmeter, and whole SaaS services built on top of it (like &lt;a href="https://blazemeter.com"&gt;Blazemeter&lt;/a&gt;), plus people have spent so much time learning how to use it, that it will be going strong for many more years.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;k6&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A super-awesome tool! Uh, well, like I wrote earlier I am somewhat biased here. But objective facts are these: k6 was released in 2017, so is quite new. It is written in Go, and a fun thing I just realized is that we then have a tie between Go and C - three tools in the review are written in C, and three in Go. My two favourite languages - is it coincidence or a pattern?!&lt;/p&gt;

&lt;p&gt;k6 was originally built by, and is still maintained by, &lt;a href="https://loadimpact.com"&gt;Load Impact&lt;/a&gt; - a SaaS load testing service. Load Impact has several people working full time on k6 and that, together with community contributions, means development is very active. Less known is why this tool is called "k6", but I'm happy to leak that information here: after a lengthy internal name battle that ended in a standoff, we had a 7-letter name starting with "k" that most people hated, so we shortened it to "k6" and that seemed to resolve the issue. You gotta love first-world problems!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Locust&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Locust is a very popular load testing tool that has been around since at least 2011, looking at the release history. It is written in &lt;a href="https://www.python.org"&gt;Python&lt;/a&gt;, which is like the cute puppy of programming languages - everyone loves it! This love has made Python huge, and Locust has also become very popular as there aren't really any other competent load testing tools that are Python-based (and Locust is scriptable in Python too!)&lt;/p&gt;

&lt;p&gt;Locust was created by a bunch of Swedes who needed the tool themselves. It is still maintained by the main author, Jonathan Heyman, but now has many external contributors as well. Unlike e.g. Artillery, Gatling and k6, there is no commercial business steering the development of Locust - it is (as far as I know) a true community effort. Development of Locust has been alternating between very active and not-so-active - I'm guessing it depends mainly on Jonathan's level of engagement. After a lull in 2018, the project has seen quite a few commits and releases in the past 18 months or so.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Siege&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Siege has also been around quite a while - since the early 2000's sometime. I'm not sure how much it is used but it is referenced in many places online. It was written by Jeff Fulmer and is still maintained by him. Development is ongoing, but a long time can pass between new releases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tsung&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our only Erlang contender! Tsung was written by Nicolas Niclausse and is based on an older tool called IDX-Tsunami. It is also old - dating from the early 2000s - and, like Siege, it is still developed, but at a snail's pace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vegeta&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vegeta is apparently some kind of manga superhero, or something. Damn it, now people will understand how old I am. Really, though, aren't all these aggressive-sounding names and word choices used for load testing software pretty silly? Like, you do &lt;code&gt;vegeta attack ...&lt;/code&gt; to start a load test. And don't get me started on "Artillery", "Siege", "Gatling" and the rest. Are we trying to impress an audience of five-year-olds? "Locust" is at least a little better (though the "hatching" and "swarming" it keeps doing is pretty cheesy).&lt;/p&gt;

&lt;p&gt;See? Now I went off on a tangent here. Mental slap! OK, back again. Vegeta seems to have been around since 2014, it's also written in Go and seems very popular (almost 14k stars on Github! The very popular Locust, for reference, has about 12k stars). The author of Vegeta is Tomás Senart and development seems quite active.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wrk&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Wrk is written in C, by Will Glozer. It's been around since 2012 so isn't exactly new, but I have been using it as kind of a performance reference point because it is ridiculously fast/efficient and seems like a very solid piece of software in general. It actually has over 23k stars on Github also, so it probably has a user base that is quite large even though it is less accessible than many other tools (you'll need to compile it). Unfortunately, Wrk isn't so actively developed. New releases are rare.&lt;/p&gt;

&lt;p&gt;I think someone should design a logotype for Wrk. It deserves one.&lt;/p&gt;


&lt;h2&gt;
  
  
  Chapter 2: Usability review
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;I'm&lt;/strong&gt; a developer, and I generally dislike point-and-click applications. I want to use the command line. I also like to automate things through scripting. I'm impatient and want to get things done. I'm kind of old, which in my case means I'm often a bit distrustful of new tech and prefer battle-proven stuff. You're probably different, so try to figure out what you can accept that I can't, and vice versa. Then you might get something out of reading my thoughts on the tools.&lt;/p&gt;

&lt;p&gt;What I've done is run all the tools manually on the command line and interpret the results, either printed to stdout or saved to a file. I have then created shell scripts to automatically extract and collate the results.&lt;/p&gt;

&lt;p&gt;Working with the tools has given me some insight into each tool and what its strengths and weaknesses are, for my particular use case. I imagine that the things I'm looking for are similar to what you're looking for when setting up automated load tests, but I might not consider all aspects, as I haven't truly integrated each tool into some CI test suite (that may be the next article to write). Just a disclaimer.&lt;/p&gt;

&lt;p&gt;Also, note that the &lt;em&gt;performance&lt;/em&gt; of the tools has coloured the usability review - if I feel that it's hard for me to generate the traffic I want to generate, or that I can't trust measurements from the tool, then the usability review will reflect that. If you want &lt;em&gt;details&lt;/em&gt; on performance you'll have to scroll down to the performance benchmarks, however.&lt;/p&gt;
&lt;h3&gt;
  
  
  RPS
&lt;/h3&gt;

&lt;p&gt;You will see the term &lt;strong&gt;&lt;em&gt;RPS&lt;/em&gt;&lt;/strong&gt; used liberally throughout this blog article. That acronym stands for "Requests Per Second", a measurement of how much traffic a load testing tool is generating.&lt;/p&gt;
&lt;h3&gt;
  
  
  VU
&lt;/h3&gt;

&lt;p&gt;This is another term used quite a lot. It is a (load) testing acronym that is short for "Virtual User". A Virtual User is a simulated human/browser. In a load test, a VU usually means a concurrent execution thread/context that sends out HTTP requests independently, allowing you to simulate many simultaneous users in a load test.&lt;/p&gt;
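
&lt;p&gt;For readers who haven't seen the concept in code before, here's a minimal k6-style sketch (used purely as an illustration): each VU independently loops over the exported default function for the duration of the test, so this script simulates 10 simultaneous users hitting the same URL:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
// vus.js - minimal VU illustration in k6's scripting API
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 10,          // 10 concurrent Virtual Users
  duration: '30s',  // each VU loops the default function for 30 seconds
};

export default function () {
  http.get('https://test.loadimpact.com/'); // one request per iteration, per VU
  sleep(1);                                 // think time between iterations
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;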
&lt;h3&gt;
  
  
  Scriptable tools vs non-scriptable ones
&lt;/h3&gt;

&lt;p&gt;I've decided to make a top list of my favourites both for tools that support scripting, and for those that don't. The reason for this is that whether you need scripting or not depends a lot on your use case, and there are a couple of very good tools that do not support scripting, that deserve to be mentioned here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's the difference between a scriptable and a non-scriptable tool?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A scriptable tool supports a real scripting language that you use to write your test cases in - e.g. Python, Javascript, Scala or Lua. That means you get maximum flexibility and power when designing your tests - you can use advanced logic to determine what happens in your test, you can pull in libraries for extra functionality, you can often split your code into multiple files, etc. It is, really, the "developer way" of doing things.&lt;/p&gt;

&lt;p&gt;Non-scriptable tools, on the other hand, are often simpler to get started with, as they don't require you to learn any specific scripting API. They also tend to be a bit less resource-hungry than the scriptable tools, as they don't need a scripting language runtime and execution contexts for script threads. So they are generally faster and consume less memory (though this isn't true in all cases). The negative side is that they're more limited in what they can do.&lt;/p&gt;
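
&lt;p&gt;To illustrate the kind of thing a non-scriptable tool simply can't express, here is a small k6-flavoured sketch (the URLs and logic are made up for the example): the script only drills down into a detail page when the first request succeeded:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
// scripted-logic.js - sketch of per-VU logic that requires a scriptable tool
import http from 'k6/http';
import { check, sleep } from 'k6';

export default function () {
  // Step 1: fetch a (made up) list endpoint
  const listRes = http.get('https://httpbin.test.loadimpact.com/anything/items');

  // Step 2: branch on the response - only continue if the list fetch succeeded
  if (check(listRes, { 'list fetched': (r) =&amp;gt; r.status === 200 })) {
    http.get('https://httpbin.test.loadimpact.com/anything/items/1');
  }

  sleep(1);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;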

&lt;p&gt;OK, let's get into the subjective tool review!&lt;/p&gt;
&lt;h3&gt;
  
  
  The top non-scriptable tools
&lt;/h3&gt;

&lt;p&gt;Here are my favourite non-scriptable tools, in alphabetical order.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hey&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LeUH3HkT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/c3d44988d7f861d3b270deb8c4f383d6/04ff3/hey-run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LeUH3HkT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/c3d44988d7f861d3b270deb8c4f383d6/04ff3/hey-run.png" alt="Hey runtime screenshot" title="Hey runtime screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hey is a simple tool, written in Go, with good performance and the most common features you'll need to run simple static URL tests. It lacks any kind of scripting, but can be a good alternative to tools like Apachebench or Wrk, for simple load tests. Hey supports HTTP/2, which neither Wrk nor Apachebench does, and while I didn't think HTTP/2 support was a big deal in 2017, today we see that HTTP/2 penetration is a lot higher than back then, so today it's more of an advantage for Hey, I'd say.&lt;/p&gt;

&lt;p&gt;Another potential reason to use Hey instead of Apachebench is that Hey is multi-threaded while Apachebench isn't. Apachebench is very fast, so often you will not need more than one CPU core to generate enough traffic, but if you do, then you'll be happier using Hey as its load generation capacity will scale pretty much linearly with the number of CPU cores on your machine.&lt;/p&gt;

&lt;p&gt;Hey has rate limiting, which can be used to run fixed-rate tests.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Hey help output&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qBU2g1jR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/e5805eb44bc86efb18a9493a76f28001/04ff3/hey-help.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qBU2g1jR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/e5805eb44bc86efb18a9493a76f28001/04ff3/hey-help.png" alt="Hey help screenshot" title="Hey help screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Hey summary&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Hey is simple, but it does what it does very well. It's stable, among the more performant tools in the review, and has very nice output with response time histograms, percentiles and stuff. It also has rate limiting, which is something many tools lack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vegeta&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A31r0WJ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/35544ab93680da978b5867f74ad6ab63/04ff3/vegeta-run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A31r0WJ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/35544ab93680da978b5867f74ad6ab63/04ff3/vegeta-run.png" alt="Vegeta runtime screenshot" title="Vegeta runtime screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Vegeta has a lot of cool features, like the fact that its default mode is to send requests at a constant rate, adapting concurrency to try and achieve this rate. This is very useful for regression/automated testing, where you often want test runs to be as identical to each other as possible, as that makes it more likely that any deviating results are caused by a regression in newly committed code.&lt;/p&gt;

&lt;p&gt;Vegeta is written in Go (yay), performs very well, supports HTTP/2, has several output formats, flexible reporting and can generate graphical response time plots.&lt;/p&gt;

&lt;p&gt;If you look at the runtime screenshot above, you'll see that it is quite obvious that Vegeta was designed to be run on the command line; it reads from &lt;code&gt;stdin&lt;/code&gt; a list of HTTP transactions to generate, and sends results in &lt;em&gt;binary&lt;/em&gt; format to stdout, where you're supposed to redirect to a file or pipe them directly to another Vegeta process that then generates a report from the data.&lt;/p&gt;

&lt;p&gt;This design provides a lot of flexibility and supports use cases like basic load distribution: remotely executing Vegeta on different hosts, copying the binary output from each Vegeta "slave" and piping it all into one Vegeta process that generates a report. You also "feed" Vegeta its list of URLs to hit over stdin, which means you could have a piece of software executing complex logic to generate that list (though that program would not have access to the results of the transactions, so I'm not sure how useful such a setup would really be).&lt;/p&gt;
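
&lt;p&gt;To make the stdin/stdout design more concrete, here's a minimal sketch of such a pipeline (the flag names are from the Vegeta docs - double-check against &lt;code&gt;vegeta -h&lt;/code&gt; for your version, and the target URL is again just a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# targets come in over stdin, binary results go out over stdout,
# and a second vegeta process turns them into a report
echo "GET http://192.168.0.121:8080/" | vegeta attack -rate=100 -duration=30s | tee results.bin | vegeta report
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;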

&lt;p&gt;The slightly negative side is that the command-line UX is not what you might be used to from other load testing tools, and it isn't the simplest either if you just want to run a quick command-line test hitting a single URL with some traffic.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Vegeta help output&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fvkzYbHj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/c1ec47e963afbd4919c0f2058db4d170/04ff3/vegeta-help.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fvkzYbHj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/c1ec47e963afbd4919c0f2058db4d170/04ff3/vegeta-help.png" alt="Vegeta help screenshot" title="Vegeta help screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Vegeta summary&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Overall, Vegeta is a really strong tool that caters to people who want a tool to test simple, static URLs (perhaps API end points) but also want a bit more functionality. &lt;em&gt;Or&lt;/em&gt; people who want to assemble their own load testing solution and need a flexible load generator component that they can use in different ways. Vegeta can even be used as a Golang library/package if you want to create your own load testing tool.&lt;/p&gt;

&lt;p&gt;The biggest flaw (when I'm the user) is the lack of programmability/scripting, which makes it a little less developer-centric.&lt;/p&gt;

&lt;p&gt;I would definitely use Vegeta for simple, automated testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wrk&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TRJWBUeh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/114d7b2d257df8cf7507d4807cf6e232/04ff3/wrk-run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TRJWBUeh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/114d7b2d257df8cf7507d4807cf6e232/04ff3/wrk-run.png" alt="Wrk runtime screenshot" title="Wrk runtime screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wrk may be a bit dated, and doesn't get a lot of new features these days, but it is such a !#&amp;amp;%€ solid piece of code. It always behaves like you expect it to, and it runs circles around all other tools in terms of speed/efficiency. If you use Wrk you will be able to generate 5 times as much traffic as you will with k6, on the same hardware. If you think that makes k6 sound bad, think again, because it is not that k6 is slow - it's just that Wrk is so damn fast. Comparing it to other tools, Wrk is 10 times faster than Gatling, 15-20 times faster than Locust and over 100 times faster than Artillery.&lt;/p&gt;

&lt;p&gt;The comparison is a bit unfair, as several of the tools let their VU threads run much more sophisticated script code than what Wrk allows, but still. You'd think Wrk offered no scripting at all, but it actually allows you to execute Lua code in the VU threads, so in theory you can create test code that is quite complex. In practice, however, the Wrk scripting API is callback-based and not very well suited to writing complicated test logic. But it is very fast. I did not execute Lua code when testing Wrk this time - I used the single-URL test mode instead - but previous tests have shown Wrk performance to be only minimally impacted when executing Lua code.&lt;/p&gt;
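
&lt;p&gt;For reference, a basic Wrk run looks roughly like this (a sketch - &lt;code&gt;script.lua&lt;/code&gt; is a hypothetical Lua file you'd supply yourself):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 4 threads, 100 connections, 30 seconds against a single static URL
wrk -t4 -c100 -d30s http://192.168.0.121:8080/

# the same run, but with the request logic supplied as Lua
wrk -t4 -c100 -d30s -s script.lua http://192.168.0.121:8080/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;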

&lt;p&gt;However, being fast and measuring correctly is about all that Wrk does. It has no HTTP/2 support, no fixed request rate mode, no output options, no simple way to generate pass/fail results in a CI setting, etc. In short, it is quite feature-sparse.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Wrk help output&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--n2NSnadb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/9edfd56771f71e22d0b8f6c502e9d237/04ff3/wrk-help.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n2NSnadb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/9edfd56771f71e22d0b8f6c502e9d237/04ff3/wrk-help.png" alt="Wrk help screenshot" title="Wrk help screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Wrk summary&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Wrk is included among the top non-scriptable tools because if your only goal is to generate a truckload of simple traffic against a site, there is no tool that does it more efficiently. It will also give you accurate measurements of transaction response times, which is something many other tools fail at when they're being forced to generate a lot of traffic.&lt;/p&gt;
&lt;h3&gt;
  
  
  The top scriptable tools
&lt;/h3&gt;

&lt;p&gt;To me, this is the most interesting category because here you'll find the tools that can be &lt;em&gt;programmed&lt;/em&gt; to behave in whatever strange ways you desire!&lt;/p&gt;

&lt;p&gt;Or, to put it in a more boring way, here are the tools that allow you to write test cases as pure code, like you're used to if you're a developer.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note that I list the top tools in alphabetical order - I won't rank them because lists are silly. Read the information and then use that lump that sits on top of your neck to figure out which tool YOU should use.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gatling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CkVWL7TU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/68a62d8acfa15cee6678e1e3e7e18c57/04ff3/gatling-run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CkVWL7TU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/68a62d8acfa15cee6678e1e3e7e18c57/04ff3/gatling-run.png" alt="Gatling runtime screenshot" title="Gatling runtime screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Gatling isn't actually a favourite of mine, because it is a Java app and I don't like Java apps. Java apps are probably easy to use for people who spend their whole day working in a Java environment, but for others they are definitely &lt;em&gt;not&lt;/em&gt; user-friendly. Whenever something fails in an app written in almost any other language you'll get an error message that often helps you figure out what the problem is. If a Java app fails, you'll get 1,000 lines of stack trace and repeated, generic error messages that are of absolutely zero help whatsoever. Also, running Java apps often requires manual tweaking of JVM runtime parameters. Perhaps Java is well suited for large enterprise backend software, but not for command-line apps like a load testing tool, so being a Java app is a clear minus in my book.&lt;/p&gt;

&lt;p&gt;If you look at the screenshot above, you'll note that you have to add parameters to your test inside a "JAVA_OPTS" environment variable, which is then read from your Gatling Scala script. There are no parameters you can give Gatling to affect concurrency/VUs, duration or similar; this has to come from the Scala code itself. This way of doing things is nice when you're only running something in an automated fashion, but kind of painful if you want to run a couple of manual tests on the command line.&lt;/p&gt;

&lt;p&gt;Despite the Java-centricity (or is it "Java-centrism"?), I have to say that Gatling is a quite nice load testing tool. Its performance is not great, but probably adequate for most people. It has a decent scripting environment based on Scala. Again, Scala is not my thing but if you're into it, or Java, it should be quite convenient for you to script test cases with Gatling.&lt;/p&gt;

&lt;p&gt;Here is what a very simple Gatling script may look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class GatlingSimulation extends Simulation {

  val vus = Integer.getInteger("vus", 20)
  val duration = Integer.getInteger("duration", 10)

  val scn = scenario("Scenario Name") // A scenario is a chain of requests and pauses
    .during(duration) {
      exec(http("request_1").get("http://192.168.0.121:8080/"))
    }

  setUp(scn.inject(atOnceUsers(vus)))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
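
&lt;p&gt;Since the script reads &lt;code&gt;vus&lt;/code&gt; and &lt;code&gt;duration&lt;/code&gt; as Java system properties, a manual run might be started something like this (a sketch - the launcher path and the &lt;code&gt;-s&lt;/code&gt; flag for selecting the simulation class may differ between Gatling versions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# pass test parameters as system properties via JAVA_OPTS,
# then tell the Gatling launcher which simulation class to run
JAVA_OPTS="-Dvus=50 -Dduration=60" ./bin/gatling.sh -s GatlingSimulation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;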



&lt;p&gt;The scripting API seems capable and it can generate pass/fail results based on user-definable conditions.&lt;/p&gt;

&lt;p&gt;I don't like the text-based menu system you get by default when starting Gatling. Luckily, that can be skipped by using the right command-line parameters. If you dig into it just a little bit, Gatling is quite simple to run from the command line.&lt;/p&gt;

&lt;p&gt;The documentation for Gatling is very good, which is a big plus for any tool.&lt;/p&gt;

&lt;p&gt;Gatling has a recording tool that looks competent, though I haven't tried it myself as I'm more interested in scripting scenarios to test individual API end points, not record "user journeys" on a web site. But I imagine many people who run complex load test scenarios simulating end user behaviour will be happy the recorder exists.&lt;/p&gt;

&lt;p&gt;Gatling will by default report results to stdout and generate nice HTML reports (using my favourite charting library - &lt;a href="https://highcharts.com"&gt;Highcharts&lt;/a&gt;) after the test has finished. It's nice to see that it has lately also gotten support for results output to Graphite/InfluxDB and visualization using Grafana.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Gatling help output&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TbRTBbyR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/085dee787a30df126b714c7a48544c36/04ff3/gatling-help.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TbRTBbyR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/085dee787a30df126b714c7a48544c36/04ff3/gatling-help.png" alt="Gatling help screenshot" title="Gatling help screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Gatling summary&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Overall, Gatling is a very competent tool that is actively maintained and developed. If you're using Jmeter today, you should definitely take a look at Gatling, just to see what you're missing (hint: usability!).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;k6&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8Uoxs1VE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/a3f63003df14745279224f80257eceab/04ff3/k6-run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8Uoxs1VE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/a3f63003df14745279224f80257eceab/04ff3/k6-run.png" alt="k6 runtime screenshot" title="k6 runtime screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As I was involved in the creation of k6, it's not strange that I like the choices that project has made. The idea behind k6 was to create a high-quality load testing tool for the modern developer, which allowed you to write tests as pure code, had a simple and consistent command-line UX, had useful results output options, and had good enough performance. I think all these goals have been pretty much fulfilled, and that this makes k6 a very compelling choice for a load testing tool. Especially for a developer like myself.&lt;/p&gt;

&lt;p&gt;k6 is scriptable in plain Javascript and has what I think is the nicest scripting API of all the tools I've tested. The API makes it easy to perform common operations, test that things behave as expected, and control pass/fail behaviour for automated testing. Here is what a very simple k6 script might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import http from 'k6/http';
import { check } from 'k6';

export default function() {
  var res = http.get('http://192.168.0.121:8080/');
  check(res, {
    'is status 200': r =&amp;gt; r.status === 200
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above script will make each VU generate an HTTP transaction and then check that the response code was 200. The status of a check like this is printed on stdout, and you can set up thresholds to fail the test if a big enough percentage of your checks are failing. The k6 scripting API makes writing automated performance tests a very nice experience, IMO.&lt;/p&gt;

&lt;p&gt;The k6 command-line interface is simple, intuitive and consistent - it feels modern. k6 is among the faster tools in this review, supports all the basic protocols (HTTP 1/2/WebSocket) and has multiple output options (text, JSON, InfluxDB, StatsD, Datadog, Kafka). Recording traffic from a browser is pretty easy, as k6 can convert HAR files to k6 script, and the major browsers can record sessions and save them as HAR files. There are also options to convert e.g. Postman collections to k6 script code. Oh yeah, and the documentation is stellar overall (though I just spoke to a guy working on the docs and he was dissatisfied with the state they're in today, which I think is great - when a product developer is satisfied, the product stagnates).&lt;/p&gt;
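
&lt;p&gt;As an illustration of that command-line UX, here's a rough sketch (the file names are placeholders, and the exact flags of &lt;code&gt;k6 convert&lt;/code&gt; may differ between versions - see &lt;code&gt;k6 help&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# run a script with 10 VUs for 30 seconds, streaming results as JSON
k6 run --vus 10 --duration 30s --out json=results.json script.js

# turn a browser-recorded HAR file into a k6 script
k6 convert session.har --output script.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;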

&lt;p&gt;What does k6 lack then? Well, load generation distribution is not included, so if you want to run really large-scale tests you'll have to buy the premium SaaS version (that has distributed load generation). On the other hand, its performance means you're not very likely to run out of load generation capacity on a single physical machine anyway. It doesn't come with any kind of web UI, if you're into such things. I'm not.&lt;/p&gt;

&lt;p&gt;One thing people may expect, but which k6 doesn't have, is NodeJS-compatibility. Many (perhaps even most?) NodeJS libraries can &lt;em&gt;not&lt;/em&gt; be used in k6 scripts. If you need to use NodeJS libs, Artillery may be your only safe choice (oh nooo!).&lt;/p&gt;

&lt;p&gt;Otherwise, the only thing &lt;em&gt;I&lt;/em&gt; don't like about k6 is the fact that I have to script my tests in Javascript! JS is not my favourite language, and personally, I would have preferred using Python or Lua - the latter being a scripting language Load Impact has been using for years to script load tests and which is very resource-efficient. But in terms of market penetration, Lua is a fruit fly whereas JS is an elephant, so choosing JS over Lua was wise. And to be honest, as long as the scripting is not done in XML (or Java), I'm happy.&lt;/p&gt;

&lt;p&gt;As mentioned earlier, the open source version of k6 is being &lt;em&gt;very&lt;/em&gt; actively developed, with new features added all the time. Do check out the &lt;a href="https://github.com/loadimpact/k6/releases"&gt;Release notes/Changelog&lt;/a&gt; which, btw, are some of the best written that I've ever seen (thanks to the maintainer @na-- who is an ace at writing these things).&lt;/p&gt;

&lt;p&gt;&lt;u&gt;k6 help output&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;k6 also deserves a shoutout for its built-in help, which is way nicer than that of any other tool in this review. It has a docker-style, multi-level &lt;code&gt;k6 help&lt;/code&gt; command where you can give arguments to display help for specific commands. E.g. &lt;code&gt;k6 help run&lt;/code&gt; will give you an extensive help text showing how to use the &lt;code&gt;run&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Wi9ZxqNj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/dcbdd97baa3f280219de2145b8bbfc9e/04ff3/k6-help1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Wi9ZxqNj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/dcbdd97baa3f280219de2145b8bbfc9e/04ff3/k6-help1.png" alt="k6 help screenshot" title="k6 help screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;k6 summary&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Not that I'm biased or anything, but I think k6 is way ahead of the other tools when you look at the whole experience for a developer. There are faster tools, but none faster that also supports sophisticated scripting. There are tools that support more protocols, but k6 supports the most important ones. There are tools with more output options, but k6 has more than most. In practically &lt;em&gt;every&lt;/em&gt; category, k6 is average or better. In some categories (documentation, scripting API, command line UX) it is outstanding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Locust&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EVCI-gBc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/2afc89887c3e3ab1b1bbafbe3b78784f/04ff3/locust-run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EVCI-gBc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/2afc89887c3e3ab1b1bbafbe3b78784f/04ff3/locust-run.png" alt="Locust runtime screenshot" title="Locust runtime screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The scripting experience with Locust is very nice. The Locust scripting API is pretty good, though somewhat basic, and it lacks some useful things other APIs have, such as custom metrics or built-in functions for generating pass/fail results when you want to run load tests in a CI environment.&lt;/p&gt;

&lt;p&gt;The big thing with Locust scripting though is this - you get to script in &lt;em&gt;Python&lt;/em&gt;! Your mileage may vary, but if I could choose any scripting language to use for &lt;em&gt;my&lt;/em&gt; load tests I would probably choose Python. Here's what a Locust script can look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from locust import TaskSet, task, constant
from locust.contrib.fasthttp import FastHttpLocust

class UserBehavior(TaskSet):
  @task
  def bench_task(self):
    while True:
      self.client.get("/")

class WebsiteUser(FastHttpLocust):
  task_set = UserBehavior
  wait_time = constant(0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nice, huh? Arguably even nicer than the look of a k6 script, but the API does lack some things like built-in support for pass/fail results (&lt;a href="https://blazemeter.com"&gt;Blazemeter&lt;/a&gt; has an &lt;a href="https://www.blazemeter.com/blog/locust-assertions-a-complete-user-manual/"&gt;article&lt;/a&gt; about how you can implement your own assertions for Locust, which involves generating a Python exception and getting a stack trace - sounds a bit like rough terrain to me). Also, the new FastHttpLocust class (read more about it below) seems a bit limited in functionality (e.g. not sure there is HTTP/2 support?)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Locust&lt;/strong&gt; has a nice command-and-control web UI that shows you live status updates for your tests and where you can stop the test or reset statistics. Plus it has easy-to-use load generation distribution built in. Starting a distributed load test with Locust is as simple as starting one Locust process with the &lt;code&gt;--master&lt;/code&gt; switch, then starting multiple processes with the &lt;code&gt;--slave&lt;/code&gt; switch and pointing them at the machine where the master is located.&lt;/p&gt;
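
&lt;p&gt;In command-line form, a distributed run looks roughly like this (a sketch using the flag names of the Locust version tested here; the master's IP address is just an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# on the machine that will coordinate the test and serve the web UI
locust -f locustfile.py --master

# on each load generator (or as extra processes on the same host, one per CPU core)
locust -f locustfile.py --slave --master-host=192.168.0.50
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;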

&lt;p&gt;Here is a screenshot from the UI when running a distributed test. As you can see I experienced some kind of UI issue (using Chrome 79.0.3945.130) that caused the live status data to get printed on top of the navigation menu bar (perhaps the &lt;code&gt;host&lt;/code&gt; string was too long?), but otherwise this web UI is neat and functional. Even if I wrote somewhere else that I'm not into web UIs, they can be quite nice sometimes when you're trying to control a number of slave load generators and stay on top of what's happening.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5jnG40Qo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/6a3a017021b423416b6767795816b187/04ff3/locust-webui2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5jnG40Qo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/6a3a017021b423416b6767795816b187/04ff3/locust-webui2.png" alt="Locust web UI screenshot" title="Locust web UI screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Python&lt;/strong&gt; is actually both the biggest upside &lt;em&gt;and&lt;/em&gt; the biggest downside with Locust. The downside part of it stems from the fact that Locust is written in Python. Python code is slow, and that affects Locust's ability to generate traffic and provide reliable measurements.&lt;/p&gt;

&lt;p&gt;The first time I benchmarked Locust, back in 2017, the performance was horrible. Locust used more CPU time to generate one HTTP request than &lt;em&gt;any&lt;/em&gt; other tool I tested. What made things even worse was that Locust was single-threaded, so if you did not run multiple Locust processes, Locust could only use one CPU core and would not be able to generate much traffic at all. Luckily, Locust had support for distributed load generation even then, and that made it go from the worst performer to the second worst, in terms of how much traffic it could generate from a single physical machine. Another negative thing about Locust back then was that it tended to add huge amounts of delay to response time measurements, making them very unreliable.&lt;/p&gt;

&lt;p&gt;The cool thing is that since then, the Locust developers have made some changes and really sped up Locust. This is unique, as all other tools have stayed still or regressed in performance over the past two years. Locust introduced a new Python class/lib called FastHttpLocust, which is a lot faster than the old HttpLocust class (which was built on the Requests library). In my tests now, I see a 4-5x speedup in terms of raw request generation capability, which is also in line with what the Locust authors describe in the docs. This means that a typical, modern server with 4-8 CPU cores should be able to generate 5,000-10,000 RPS running Locust in &lt;em&gt;distributed&lt;/em&gt; mode. &lt;strong&gt;Note&lt;/strong&gt; that distributed execution will often still be necessary, as Locust is single-threaded. In the benchmark tests I also note that Locust measurement accuracy degrades more gracefully with increased workload when you run it in distributed mode.&lt;/p&gt;

&lt;p&gt;The nice thing with these improvements, however, is that now, chances are a lot of people will find that a single physical server provides enough power for their load testing needs when they run Locust. They'll be able to saturate most internal staging systems, or perhaps even the production system (or a replica of it). Locust is still among the lower-performing tools in the review, but now it feels like performance is not making it unusable anymore.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Personally,&lt;/strong&gt; I'm a bit schizophrenic about Locust. I love that you can script in Python (and use a million Python libraries!). That is by far the biggest selling point for me. I like the built-in load generation distribution, but wouldn't trust that it scales for truly large-scale tests (I suspect the single &lt;code&gt;--master&lt;/code&gt; process will become a bottleneck pretty fast - would be interesting to test). I like the scripting API, although it wouldn't hurt if it had better support for pass/fail results, and the HTTP support in FastHttpLocust seems basic. I like the built-in web UI. I don't like the overall low performance that may force me to run Locust in distributed mode even when on a single host - having to provision multiple Locust instances is an extra complication I don't really want, especially for automated tests. I don't like the command-line UX so much.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;About distributed execution on a single host - I don't know how hard it would be to make Locust launch in &lt;code&gt;--master&lt;/code&gt; mode by default and have it automatically fire off multiple &lt;code&gt;--slave&lt;/code&gt; daughter processes, one per detected CPU core (everything of course configurable if the user wants to control it) - making it work more like e.g. Nginx. That would result in a nicer user experience, IMO, with a less complex provisioning process, at least when you're running it on a single host.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Locust help output&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--emRl4aky--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/2a67f7bc244cfeb97570d11b0de7a18c/04ff3/locust-help.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--emRl4aky--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/2a67f7bc244cfeb97570d11b0de7a18c/04ff3/locust-help.png" alt="Locust help screenshot" title="Locust help screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Locust summary&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;If it wasn't for k6, Locust would be my top choice. If you're really into Python you should absolutely take a look at Locust first and see if it works for you. I'd just make sure the scripting API allows you to do what you want to do in a simple manner and that performance is good enough, before going all in.&lt;/p&gt;

&lt;h3&gt;
  
  
  The rest of the tools
&lt;/h3&gt;

&lt;p&gt;Here are my comments on the rest of the tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apachebench&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eOndR8Jt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/3affb9f957c019b504647398fe15922f/04ff3/apachebench-run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eOndR8Jt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/3affb9f957c019b504647398fe15922f/04ff3/apachebench-run.png" alt="Apachebench runtime screenshot" title="Apachebench runtime screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Apachebench isn't very actively developed and is getting kind of old. I'm including it mainly because it is so common out there, being part of the bundled utilities for Apache httpd. Apachebench is fast, but single-threaded. It doesn't support HTTP/2 and there is no scripting capability.&lt;/p&gt;
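
&lt;p&gt;For completeness, a typical Apachebench run looks something like this (a sketch; see &lt;code&gt;ab -h&lt;/code&gt; for the full option list):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 10,000 requests, 100 at a time, with HTTP keep-alive enabled
ab -n 10000 -c 100 -k http://192.168.0.121:8080/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;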

&lt;p&gt;&lt;u&gt;Apachebench summary&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Apachebench is good for simple hammering of a single URL. Its only competitor for that use case would be Hey (which is multi-threaded and supports HTTP/2).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Artillery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3olldWfh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/1a1e029eed7f8ee094f315f6a1788518/04ff3/artillery-run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3olldWfh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/1a1e029eed7f8ee094f315f6a1788518/04ff3/artillery-run.png" alt="Artillery runtime screenshot" title="Artillery runtime screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Artillery is a seriously slow, very resource-hungry and possibly not very actively developed open source load testing tool. Not a very flattering summary I guess, but read on.&lt;/p&gt;

&lt;p&gt;It is written in Javascript, using NodeJS as its engine. The nice thing about building on top of NodeJS is NodeJS-compatibility: Artillery is scriptable in Javascript and can use regular NodeJS libraries, which is something e.g. k6 can't do, despite k6 also being scriptable in regular Javascript. The bad thing about being a NodeJS app, however, is performance: In the 2017 benchmark tests, Artillery proved to be the second-worst performer, after Locust. It was using a ton of CPU and memory to generate pretty unimpressive RPS numbers and response time measurements that were not very accurate at all.&lt;/p&gt;

&lt;p&gt;I'm sad to say that things have not changed much here since 2017. If anything, Artillery seems a bit slower today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In 2017,&lt;/strong&gt; Artillery could generate twice as much traffic as Locust, running on a single CPU core. Today, Artillery can only generate 1/3 of the traffic Locust can produce, when both tools are similarly limited to using a single CPU core. Partly this is because Locust has improved in performance, but the change is bigger than expected so I'm pretty sure Artillery performance has dropped also. Another data point that supports that theory is Artillery vs Tsung. In 2017, Tsung was 10 times faster than Artillery. Today, Tsung is 30 times faster. I believe Tsung hasn't changed in performance at all, which then means Artillery is much slower than it used to be (and it wasn't exactly fast back then either).&lt;/p&gt;

&lt;p&gt;The performance of Artillery is definitely an issue, and an aggravating factor is that open-source Artillery still doesn't have any kind of distributed load generation support so you're stuck with a &lt;em&gt;very&lt;/em&gt; low-performing solution unless you buy the premium SaaS product.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://artillery.io"&gt;artillery.io&lt;/a&gt; site is not very clear on what differences there are between Artillery open source and Artillery Pro, but there appears to be a &lt;a href="https://artillery.io/docs/pro/changelog/"&gt;Changelog&lt;/a&gt; only for Artillery Pro, and looking at the &lt;a href="https://github.com/artilleryio/artillery"&gt;Github repo&lt;/a&gt;, the version number for Artillery open source is 1.6.0 while Pro is at 2.2.0 according to the Changelog. Scanning the commit messages of the open source Artillery, it seems there are mostly bug fixes there, and not too many commits over the course of 2+ years.&lt;/p&gt;

&lt;p&gt;The Artillery team should make a better effort at documenting the differences between Artillery open source and the premium product Artillery Pro, and also write something about their intentions with the open source product. Is it being slowly discontinued? It sure looks that way.&lt;/p&gt;

&lt;p&gt;Another thing to note related to performance is that nowadays Artillery will print "high-cpu" warnings whenever CPU usage goes above 80% (of a single core) and it is recommended to never exceed that amount so as not to "lower performance". I find that if I stay at about 80% CPU usage so as to avoid these warnings, Artillery will produce a lot less traffic - about 1/8 the number of requests per second that Locust can do. If I ignore the warning messages and let Artillery use 100% of one core, it will increase RPS to 1/3 of what Locust can do. But at the cost of a pretty huge measurement error.&lt;/p&gt;

&lt;p&gt;All the performance issues aside, Artillery has some good sides also. It is quite suitable for CI/automation as it is easy to use on the command line, has a simple and concise YAML-based config format, plugins to generate pass/fail results, outputs results in JSON format, etc. And like previously mentioned, it can use regular NodeJS libraries, which offer a huge amount of functionality that is simple to import. But all this is irrelevant to me when a tool performs the way Artillery does. The only situation where I'd even &lt;em&gt;consider&lt;/em&gt; using Artillery would be if my test cases &lt;em&gt;had to&lt;/em&gt; rely on some NodeJS libraries that k6 can't use, but Artillery can.&lt;/p&gt;
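
&lt;p&gt;For the CI/automation use case, a run might look roughly like this (a sketch - &lt;code&gt;test.yaml&lt;/code&gt; is a placeholder config file, and the output flag should be checked against the Artillery docs for your version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# run a YAML-defined test and write the raw results as JSON for later processing
artillery run --output report.json test.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;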

&lt;p&gt;&lt;u&gt;Artillery summary&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Only ever use it if you've already sold your soul to NodeJS (i.e. if you &lt;em&gt;have&lt;/em&gt; to use NodeJS libraries).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Drill&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--26LZKdgC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/4765d0140980e5c1d6740ad2fc8b0753/04ff3/drill-run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--26LZKdgC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/4765d0140980e5c1d6740ad2fc8b0753/04ff3/drill-run.png" alt="Drill runtime screenshot" title="Drill runtime screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Drill is written in &lt;a href="https://www.rust-lang.org"&gt;Rust&lt;/a&gt;. I've avoided Rust because I'm scared I may like it and I don't want anything to come between me and Golang. But that is probably not within the scope of this article. What I meant to write was that Rust is supposed to be fast, so my assumption is that a load testing tool written in Rust would be fast too.&lt;/p&gt;

&lt;p&gt;Running some benchmarks, however, it quickly becomes apparent that this particular tool is &lt;em&gt;incredibly&lt;/em&gt; slow! Maybe I shouldn't have been so quick to include Drill in the review, seeing as it is both quite new and not yet widely used. Maybe it was not meant to be a serious effort at creating a new tool? (given that the author claims that Drill was created because he wanted to learn Rust). On the other hand it does have a lot of useful features, like a pretty powerful YAML-based config file format, thresholds for pass/fail results, etc. so it does &lt;em&gt;look&lt;/em&gt; like a semi-serious effort to me.&lt;/p&gt;

&lt;p&gt;So - the tool seems fairly solid, if simple (no scripting). But when I run it in my test setup it maxes out four CPU cores to produce a mind-bogglingly low ~180 requests/second. I ran many tests, with many different parameters, and that was the best number I could squeeze out of Drill. The CPUs are spending cycles like there is no tomorrow, but there are so few HTTP transactions coming out of this tool that I could probably respond to them using pen and paper. It's like it is mining a Bitcoin between each HTTP request! Compare this to Wrk (written in C), which does over 50,000 RPS in the same environment, and you see what I mean. Drill is not exactly a poster child for the claim "Rust is faster than C".&lt;/p&gt;

&lt;p&gt;What is the point of using a compiled language like Rust if you get no performance out of your app?? You might as well use Python then. Or no, Python-based Locust is much faster than this. If the aim is ~200 RPS on my particular test setup I could probably use Perl! Or, hell, maybe even a shell script??&lt;/p&gt;

&lt;p&gt;I just had to try it. Behold the &lt;a href="https://github.com/ragnarlonn/curl-basher"&gt;curl-basher&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So this Bash script actually gives Drill a run for its money, by executing &lt;code&gt;curl&lt;/code&gt; on the command line multiple times, concurrently. It even counts errors. With &lt;code&gt;curl-basher.sh&lt;/code&gt; I manage to eke out 147 RPS in my test setup (a very stable 147 RPS I have to say) and Drill does 175-176 RPS so it is only 20% faster. This makes me wonder what the Drill code is actually doing to manage to consume so much CPU time. It has for sure set a new bottom record for inefficiency in generating HTTP requests - If you're concerned about global warming, don't use Drill!&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Drill summary&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;For tiny, short-duration load tests it could be worth considering Drill, or if the room is a bit chilly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jmeter&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MiPFfx6h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/19a44620ac7318ecd0b40cb45e074eb3/04ff3/jmeter-run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MiPFfx6h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/19a44620ac7318ecd0b40cb45e074eb3/04ff3/jmeter-run.png" alt="Jmeter runtime screenshot" title="Jmeter runtime screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's the 800-pound gorilla. Jmeter is a huge beast compared to most other tools. It is old and has acquired a larger feature set, more integrations, add-ons, plugins, etc than any other tool in this review. It has been the "king" of open source load testing tools for a long time, and probably still is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In&lt;/strong&gt; the old days, people could choose between paying obscene amounts of money for an HP Loadrunner license, paying substantial amounts of money for a license of some Loadrunner-wannabe proprietary tool, or paying nothing at all to use Jmeter. Well, there was also the option of using Apachebench or maybe OpenSTA or some other best-forgotten free solution, but if you wanted to do serious load testing, Jmeter was really the only usable alternative that didn't cost money.&lt;/p&gt;

&lt;p&gt;So the Jmeter user base grew and grew, and development of Jmeter also grew. Now, 15 or so years later, Jmeter has been actively developed by a large community for longer than any other load testing tool, so it isn't strange that it also has more features than any other tool.&lt;/p&gt;

&lt;p&gt;As it was originally built as an alternative to old, proprietary load testing software from 15-20 years ago, it was designed to cater to the same audience as those applications. I.e. it was designed to be used by load testing experts running complex, large-scale integration load tests that took forever to plan, a long time to execute and a longer time to analyse the results from. Tests that required a lot of manual work and very specific load testing domain knowledge. This means that Jmeter was not, from the start, built for automated testing and developer use, and this can clearly be felt when using it today. It is a tool for professional &lt;em&gt;testers&lt;/em&gt;, not for developers. It is not great for automated testing as its command line use is awkward, default results output options are limited, it uses a lot of resources and it has no &lt;em&gt;real&lt;/em&gt; scripting capability, only some support for inserting logic inside the XML configuration.&lt;/p&gt;

&lt;p&gt;This is probably why Jmeter is losing market share to newer tools like Gatling, which has a lot in common with Jmeter so it offers an attractive upgrade path for organisations that want to use a more modern tool, with better support for scripting and automation, but want to keep their tooling Java-based. Anyway, Jmeter does have some advantages over e.g. Gatling. Primarily it comes down to the size of the ecosystem - all those integrations, plugins etc I mentioned. Performance-wise, they are fairly similar. Jmeter used to be one of the very best performing tools in this review, but has seen its performance drop so now it's about average and pretty close to (perhaps slightly faster than) that of Gatling.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Jmeter summary&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;The biggest reasons to choose Jmeter today, if you're just starting out with load testing, would be if you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;need to test lots of different protocols/apps that only Jmeter has support for, or&lt;/li&gt;
&lt;li&gt;are a Java-centric organisation and want to use the most common Java-based load testing tool out there, or&lt;/li&gt;
&lt;li&gt;want a GUI load testing tool where you point and click to do things&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If none of those are true, I think you're better served by Gatling (which is fairly close to Jmeter in many ways), k6 or Locust. Or, if you don't care so much about programmability/scripting (writing tests as code) you can take a look at Vegeta.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Siege&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oIvJvLlo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/1b52d7b89c5d6a977a076850a4ce05d3/04ff3/siege-run1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oIvJvLlo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/1b52d7b89c5d6a977a076850a4ce05d3/04ff3/siege-run1.png" alt="Siege runtime screenshot" title="Siege runtime screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Siege is a simple tool, similar to e.g. Apachebench in that it has no scripting and is primarily used when you want to hit a single, static URL repeatedly. The biggest feature it has that Apachebench lacks is its ability to read a list of URLs and hit them all during the test. If you don't need this feature, however, my advice would be to just use Apachebench (or, perhaps better, Hey). Siege is bound to give you a headache if you try doing anything even slightly advanced with it - like figuring out how fast the target site is responding when you hit it with traffic, or generating enough traffic to slow down the target system, or something like that. Actually, just running it with the correct config or command-line options, though they're not too many, can feel like some kind of mystery puzzle game.&lt;/p&gt;
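
&lt;p&gt;A basic run against a list of URLs looks roughly like this (a sketch; &lt;code&gt;urls.txt&lt;/code&gt; is a placeholder file with one URL per line):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 50 concurrent simulated users for 30 seconds, hitting URLs read from a file
siege -c 50 -t 30S -f urls.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;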

&lt;p&gt;Siege is unreliable, in more than one way. Firstly, it crashes fairly often. Secondly, it freezes even more often (mainly at exit, can't tell you how many times I've had to &lt;code&gt;kill -9&lt;/code&gt; it). If you try enabling HTTP keep-alive it crashes or freezes 25% of the time. I don't get how HTTP keep-alive can be experimental in such an old tool! HTTP keep-alive itself is very old and part of HTTP/1.1, which was standardized 20 years ago! It is also very, very commonly used in the wild today, and it has a huge performance impact. Almost every HTTP library has support for it. HTTP keep-alive keeps connections open between requests, so the connections can be reused. If you're not able to keep connections open it means that every HTTP request results in a new TCP handshake and a new connection. This may give you misleading response time results (because there is a TCP handshake involved in every single request, and TCP handshakes are slow) and it may also result in TCP port starvation on the target system, which means the test will stop working after a little while because all available TCP ports are in a CLOSE_WAIT state and can't be reused for new connections.&lt;/p&gt;

&lt;p&gt;But hey, you don't have to enable weird, exotic, experimental, bleeding-edge stuff like HTTP keep-alive to make Siege crash. You just have to make it start a thread or two too many and it will crash or hang very quickly. And it is using smoke and mirrors to avoid mentioning that fact - it has a new &lt;code&gt;limit&lt;/code&gt; config directive that sets a cap on the max number you can give to the &lt;code&gt;-c&lt;/code&gt; (concurrency) command-line parameter - the one determining how many threads Siege will start. The value is set to 255 by default, with the motivation that Apache httpd by default can only handle 255 concurrent connections, so using more than that will "make a mess". What a load of suspicious-looking brown stuff in a cattle pasture. It so happens that during my testing, Siege seems to become unstable when you set the concurrency level to somewhere in the range 300-400. Over 500 and it crashes or hangs a lot. More honest would be to write in the docs that "Sorry, we can't seem to create more than X threads or Siege will crash. Working on it".&lt;/p&gt;

&lt;p&gt;Siege's options/parameters make up an inconsistent, unintuitive patchwork and the help sometimes lies to you. I still haven't been able to use the &lt;code&gt;-l&lt;/code&gt; option (supposedly usable to specify a log file location) although the long form &lt;code&gt;--log=x&lt;/code&gt; seems to work as advertised (and do what &lt;code&gt;-l&lt;/code&gt; won't).&lt;/p&gt;

&lt;p&gt;Siege performs on par with Locust now (when Locust is running in distributed mode), which isn't fantastic for a C application. Wrk is 25 times faster than Siege, offers pretty much the same feature set, provides much better measurement accuracy and doesn't crash. Apachebench is also a lot faster, as is Hey. I see very few reasons for using Siege these days.&lt;/p&gt;

&lt;p&gt;The only truly positive thing I can write is that Siege has implemented something quite clever that most tools lack - a command line switch (&lt;code&gt;-C&lt;/code&gt;) that just reads all config data (plus command-line params) and then prints out the full config it would be using when running a load test. This is a very nice feature that more tools should have. Especially when there are multiple ways of configuring things - i.e. command-line options, config files, environment variables - it can be tricky to know exactly what config you're actually using.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Siege summary&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Run fast in any other direction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tsung&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TSENpNkN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/8129bfdea8c12cabb498e5342010ee66/04ff3/tsung-run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TSENpNkN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/8129bfdea8c12cabb498e5342010ee66/04ff3/tsung-run.png" alt="Tsung runtime screenshot" title="Tsung runtime screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tsung is our only Erlang-based tool and it's been around for a while. It seems very stable, with good documentation, is reasonably fast and has a nice feature set that includes support for distributed load generation and being able to test several different protocols.&lt;/p&gt;

&lt;p&gt;It's a very competent tool whose main drawback, in my opinion, is the XML-based config similar to what Jmeter has, and its lack of scriptability. Just like with Jmeter, you can actually define loops and use conditionals and stuff inside the XML config, so in practice you &lt;em&gt;can&lt;/em&gt; script tests, but the user experience is horrible compared to using a real language like you can with e.g. k6 or Locust.&lt;/p&gt;

&lt;p&gt;Tsung is still being developed, but very slowly. All in all I'd say that Tsung is a useful option if you need to test one of the extra protocols it supports (like LDAP, PostgreSQL, MySQL, XMPP/Jabber), where you might only have the choice between Jmeter or Tsung (and of those two, I much prefer Tsung).&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Tsung summary&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Try it if you're an Erlang fan.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;curl-basher&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yeah, well.&lt;/p&gt;




&lt;h2&gt;
  
  
  Chapter 3: Performance review and benchmarks
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Load testing&lt;/strong&gt; can be tricky because it is quite common that you run into some performance issue on the &lt;em&gt;load generation side&lt;/em&gt;, which means you're measuring that system's ability to generate traffic, not the target system's ability to handle it. Even very seasoned load testing professionals regularly fall into this trap.&lt;/p&gt;

&lt;p&gt;If you don't have enough load generation power, you may either see that your load test becomes unable to go above a certain number of requests per second, or you may see that response time measurements become completely unreliable. Usually you'll see both things happening, but you might not know why and mistakenly blame the poor target system for the bad and/or erratic performance you're seeing. Then you might either try to optimize your already-optimized code (because &lt;em&gt;your&lt;/em&gt; code is fast, of course) or you'll yell at some poor coworker who has zero lines of code in the hot paths, but whose code was the only low-performing code you could find in the whole repo. Then the coworker gets resentful and steals your mouse pad to get even, which starts a war in the office, and before you know it the whole company is out of business and you have to go look for a new job at Oracle. What a waste, when all you had to do was make sure your load generation system was up to its task!&lt;/p&gt;

&lt;p&gt;This is why I think it is very interesting to understand how load testing tools perform. I think everyone who uses a load testing tool should have some basic knowledge of its strengths and weaknesses when it comes to performance, and should also occasionally make sure that their load testing setup is able to generate the amount of traffic required to properly load the target system. Plus a healthy margin.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing the testing tools
&lt;/h3&gt;

&lt;p&gt;A good way of testing the testing tools is to not test them on &lt;em&gt;your&lt;/em&gt; code, but on some third-party thing that is sure to be very high-performing. I usually fire up an Nginx server and then I load test by fetching the default "Welcome to Nginx" page. It's important, though, to use a tool like e.g. &lt;code&gt;top&lt;/code&gt; to keep track of Nginx CPU usage while testing. If you see just one process, and see it using close to 100% CPU, it means you could be CPU-bound on the target side. Then you need to reconfigure Nginx to use more worker processes.&lt;/p&gt;
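
&lt;p&gt;As a concrete sketch of that sanity check (assuming a standard Nginx install where the config lives in /etc/nginx/nginx.conf - your paths may differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# watch per-process CPU while the test is running; a single nginx worker pinned
# near 100% means the target, not the load testing tool, may be the bottleneck
top

# let Nginx start one worker process per CPU core by setting
# "worker_processes auto;" in /etc/nginx/nginx.conf, then reload
sudo nginx -s reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;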

&lt;p&gt;&lt;strong&gt;If&lt;/strong&gt; you see multiple Nginx processes but only one is using a lot of CPU, it means your load testing tool is only talking to that particular worker process. Then you need to figure out how to make the tool open multiple TCP connections and issue requests in parallel over them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network delay&lt;/strong&gt; is also important to take into account as it sets an upper limit on the number of requests per second you can push through. If the network roundtrip time is 1 ms between server A (where you run your load testing tool) and server B (where the Nginx server is) and you only use one TCP connection to send requests, the theoretical max you will be able to achieve is 1/0.001 = 1,000 requests per second. In most cases this means that you'll want your load testing tool to use many TCP connections.&lt;/p&gt;

&lt;p&gt;Whenever you hit a limit and can't seem to push through any more requests/second, try to find out which resource you've run out of. Monitor CPU and memory usage on both the load generation and target sides with a tool like &lt;code&gt;top&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If CPU is fine on both sides, experiment with the number of concurrent network connections and see if more will help you increase RPS throughput.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Another&lt;/strong&gt; thing that is easily missed is network &lt;em&gt;bandwidth&lt;/em&gt;. If, say, the Nginx default page requires a transfer of 250 bytes to load, it means that if the servers are connected via a 100 Mbit/s link, the theoretical max RPS rate would be around 100,000,000 divided by 8 (bits per byte) divided by 250 =&amp;gt; 100M/2000 = 50,000 RPS. Though that is a very optimistic calculation - protocol overhead will make the actual number a lot lower so in the case above I would start to get worried bandwidth was an issue if I saw I could push through max 30,000 RPS, or something like that. And of course, if you happen to be loading some bigger resource, like e.g. an image file, this theoretical max RPS number can be a lot lower. If you're suspicious, try making changes to the size of the file you're loading and see if it changes the result. If you double the size and get half the RPS you know you're bandwidth limited.&lt;/p&gt;

&lt;p&gt;Finally, server &lt;em&gt;memory&lt;/em&gt; can be an issue also. Usually, when you run out of memory it will be very noticeable because most things will just stop working while the OS frantically tries to destroy the secondary storage by using it as RAM (i.e. swapping or thrashing).&lt;/p&gt;

&lt;p&gt;After some experimentation you'll know exactly what to do to get the highest RPS number out of your load testing tool, and you'll know what its max traffic generation capacity is on the current hardware. When you know these things you can start testing the &lt;em&gt;real&lt;/em&gt; system that you'd like to test, and be confident that whenever you see e.g. an API end point that can't do more than X requests/second you'll immediately know that it is due to something on the target side of things, not the load generator side.&lt;/p&gt;

&lt;h3&gt;
  
  
  These benchmarks
&lt;/h3&gt;

&lt;p&gt;The above procedure is more or less what I have gone through when testing these tools. I used a small, fanless, 4-core Celeron server running Ubuntu 18.04 with 8GB RAM as the load generator machine. I wanted something that was multi-core but not too powerful. It was important that the target/sink system could handle &lt;em&gt;more&lt;/em&gt; traffic than the load generator was able to generate (or I wouldn't be benchmarking the load generation side - the tools).&lt;/p&gt;

&lt;p&gt;For the target, I used a 4 GHz i7 iMac with 16 GB of RAM. I did use the same machine as my work machine, running some terminal windows on it and having a Google spreadsheet open in a browser, but made sure nothing demanding was happening while tests were running. As this machine has 4 very fast cores with hyperthreading (able to run 8 things in parallel) there should be capacity to spare, but to be on the safe side I have repeated all tests multiple times at different points in time, just to verify that the results are somewhat stable.&lt;/p&gt;

&lt;p&gt;The machines were connected to the same physical LAN switch, via gigabit Ethernet.&lt;/p&gt;

&lt;p&gt;Practical tests showed that the target was powerful enough to test all tools but perhaps one. Wrk managed to push through over 50,000 RPS and that made 8 Nginx workers on the target system consume about 600% CPU. It may be that Nginx couldn't get much more CPU than that (given that 800% usage should be the absolute theoretical max on the 4-core i7 with hyperthreading) but I think it doesn't matter because Wrk is in a class of its own when it comes to traffic generation. We don't really have to find out whether Wrk is 200 times faster than Artillery, or only 150 times faster. The important thing is to show that the target system can handle some very high RPS number that most tools can't achieve, because then we know we actually &lt;em&gt;are&lt;/em&gt; testing the load generation side and not the target system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Raw data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is a spreadsheet with the raw data, plus text comments, from all the tests run. More tests were run than those included in the spreadsheet, though - for instance, I ran a lot of tests to find out the ideal parameters to use for the "Max RPS" tests. The goal was to cram out as many RPS as was inhumanly possible from each tool, and for that some exploratory testing was required. Also, whenever I felt a need to ensure results seemed stable I'd run a set of tests again and compare to what I had recorded. I'm happy to say there was usually very little fluctuation in the results. Once I did have an issue with &lt;em&gt;all&lt;/em&gt; tests suddenly producing performance numbers that were notably lower than they were before. This happened regardless of which tool was being used, and eventually led me to reboot the load generator machine, which resolved the issue.&lt;/p&gt;

&lt;p&gt;The raw data from the tests can be found &lt;a href="https://docs.google.com/spreadsheets/d/1JWfKhmhmSd5Lb1RnlJalRDFeZP6_nHbt-Dv90Dy_nGc/edit#gid=449003697"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I've tried to find out
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Max traffic generation capability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;How many requests per second could each tool generate in this lab setup? Here I tried working with most parameters available, but primarily concurrency (how many threads the tool used, and how many TCP connections) and things like enabling HTTP keep-alive, disabling things the tool did that required lots of CPU (HTML parsing, maybe), etc. The goal was to cram out as many requests per second from each tool as possible, AT ANY COST!!&lt;/p&gt;

&lt;p&gt;The idea is to get some kind of baseline for each tool that shows how efficient the tool is when it comes to raw traffic generation.&lt;/p&gt;
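
&lt;p&gt;As an illustration of what that kind of exploratory tuning can look like (this is my sketch, not the exact procedure used for the benchmark), here is a small Python wrapper that sweeps the concurrency level of Hey and records the Requests/sec figure from its summary output. The target URL is a made-up placeholder, and I'm assuming Hey's usual &lt;code&gt;-z&lt;/code&gt; and &lt;code&gt;-c&lt;/code&gt; flags:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# sweep_hey.py - naive concurrency sweep to find the max RPS point for Hey.
# Assumes the "hey" binary is on PATH and the target can absorb the traffic.
import subprocess

TARGET = "http://192.168.0.10:8080/"   # hypothetical target, adjust as needed

def run_hey(concurrency, duration="10s"):
    out = subprocess.run(
        ["hey", "-z", duration, "-c", str(concurrency), TARGET],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "Requests/sec" in line:
            return float(line.split(":")[1])
    raise RuntimeError("no Requests/sec line found in hey output")

if __name__ == "__main__":
    for c in (10, 20, 50, 100, 200, 400):
        print(f"concurrency {c}: {run_hey(c):.0f} RPS")
&lt;/code&gt;&lt;/pre&gt;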

&lt;p&gt;&lt;strong&gt;Memory usage per VU&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Several of the tools are quite memory-hungry and sometimes memory usage is also dependent on the size of the test, in terms of virtual users (VUs). High memory usage per VU can prevent people from running large-scale tests using the tool, so I think it is an interesting performance metric to measure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory usage per request&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some tools collect lots of statistics throughout the load test. Primarily when HTTP requests are being made it is common to store various transaction time metrics. Depending on exactly what is stored, and how, this can consume large amounts of memory and be a problem for intensive and/or long-running tests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Measurement accuracy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All tools measure and report transaction response times during a load test. There will always be a certain degree of inaccuracy in these measurements - for several reasons - but especially when the load generator itself is doing a bit of work it is common to see quite large amounts of extra delay being added to response time measurements. It is useful to know when you can trust the response time measurements reported by your load testing tool, and when you can't, and I have tried to figure this out for each of the different tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Max traffic generation capability
&lt;/h3&gt;

&lt;p&gt;Here is a chart showing the max RPS numbers I could get out of each tool when I really pulled out all the stops, and their memory usage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9dY48IH8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/dd4cf811c387c64576f402455698ed81/04ff3/RPSvMemory_all.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9dY48IH8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/dd4cf811c387c64576f402455698ed81/04ff3/RPSvMemory_all.png" alt="A chart comparing the maximum traffic generation and the memory usage of the best open source load testing tools" title="A chart comparing the maximum traffic generation and the memory usage of the best open source load testing tools"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pretty obvious is that Wrk has no real competition here. It is a beast when it comes to generating traffic, so if that is all you want - large amounts of HTTP requests - download (and compile) Wrk. You won't be displeased!&lt;/p&gt;

&lt;p&gt;But while being a terrific request generator, Wrk is definitely not perfect for all uses (see review), so it is interesting to see what's up with the other tools. Let's remove Wrk from the chart to get a better scale:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DNVDL5hW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/b4075246f21af514c7a1554d1406b337/04ff3/RPSvMemory.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DNVDL5hW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/b4075246f21af514c7a1554d1406b337/04ff3/RPSvMemory.png" alt="A chart comparing the maximum traffic generation and the memory usage of the best open source load testing tools excepts wrk" title="A chart comparing the maximum traffic generation and the memory usage of the best open source load testing tools excepts wrk"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before discussing these results, I'd like to mention that three tools were run in non-default modes in order to generate the highest possible RPS numbers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Artillery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Artillery was run with a concurrency setting high enough to cause it to use up a full CPU core, which is not recommended by the Artillery developers and results in &lt;a href="https://artillery.io/docs/faq/"&gt;high-cpu warnings&lt;/a&gt; from Artillery. I found that using up a full CPU core increased the request rate substantially, from just over 100 RPS when running the CPU at ~80% to 300 RPS when at 100% CPU usage. The RPS number is still abysmally low, of course, and like the Artillery FAQ says and like we also see in the response time accuracy tests, response time measurements are likely to be pretty much unusable when Artillery is made to use all of one CPU core.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;k6&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;k6 was run with the &lt;code&gt;--compatibility-mode=base&lt;/code&gt; command line option that disables newer Javascript features, stranding you with old ES5 for your scripting. It results in a ~50% reduction in memory usage and a ~10% general speedup which means that the max RPS rate goes up from ~10k to ~11k. Not a huge difference though, and I'd say that unless you have a memory problem it's not worth using this mode when running k6.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Locust&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Locust was run in distributed mode, which means that 5 Locust instances were started: one master instance and four slave instances (one slave for each CPU core). Locust is single-threaded so can't use more than one CPU core, which means that you &lt;em&gt;have&lt;/em&gt; to distribute load generation over multiple processes to fully use all the CPU on a multi-CPU server (they should really integrate the master/slave mode into the app itself so it auto-detects when a machine has multiple CPUs and starts multiple processes by default). If I had run Locust in just one instance it would only have been able to generate ~900 RPS.&lt;/p&gt;

&lt;p&gt;I also used the new FastHttpLocust library for the Locust tests. This library is 3-5 times faster than the old HttpLocust library. However, using it means you lose some functionality that HttpLocust offers but FastHttpLocust doesn't.&lt;/p&gt;
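
&lt;p&gt;For reference, here is roughly what a FastHttpLocust-based test file looked like with the pre-1.0 Locust API that was current when these benchmarks were run - a sketch of my own, not the exact script used in the tests. The commented-out shell lines show the old way of starting one master process plus one slave per CPU core:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# locustfile.py - minimal FastHttpLocust test (Locust 0.13-era, pre-1.0 API).
from locust import TaskSet, task
from locust.contrib.fasthttp import FastHttpLocust

class UserBehaviour(TaskSet):
    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(FastHttpLocust):
    task_set = UserBehaviour
    min_wait = 0    # no think time: generate requests as fast as possible
    max_wait = 0

# Distributed run on a 4-core load generator (pre-1.0 CLI, for illustration):
#   locust -f locustfile.py --master --no-web -c 100 -r 100
#   locust -f locustfile.py --slave    # start one of these per CPU core
&lt;/code&gt;&lt;/pre&gt;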

&lt;h4&gt;
  
  
  What has changed since 2017?
&lt;/h4&gt;

&lt;p&gt;I have to say these results made me a bit confused at first, because I tested most of these tools in 2017, and expected performance to be pretty much the same now. The absolute RPS numbers aren't comparable to my previous tests of course, because I used another test setup then, but I expected the relationships between the tools to stay roughly the same: e.g. I thought Jmeter would still be one of the fastest tools, and I thought Artillery would still be faster than Locust when run on a single CPU core.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jmeter is slower!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Well, as you can see, Jmeter performance seems pretty average now. From my testing it seems Jmeter has dropped in performance by about 50% between version 2.3 and the one I tested now - 5.2.1. It could perhaps be a JVM issue. I tested with OpenJDK 11.0.5 and Oracle Java 13.0.1 and both performed pretty much the same, so it seems unlikely it is due to a slower JVM. I also tried upping the &lt;code&gt;-Xms&lt;/code&gt; and &lt;code&gt;-Xmx&lt;/code&gt; parameters that determine how much memory the JVM can allocate, but that didn't affect performance either.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Artillery is now glacially slow, and Locust is almost decent!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As for Artillery, it also seems to be about 50% slower now than two years ago, which means it is now as slow as Locust was two years ago when I whined endlessly about how slow &lt;em&gt;that&lt;/em&gt; tool was. And Locust? It is the single tool that has substantially &lt;em&gt;improved&lt;/em&gt; performance since 2017. It is now about 3 times faster than it was back then, thanks to its new FastHttpLocust HTTP library. It does mean losing a little functionality offered by the old HttpLocust library (which is based on the very user-friendly Python Requests library), but the performance gain was really good for Locust I think.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Siege is slower!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Siege wasn't a very fast tool two years ago, despite being written in C, but somehow its performance seems to have dropped further between versions 4.0.3 and 4.0.4, so that it is now slower than Python-based Locust when the latter is run in distributed mode and can use all CPU cores on a single machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Drill is very, very slow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Drill is written in Rust, so it should be pretty fast, and it makes good use of all CPU cores, which are kept very busy during the test. No one knows what those cores are doing, however, because Drill only manages to produce an incredibly measly 176 RPS! That is about on par with Artillery, but Artillery only uses one CPU core while Drill uses four! I wanted to see if a shell script could generate as much traffic as Drill. The answer was "yeah, pretty much". You can try it yourself: &lt;a href="https://github.com/ragnarlonn/curl-basher"&gt;curl-basher&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vegeta can finally be benchmarked, and it isn't bad!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vegeta used to offer no way of controlling concurrency, which made it hard to compare against other tools so in 2017 I did not include it in the benchmark tests. Now, though, it has gotten a &lt;code&gt;-max-workers&lt;/code&gt; switch that can be used to limit concurrency and which, together with &lt;code&gt;-rate=0&lt;/code&gt; (unlimited rate) allows me to test it with the same concurrency levels as used for other tools. We can see that Vegeta is quite performant - it both generates lots of traffic and uses little memory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summarizing traffic generation capability
&lt;/h3&gt;

&lt;p&gt;The rest of the tools offer roughly the same performance as they did in 2017.&lt;/p&gt;

&lt;p&gt;I'd say that if you need to generate huge amounts of traffic you might be better served by one of the tools on the left side of the chart, as they are more efficient, but most of the time it is probably more than enough to be able to generate a couple of thousand requests/second and that is something Gatling or Siege can do, or a distributed Locust setup. However, I'd recommend against Artillery or Drill unless you're a masochist or want an extra challenge. It will be tricky to generate enough traffic with those, and also tricky to interpret results (at least from Artillery) when measurements get skewed because you have to use up every ounce of CPU on your load generator(s).&lt;/p&gt;

&lt;h3&gt;
  
  
  Memory usage
&lt;/h3&gt;

&lt;p&gt;What about memory usage then? Let's pull up that chart again:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DNVDL5hW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/b4075246f21af514c7a1554d1406b337/04ff3/RPSvMemory.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DNVDL5hW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/b4075246f21af514c7a1554d1406b337/04ff3/RPSvMemory.png" alt="A chart comparing the maximum traffic generation and the memory usage of the best open source load testing tools excepts wrk" title="A chart comparing the maximum traffic generation and the memory usage of the best open source load testing tools excepts wrk"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The big memory hogs are Tsung, Jmeter, Gatling and Locust. Especially our dear Java apps - Jmeter and Gatling - really enjoy their memory and want lots of it. Locust wouldn't be so bad if it didn't have to run in multiple processes (because it is single-threaded), which consumes more memory. A multithreaded app can share memory between threads, but multiple processes are forced to keep identical copies of a lot of process data.&lt;/p&gt;

&lt;p&gt;These numbers give an indication of how memory-hungry the tools are, but they don't show the whole truth. After all, what is 500 MB today? Hardly any servers come without a couple of GB of RAM, so 500 MB should never be much of an issue. The problem, however, is if memory usage grows when you scale up your tests. Two things tend to make memory usage grow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;long running tests that collect a lot of results data&lt;/li&gt;
&lt;li&gt;ramping up the number of VUs / execution threads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To investigate these things I ran two suites of tests to measure "Memory usage per VU" and "Memory usage per request". I didn't actually try to calculate the exact memory use per VU or request, but ran tests with increasing amounts of requests and VUs, and recorded memory usage.&lt;/p&gt;
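
&lt;p&gt;The recording itself is conceptually simple: sample the resident set size of the tool's process (and any worker processes it spawns) about once a second while the test runs, then average the samples. A minimal sketch of that idea in Python with &lt;code&gt;psutil&lt;/code&gt; - my illustration, not the exact harness used for these numbers:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# sample_rss.py - average resident memory of a load testing tool while it runs.
# Assumes psutil is installed and that this script starts the tool itself.
import statistics
import subprocess
import time
import psutil

def average_rss_mb(cmd, interval=1.0):
    proc = subprocess.Popen(cmd)
    handle = psutil.Process(proc.pid)
    samples = []
    while proc.poll() is None:
        children = handle.children(recursive=True)  # include worker processes
        rss = handle.memory_info().rss + sum(c.memory_info().rss for c in children)
        samples.append(rss / (1024 * 1024))
        time.sleep(interval)
    return statistics.mean(samples)

if __name__ == "__main__":
    # hypothetical example: a 30 second Hey run against a local target
    print(average_rss_mb(["hey", "-z", "30s", "-c", "100", "http://localhost/"]))
&lt;/code&gt;&lt;/pre&gt;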

&lt;h3&gt;
  
  
  Memory usage per VU level
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L8-9D15Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/f257b9c64d50bf6575f6b3f7fc94bed7/04ff3/MemoryPerVU.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L8-9D15Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/f257b9c64d50bf6575f6b3f7fc94bed7/04ff3/MemoryPerVU.png" alt="A chart comparing the memory usage per VU level of the best open source load testing tools" title="A chart comparing the memory usage per VU level of the best open source load testing tools"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we can see what happens as you scale up the number of virtual users (VUs). Note that the numbers shown are &lt;em&gt;average&lt;/em&gt; memory use throughout a very short (10 second) test. Samples have been taken every second during the test, so typically 9-10 samples. This test should really be done with more VUs, maybe going from 1 VU to 200 VUs or something, and have the VUs do less work so you don't accumulate too much results data. Then you'd really see how the tools "scale" when you're trying to simulate more users.&lt;/p&gt;

&lt;p&gt;But we can see some things here. A couple of tools seem unaffected when we change the VU number, which indicates that either they're not using a lot of extra memory per VU, or they're allocating memory in chunks and we haven't configured enough VUs in this test to force them to allocate more memory than they started out with. You can also see that with a tool like e.g. Jmeter it's not unlikely that memory could become a problem as you try to scale up your tests. Tsung and Artillery also look like they may end up using a ton of memory if you try to scale up the VU level substantially from these very low levels.&lt;/p&gt;

&lt;h3&gt;
  
  
  Memory usage per request volume
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Hm72UQ4p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/81daa9835ef6c5c63b53dd1a25ef550d/04ff3/MemoryPerRequest.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Hm72UQ4p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/81daa9835ef6c5c63b53dd1a25ef550d/04ff3/MemoryPerRequest.png" alt="A chart comparing the memory usage per request volume of the best open source load testing tools" title="A chart comparing the memory usage per request volume of the best open source load testing tools"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this test, I ran all the tools with the same concurrency parameters but different test durations. The idea was to make the tools collect lots of results data and see how much memory usage grew over time. The plot shows how much the memory usage of each tool changes when it goes from storing 20k transaction results to 1 million results.&lt;/p&gt;

&lt;p&gt;As we can see, Wrk doesn't really use any memory to speak of. Then again, it doesn't store much results data either, of course. Siege also seems quite frugal with memory, but we failed to test with 1 million transactions because Siege aborted the test before we could reach 1 million. Not totally unexpected, as Siege only sends one request per TCP socket - then it closes the socket and opens a new one for the next request. This starves the system of available local TCP ports. You can probably expect any larger or longer test to fail if you're using Siege.&lt;/p&gt;
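
&lt;p&gt;To put some rough numbers on why one-connection-per-request breaks down, here is a back-of-the-envelope calculation of my own, assuming Linux defaults: an ephemeral port range of 32768-60999 and closed sockets lingering in TIME_WAIT for 60 seconds:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# port_exhaustion.py - rough estimate of the sustainable connection rate when
# every request burns one ephemeral port that then sits in TIME_WAIT.
EPHEMERAL_PORTS = 60999 - 32768 + 1   # default Linux ip_local_port_range
TIME_WAIT_SECONDS = 60                # typical TIME_WAIT duration

max_new_connections_per_second = EPHEMERAL_PORTS / TIME_WAIT_SECONDS
print(f"{EPHEMERAL_PORTS} ports / {TIME_WAIT_SECONDS}s in TIME_WAIT = roughly "
      f"{max_new_connections_per_second:.0f} new connections/second towards a "
      "single target without connection reuse")
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That works out to roughly 470 new connections per second, which is far below the ~3,000 RPS Siege was pushing in these tests - so running out of local ports is exactly what you'd expect.&lt;/p&gt;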

&lt;p&gt;Tsung and Artillery seem to grow their memory usage, but not terribly fast, as the test runs on. k6 and Hey have much steeper curves, and there you could eventually run into trouble for &lt;em&gt;very&lt;/em&gt; long-running tests.&lt;/p&gt;

&lt;p&gt;Again, the huge memory hogs are the Java apps: Jmeter and Gatling. Jmeter goes from 160MB to 660MB when it has executed 1 million requests. And note that this is &lt;em&gt;average&lt;/em&gt; memory usage throughout the whole test. The actual memory usage at the end of the test might be twice that. Of course, it may be that the JVM is just not garbage collecting at all until it feels it is necessary - not sure how that works. If that's the case, however, it would be interesting to see what happens to performance if the JVM actually has to do some pretty big garbage collection at some point during the test. Something for someone to investigate further.&lt;/p&gt;

&lt;p&gt;Oh and Drill got excluded from these tests. It just took way too much time to generate 1 million transactions using Drill. My kids would grow up while the test was running.&lt;/p&gt;

&lt;h3&gt;
  
  
  Measurement accuracy
&lt;/h3&gt;

&lt;p&gt;Sometimes, when you run a load test and expose the target system to lots of traffic, the target system will start to generate errors. Transactions will fail, and the service the target system was supposed to provide will not be available anymore, to some (or all) users.&lt;/p&gt;

&lt;p&gt;However, this is usually not what happens &lt;em&gt;first&lt;/em&gt;. The first bad thing that tends to happen when a system is put under heavy load, is that it slows down. Or, perhaps more accurately, things get queued and &lt;em&gt;service to the users&lt;/em&gt; gets slowed down. The transactions will not complete as fast as before. This generally results in a worse user experience, even if the service is still operational. In cases when this performance degradation is small, users will be slightly less happy with the service, which means more users bounce, churn or just don't use the services offered. In cases where performance degradation is severe, the effects can be a more or less total loss of revenue for e.g. an e-commerce site.&lt;/p&gt;

&lt;p&gt;This means that it is very interesting to measure transaction response times. You want to make sure they're within acceptable limits at the expected traffic levels, and keep track of them so they don't regress as new code is added to your system.&lt;/p&gt;

&lt;p&gt;All load testing tools &lt;em&gt;try&lt;/em&gt; to measure transaction response times during a load test, and provide you with statistics about them. However, there will always be a measurement error, usually in the form of an addition to the actual response time a real client would experience. Or, put another way, the load testing tool will generally report worse response times than what a real client would see.&lt;/p&gt;

&lt;p&gt;Exactly how large this error is, varies. It varies depending on resource utilisation on the load generator side - e.g. if your load generator machine is using 100% of its CPU you can bet that the response time measurements will be pretty wonky. But it also varies quite a lot between tools - one tool may exhibit much lower measurement errors overall, than another tool.&lt;/p&gt;

&lt;p&gt;As a user I'd like the error to be as small as possible because if it is big it may mask the response time regressions that I'm looking for, making them harder to find. Also, it may mislead me into thinking my system isn't responding fast enough to satisfy my users.&lt;/p&gt;

&lt;p&gt;Here is a chart showing reported response times at various VU/concurrency levels, for the different tools:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pvwzA4pC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/6e82a92042396c78ba8844c7d42c98cc/04ff3/RTTperVU1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pvwzA4pC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/6e82a92042396c78ba8844c7d42c98cc/04ff3/RTTperVU1.png" alt="A chart comparing the response time per VU level of the best open source load testing tools" title="A chart comparing the response time per VU level of the best open source load testing tools"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see, one tool with a military-sounding name increases the scale of the chart by so much that it gets hard to compare the rest of the tools. So I'll remove the offender, having already slammed it thoroughly elsewhere in this article. Now we get:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WqOTN_D_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/b7dbff996664b6bd087a75a761781286/04ff3/RTTperVU2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WqOTN_D_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/b7dbff996664b6bd087a75a761781286/04ff3/RTTperVU2.png" alt="A chart comparing the response time per VU level of the best open source load testing tools except Artillery" title="A chart comparing the response time per VU level of the best open source load testing tools except Artillery"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OK, that's a bit better. So first maybe some info about what this test does. We run each tool at a set concurrency level, generating requests as fast as possible. I.e. no delay in between requests. The request rate varies - from 150 RPS to 45,000 RPS depending on which tool and which concurrency level.&lt;/p&gt;

&lt;p&gt;If we start by looking at the most boring tool first - Wrk - we see that its MEDIAN (all these response times are &lt;em&gt;medians&lt;/em&gt;, or 50th percentile) response time goes from ~0.25ms to 1.79ms as we increase the VU level from 10 to 100. This means that at a concurrency level of 100 (100 concurrent connections making requests) and 45,000 RPS (which was what Wrk achieved in this test) the real server response time is below 1.79 ms. So anything a tool reports, at this level, that is above 1.79 ms is pretty sure to be delay added by the load testing tool itself, not the target system.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Why &lt;strong&gt;median&lt;/strong&gt; response times?&lt;/em&gt;, you may ask. Why not a higher percentile, which is often more interesting? It's simply because the median is the only metric (apart from "max response time") that I can get out of &lt;strong&gt;all&lt;/strong&gt; the tools. One tool may report 90th and 95th percentiles, while another reports 75th and 99th. Not even the mean (average) response time is reported by all tools (I know it's an awful metric, but it is a very common one).&lt;/p&gt;

&lt;p&gt;The tools in the middle of the field here report 7-8 ms median response times at the 100 VU level, which is ~5-6 ms above the 1.79 ms reported by Wrk. This makes it reasonable to assume that the average tool adds about 5 ms to the reported response time at this concurrency level. Of course, some tools (e.g. Apachebench or Hey) manage to generate a truckload of HTTP requests while still not adding much to the response time. Others - like Artillery - only manage to generate very small amounts of HTTP traffic but still add very large measurement errors while doing so. And remember that the server side here is likely more or less &lt;em&gt;always&lt;/em&gt; able to give a median response time of less than 1.79 ms.&lt;/p&gt;
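
&lt;p&gt;Expressed as simple arithmetic, the estimated error is just the tool's reported median minus the best available estimate of the true server-side median - the 1.79 ms that Wrk saw at this concurrency level. A tiny illustration in Python, using a made-up 7.5 ms figure to represent the middle of the field:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# measurement_error.py - estimate the latency a tool adds to its own readings.
BASELINE_MEDIAN_MS = 1.79   # best-case median at 100 VUs, as measured by Wrk

def added_error_ms(reported_median_ms):
    return reported_median_ms - BASELINE_MEDIAN_MS

for tool, reported in [("mid-field tool", 7.5), ("Wrk", 1.79)]:
    print(f"{tool}: roughly {added_error_ms(reported):.1f} ms added by the tool")
&lt;/code&gt;&lt;/pre&gt;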

&lt;p&gt;Let's look at response times vs requests/second (RPS) each tool generates, as this gives an idea about how much work the tool can perform and still provide reliable measurements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Da78wSDE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/65e53cf9bda572da509dcc05bff608e3/04ff3/RTTvsRPS1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Da78wSDE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/65e53cf9bda572da509dcc05bff608e3/04ff3/RTTvsRPS1.png" alt="A chart comparing the response time and request rate of the best open source load testing tools" title="A chart comparing the response time and request rate of the best open source load testing tools"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Again, Artillery is way, way behind the rest, showing a huge measurement error of roughly +150 ms while only being able to put out less than 300 requests per second. Compare that with Wrk, which outputs 150 times as much traffic while producing 1/100th of the measurement error and you'll see how big the performance difference really is between the best and the worst performing tool.&lt;/p&gt;

&lt;p&gt;I know Artillery people will say "But this is just because he used up all the CPU, despite Artillery printing high-CPU warnings". Well, I also ran a test where I slowed down Artillery so those warnings never appeared. I still used 100 concurrent visitors/users, but they each ran scripts with built-in sleeps that meant CPU usage was kept at around 80% and no warnings were printed. The RPS rate ended up being a lot worse, of course - it was 63 RPS. The response time measurement? 43.4 ms. More than +40 ms error.&lt;/p&gt;

&lt;p&gt;So even when Artillery is being run "correctly" and producing an astonishing 63 RPS it still adds a measurement error that is 20 times bigger than that which Wrk adds, when Wrk is producing close to 1,000 times as much traffic. I haven't tested it, but I wouldn't be surprised if &lt;a href="https://github.com/ragnarlonn/curl-basher"&gt;curl-basher&lt;/a&gt; did better than Artillery in this category.&lt;/p&gt;

&lt;p&gt;Let's remove Artillery from the chart again:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F0V1k3Hr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/a1726508b96a5e208dc2bc2798c16c5d/04ff3/RTTvsRPS2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F0V1k3Hr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://k6.io/blog/static/a1726508b96a5e208dc2bc2798c16c5d/04ff3/RTTvsRPS2.png" alt="A chart comparing the response time and request rate of the best open source load testing tools except Artillery" title="A chart comparing the response time and request rate of the best open source load testing tools except Artillery"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's interesting to see the four tools that have the highest measurement errors (excluding Artillery) perform quite similarly here: Siege, Gatling, Jmeter and Locust. They all do just under 3,000 RPS on my setup when simulating 100 VU, and they all seem to add similar amounts of measurement error: between 20 and 30 ms.&lt;/p&gt;

&lt;p&gt;Jmeter used to be one of the more performant tools in these benchmarks, but it seems it has gotten a lot less so over the last 2-3 years. Siege has also sunk quite a bit, and its performance now doesn't really give a hint that it's a tool written in C. Instead, Python-based Locust has sailed up and placed itself next to these other tools, being equally good at generating traffic, if not quite as good at measuring correctly.&lt;/p&gt;

&lt;p&gt;Tsung impresses again. While being an old and not so actively maintained tool, its load generation capabilities are quite decent and the measurements are second to none but Wrk.&lt;/p&gt;

&lt;p&gt;Drill is just weird.&lt;/p&gt;

&lt;p&gt;Vegeta, Apachebench, k6 and Hey all seem to be quite good at generating traffic while keeping the measurement error reasonably low. Bias warning here again, but it makes me happy to see k6 end up smack in the middle in all these benchmarks, given that it is executing sophisticated script logic while the tools that outperform it don't.&lt;/p&gt;

&lt;h2&gt;
  
  
  End summary
&lt;/h2&gt;

&lt;p&gt;k6 rulez!&lt;/p&gt;

&lt;p&gt;Or, uh, well it does, but most of these tools have something going for them. They are simply good in different situations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I don't really like &lt;strong&gt;Gatling&lt;/strong&gt;, but understand why others like it in the "I need a more modern Jmeter" use case.&lt;/li&gt;
&lt;li&gt;I like &lt;strong&gt;Hey&lt;/strong&gt; in the "I need a simple command-line tool to hit a single URL with some traffic" use case.&lt;/li&gt;
&lt;li&gt;I like &lt;strong&gt;Vegeta&lt;/strong&gt; in the "I need a more advanced command-line tool to hit some URLs with traffic" use case.&lt;/li&gt;
&lt;li&gt;I don't like &lt;strong&gt;Jmeter&lt;/strong&gt; much at all, but guess non-developers may like it in the "We really want a Java-based tool/GUI tool that can do everything" use case.&lt;/li&gt;
&lt;li&gt;I like &lt;strong&gt;k6&lt;/strong&gt; (obviously) in the "automated testing for developers" use case.&lt;/li&gt;
&lt;li&gt;I like &lt;strong&gt;Locust&lt;/strong&gt; in the "I'd really like to write my test cases in Python" use case.&lt;/li&gt;
&lt;li&gt;I like &lt;strong&gt;Wrk&lt;/strong&gt; in the "just swamp the server with tons of requests already!" use case.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then we have a couple of tools that seem best avoided:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Siege&lt;/strong&gt; is just old, strange and unstable and the project seems almost dead.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Artillery&lt;/strong&gt; is super-slow, measures incorrectly and the open source version doesn't seem to be moving forward much.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Drill&lt;/strong&gt; is accelerating global warming.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Good luck!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>testing</category>
      <category>performance</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Practical DevOps #3: Shifting Left</title>
      <dc:creator>Simme</dc:creator>
      <pubDate>Tue, 14 Jan 2020 22:01:00 +0000</pubDate>
      <link>https://dev.to/simme/practical-devops-3-shifting-left-2nfd</link>
      <guid>https://dev.to/simme/practical-devops-3-shifting-left-2nfd</guid>
      <description>&lt;p&gt;One topic I always keep coming back to when it comes to DevOps is shift-left testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is shift left testing?
&lt;/h2&gt;

&lt;p&gt;The term was initially coined by Larry Smith in 2001 &lt;a href="https://web.archive.org/web/20140810171940/http://collaboration.cmc.ec.gc.ca/science/rpn/biblio/ddj/Website/articles/DDJ/2001/0109/0109e/0109e.htm"&gt;in an article in Dr. Dobbs Journal&lt;/a&gt; and refers to how we, by testing as early as possible, may deliver both faster and with higher quality.&lt;/p&gt;

&lt;p&gt;What if we, instead of postponing testing until the sprint is over and we’ve delivered a new increment to QA, were to test all the time - together?&lt;/p&gt;




&lt;h2&gt;
  
  
  Tests are expensive
&lt;/h2&gt;

&lt;p&gt;Many a manager has expressed their dissatisfaction with engineers spending too much time working on test automation. The usual reasoning is that test automation is expensive, and that Devs shouldn’t be doing the job of the QA team. The development team themselves might also reject test automation as too time-consuming, arguing that their time is better spent developing new features.&lt;/p&gt;

&lt;p&gt;And they’re not entirely wrong. Test automation &lt;strong&gt;IS&lt;/strong&gt; expensive. However, it’s not nearly as expensive as manual testing. There are a couple of reasons why this usually holds:&lt;/p&gt;

&lt;h3&gt;
  
  
  Lost context
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cYfVatTW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://simme.dev/images/forgot.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cYfVatTW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://simme.dev/images/forgot.gif" alt="forgot" width="500" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The developer(s) who implemented the feature has moved on to other tickets, which eliminates the possible benefit of still being “in that context” mentally as the testing is performed.&lt;/p&gt;

&lt;p&gt;This, in turn, means that they’ll have to stop whatever they’re doing and brush up on the details any time the one performing the manual testing requires assistance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Limiting flow
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lzXw4bSc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://simme.dev/images/this-is-fine.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lzXw4bSc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://simme.dev/images/this-is-fine.gif" alt="this is fine" width="436" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building a feature, just to put it in the &lt;code&gt;ready for testing&lt;/code&gt; stack until it gets tested, is no way to deliver features. And what if testing is postponed? Creating a fix might very well be a week or two away, as sprint scopes seldom change once the sprint has started. The same goes for actually retesting the fix.&lt;/p&gt;

&lt;p&gt;You probably wouldn’t put up with a burger joint where the kitchen waits for the cashier to tell them that the burger’s still raw on one side. They could just flip it an additional time as they cook it, to make sure it’s done before they pass it on. Your features are no different.&lt;/p&gt;

&lt;h3&gt;
  
  
  Assurance decay
&lt;/h3&gt;

&lt;p&gt;The only thing we’ll be able to guarantee is that it worked &lt;em&gt;at the exact time of testing&lt;/em&gt;. After that, all bets are off. One could argue that the assurance given by manual tests decays, or rots, with time. Every time we want to increase the assurance again, we’ll have to execute the tests again. Automated tests on the other hand, while sometimes requiring some additional development effort, may be executed on every commit or push, without any additional effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to test
&lt;/h2&gt;

&lt;p&gt;So, when should we test? Only during the development phase of each feature or ticket?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v5uwwMHP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://simme.dev/images/when-to-test.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v5uwwMHP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://simme.dev/images/when-to-test.png" alt="when to test" width="880" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’d like to argue that we should test all the time. From doing whiteboard simulations together with stakeholders or while designing our feature, all the way to chaos engineering experiments in production.&lt;/p&gt;

&lt;p&gt;With that said, it’s still important that we aim at testing something as early as possible and reasonable. Otherwise, we’ll risk passing the defects on to the next step in our development process, making the process of fixing them a lot more expensive.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jRCYujPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://simme.dev/images/cost-of-defects.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jRCYujPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://simme.dev/images/cost-of-defects.png" alt="when to test" width="880" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Every tool has its purpose
&lt;/h2&gt;

&lt;p&gt;It might sound like I’m proposing that we get rid of manual testing completely. That’s not the case. There’s still an extremely valid use case for manual testing: exploratory testing.&lt;/p&gt;

&lt;p&gt;While test automation is great for making sure that what we know, or assume, is indeed still valid, it’s very ill-suited for exploring new features, finding unintended ways in which they might break.&lt;/p&gt;

&lt;p&gt;So, if you ever find yourself writing a test script (as in a written instruction, not a shell script or JavaScript), immediately stop what you’re doing, take a step back and get started on automating it instead!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>agile</category>
      <category>testing</category>
      <category>cost</category>
    </item>
    <item>
      <title>Dealing with rejection as a Speaker</title>
      <dc:creator>Simme</dc:creator>
      <pubDate>Wed, 25 Dec 2019 19:43:34 +0000</pubDate>
      <link>https://dev.to/simme/dealing-with-rejection-as-a-speaker-4pfn</link>
      <guid>https://dev.to/simme/dealing-with-rejection-as-a-speaker-4pfn</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsimme.dev%2Fimages%2Frejection.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsimme.dev%2Fimages%2Frejection.jpeg" alt="feeling of rejection"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During the last 12 months, I’ve been doing &lt;strong&gt;a lot&lt;/strong&gt; of public speaking, both as an instructor and as a conference/meetup speaker. If I count, I end up at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;6 two-day classes&lt;/li&gt;
&lt;li&gt;7 conference or meetup talks&lt;/li&gt;
&lt;li&gt;1 podcast episode (in Swedish, available &lt;a href="https://kodsnack.se/335/" rel="noopener noreferrer"&gt;here&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While I’m truly grateful for these opportunities, they have also meant a lot of really hard work and a whole bunch of rejections - probably about 9 or 10 rejections for every accept. According to others, the typical acceptance rate might even be as low as 10%. I’ve been rejected by more than 20 conferences this year.&lt;/p&gt;

&lt;p&gt;This, of course, takes a toll on your feeling of self-worth. As humans, we often tend to focus on these “negative” experiences over the positive ones.&lt;/p&gt;

&lt;p&gt;So, how do we deal with that? Is there any way to make sure that these experiences don’t weigh us down to the point of giving up?&lt;/p&gt;

&lt;h2&gt;
  
  
  Your worth is not defined in comparison to others
&lt;/h2&gt;

&lt;p&gt;When applying to conferences, your CFP submission is usually compared to a bunch of other submissions. Sometimes hundreds of them! It’s easy to get the feeling that the program committee is comparing &lt;em&gt;you&lt;/em&gt; to these other speakers.&lt;/p&gt;

&lt;p&gt;That is not the case! A lot of the time, the program committee won’t even know who submitted what proposal until the final pick, sometimes not even then.&lt;/p&gt;

&lt;h2&gt;
  
  
  There are always more reasons than one
&lt;/h2&gt;

&lt;p&gt;Sure, your proposal might very well be poorly written or uninteresting. Most of the time, that’s likely not the case. Consider aspects like conference focus topics, bad fit, lack of additional speaker slots, overlapping CFP submissions or budget constraints. Those are just as likely reasons as your proposal being bad!&lt;/p&gt;

&lt;h2&gt;
  
  
  Every rejection is a learning opportunity
&lt;/h2&gt;

&lt;p&gt;What could we have done differently? Consider following up on the &lt;em&gt;actual&lt;/em&gt; speaker lineup and see what you can learn from their titles and abstracts to improve your chances next time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ask the committee about constructive feedback
&lt;/h2&gt;

&lt;p&gt;Depending on what vibe I get from the organizers, I usually email them back, thank them for their response, and ask whether they’d be willing to give some additional feedback as to why my proposal was rejected and what I could have done differently.&lt;/p&gt;

&lt;p&gt;This might not always be feasible depending on the availability of committee spokespersons and whether they’ve enough time on their hands to actually formulate some feedback. In case you don’t get a reply: remember that most organizers are working pro bono and out of an interest in contributing to their community.&lt;/p&gt;

&lt;h2&gt;
  
  
  Focus on your accomplishments
&lt;/h2&gt;

&lt;p&gt;I write lists. Actually, I write a lot of lists. At least once a year, usually quarterly, I try to sit down and write a list of what I’ve accomplished during that year. These lists contain a whole lot of different things, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Books I’ve read.&lt;/li&gt;
&lt;li&gt;Conferences or meetups I’ve attended or spoken at.&lt;/li&gt;
&lt;li&gt;Articles I’ve written.&lt;/li&gt;
&lt;li&gt;Contacts I’ve established.&lt;/li&gt;
&lt;li&gt;Classes I’ve taught and people I’ve helped in some way.&lt;/li&gt;
&lt;li&gt;Non-techy accomplishments or decisions I’m especially proud of.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;You are not defined or valued by the amount of CFP acceptances you receive, nor the rejections.&lt;/li&gt;
&lt;li&gt;See each rejection as a learning opportunity and as such, take some time to reflect over what you could improve next time.&lt;/li&gt;
&lt;li&gt;The program committee did not reject you, only your proposal.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>techtalks</category>
      <category>speaking</category>
      <category>conference</category>
      <category>talks</category>
    </item>
  </channel>
</rss>
