
jess unrein

Different types of testing explained

In standup the other day, my team's DBA was talking about running smoke tests for his most recent project. I've heard people talk about smoke tests before, but for some reason it never really clicked that I have no idea what a smoke test is. How is it different than a unit test? An integration test? A regression test?

It feels a little embarrassing at this point that I can't articulate the difference between these things, so I decided to do a little research and write up an explainer so that I can reference it in the future and not feel like an ignorant dingus. I figured, since I've been working as a dev for almost 5 years and had this question, there are probably others out there who are similarly too shy to ask.

After reading a bunch of different blog posts, Stack Overflow questions, and random resources, I've constructed a Frankenstein approximation of a consensus for several different categories of tests. I think there are three good questions to ask to understand different kinds of testing:

1.) What kind of thing do they test?
2.) When are these tests written and run?
3.) What information does a test failure provide?

Different people have different definitions, and a single test suite might include multiple types of tests. For example, you might combine integration tests and regression tests into a single suite. That's fine. There are grey areas, and teams have a habit of developing their own, team-specific vocabulary. You don't need to have a comprehensive suite for each of these categories. You should test at the level that makes sense for:

  • the complexity of your app
  • the amount of traffic your app sees
  • the size of your team

If you think I've radically mischaracterized or omitted something important, especially if you work in testing, please let me know in the comments!

Unit tests

What do they test?

Unit tests evaluate that each atomic unit of code performs the way it's supposed to. Ideally, when you're planning and writing unit tests, you should isolate functionality that can't be broken down any further, and then test that.

Unit tests should not test external dependencies or interactions. You should definitely mock out API calls. Unit test purists would also have you mock out database calls and only ensure that your code operates correctly given correct inputs from outside sources. Depending on your existing codebase or your manager's preferences, this might not be possible. If you aren't able to exclude database functionality from your unit test suite, make sure you are mindful of performance and look for potential optimizations. I can tell you from experience that long-running unit test suites are extremely unpleasant and slow down development significantly.
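
For a concrete illustration, here's a minimal sketch of a unit test with a mocked API call, using Python and pytest. The `get_user_display_name` function and the `myapp.users.api_client` module are hypothetical stand-ins for your own code, not anything from a real library.

```python
from unittest import mock

import pytest

# Hypothetical function under test: it builds a display name from whatever
# the (also hypothetical) api_client.fetch_user call returns.
from myapp.users import get_user_display_name


def test_get_user_display_name_formats_first_and_last_name():
    fake_user = {"first_name": "Ada", "last_name": "Lovelace"}

    # Mock out the external API call so the test exercises only our logic.
    with mock.patch("myapp.users.api_client.fetch_user", return_value=fake_user):
        assert get_user_display_name(user_id=1) == "Ada Lovelace"


def test_get_user_display_name_raises_on_unknown_user():
    # The unit should translate a missing user into a clear error.
    with mock.patch("myapp.users.api_client.fetch_user", return_value=None):
        with pytest.raises(ValueError):
            get_user_display_name(user_id=999)
```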

When do I run them?

You should write and run unit tests in parallel with your code. When people refer to Test Driven Development, they're generally talking about unit tests, using the tests as the spec for what your code should accomplish.

What happens when they fail?

A failing unit test lets you know that a specific piece of code is busted. If you've broken it down far enough, your failure should zoom in on the exact piece of code that isn't working as intended.

Failures should help you identify and fix problems quickly, and let you know when your specs need to be updated. They're probably a good guide for when to update your code documentation as well.

Integration tests

What do they test?

Integration tests check the interaction between two or more atomic units of code. Your application is composed of individual units that perform specific small functions, and each of those small functions might work in isolation but break when you knit them together.

Integration tests also test the integration of your code with outside dependencies, like database connections or third party APIs.
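
As a sketch of what that might look like (the `myapp.orders` and `myapp.db` modules here are hypothetical), an integration test exercises your service code and a real test database together instead of mocking the persistence layer away:

```python
import pytest

# Hypothetical application code: service functions plus a helper that
# opens a session against a throwaway test database.
from myapp.db import make_test_session
from myapp.orders import create_order, get_order


@pytest.fixture
def db_session():
    # Roll everything back after each test so runs stay independent.
    session = make_test_session()
    yield session
    session.rollback()


def test_create_order_persists_and_can_be_read_back(db_session):
    # Exercise two units together: the order service *and* the database
    # layer, rather than mocking persistence out like a unit test would.
    order_id = create_order(db_session, customer_id=42, items=["widget"])

    saved = get_order(db_session, order_id)
    assert saved.customer_id == 42
    assert saved.items == ["widget"]
```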

When do I run them?

Integration tests should be the next step after unit tests.

What happens when they fail?

When an integration test fails, it tells you that two or more core functions of your application aren't working together. These might be two modules you've written that clash in some complicated business logic, or a failure resulting from a third party API changing the structure of their response. It might alert you to bad error handling in the case of a database connection failure.

Failures might be easy to identify, or they might require some manual validation and experimentation to track down. Difficult-to-solve integration test failures are an indication of where you can improve your logging and error handling.

Regression testing

What do they test?

Regression tests check a set of scenarios that worked in the past and should be relatively stable.

When do I run them?

You should run your regression tests after your integration tests pass. Do not add your new feature to the regression test suite until existing regression tests pass.

What happens when they fail?

A regression test failure means that new functionality has broken some existing functionality, causing a regression.

The failure should let you know what old capabilities are broken, and indicate that you need to write additional integration tests between your new feature and the old, broken feature.

A regression test failure might also indicate that you have inadvertently reintroduced a bug that you fixed in the past.
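
One common convention (sketched below with a made-up function and issue number) is to pin a regression test to the bug it guards against, so a future failure points straight back at the old problem:

```python
# Hypothetical example: issue #123 was a bug where a discount was applied
# twice whenever an order was paid with a gift card. Keeping this test in
# the regression suite stops that bug from quietly coming back.
from myapp.pricing import calculate_total


def test_regression_issue_123_gift_card_discount_applied_once():
    total = calculate_total(
        subtotal=100.00,
        discount_percent=10,
        paid_with_gift_card=True,
    )
    # Before the fix this came back as 81.00 (discount applied twice).
    assert total == 90.00
```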

Smoke testing

What do they test?

Smoke tests are a high level, tightly curated set of automated tests that live somewhere in the space between integration and regression tests. They're there as a sanity check that your site's core functionality isn't wrecked.

The term smoke test seems to be a holdover from plumbing. If you could see smoke or steam coming out of a pipe, it was leaky and needed to be fixed.

When do I run them?

Smoke tests should exercise your whole system together, ensuring that core functionality remains intact. They shouldn't be comprehensive; they cover your significant, big-picture, no-go failures. You should run them early and often, ideally daily, in both staging and production environments.
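
As a rough sketch (the base URL and paths here are made up), a smoke suite might just hit a handful of core endpoints and check that they respond at all:

```python
import os

import pytest
import requests

# Point the suite at staging or production with an environment variable,
# e.g. SMOKE_BASE_URL=https://staging.example.com
BASE_URL = os.environ.get("SMOKE_BASE_URL", "https://staging.example.com")

# A tightly curated list of core pages -- deliberately not comprehensive.
CORE_PATHS = ["/", "/login", "/api/health", "/checkout"]


@pytest.mark.parametrize("path", CORE_PATHS)
def test_core_endpoint_responds(path):
    response = requests.get(f"{BASE_URL}{path}", timeout=10)
    # We only care that the site is fundamentally up, not that every
    # detail of the response body is correct.
    assert response.status_code == 200
```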

What happens when they fail?

If a smoke test fails, there's a significant problem with your site's functionality. You should not deploy the new changes until these failures are addressed. If they fail in production, fixing them should be a very high priority.

Acceptance testing

(I've also heard this called QA/BV/Manual testing, etc.)

What do they test?

Acceptance testing is usually a set of manual tests performed after the end-to-end development is finished. They check to make sure that the feature as written actually meets all of the initial specifications, or acceptance criteria.

What happens when they fail?

Looks like you missed a bit of functionality when writing your code. You'll need to go back to development and fix that. :(

If acceptance tests fail, you probably need to decide on acceptance criteria earlier in your planning process next time.

When do I run them?

Since these are manual tests, not tests run as code, the timing is a little different. You and your project owner should draft a set of acceptance criteria before work begins on a project. Any additional scope that's discovered or added to the project should be reflected in the acceptance criteria.

Acceptance tests should happen fairly quickly after development is complete so that you can go back and iterate quickly if something isn't quite right. It makes sense to do these right after unit or integration testing, so you haven't gone too far into the testing process before finding out that significant changes need to be made.

Performance testing

What do they test?
Performance tests check stability, scalability, and usability of your product and infrastructure. You might check things like number of errors per second or how long it takes to load a page. There isn't necessarily pass/fail criteria associated with a performance test. This stage is more about data gathering and looking for areas of improvement.

What happens when they fail?
Performance tests don't exactly fail in the same way that a unit test suite would fail. Instead you collect a set of benchmarks and assess them against where you want those numbers to be. If your performance test fails, it might tell you that you need to pay more attention to infrastructure scaling, database query time, etc.

When do I run them?
Performance tests are a good idea after major releases and refactors.
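
As a very rough sketch of the data-gathering side of this (the URL is made up, and in practice teams usually lean on dedicated tooling or an APM), you might time a batch of requests and look at the distribution rather than a single pass/fail assertion:

```python
import statistics
import time

import requests

URL = "https://staging.example.com/search?q=widgets"  # hypothetical endpoint
SAMPLES = 50

durations = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    durations.append(time.perf_counter() - start)

durations.sort()
print(f"median: {statistics.median(durations) * 1000:.0f} ms")
print(f"p95:    {durations[int(SAMPLES * 0.95) - 1] * 1000:.0f} ms")
# Compare these numbers against your previous baseline; a jump after a
# major release or refactor is a signal to dig into queries, caching, etc.
```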

Load testing

What do they test?
Load testing is a kind of specialized performance test that specifically checks how your product performs under significant stress over a predetermined period of time.

What happens when they fail?
Load tests assess how prepared you are for a significant increase in traffic. If a load test fails, it doesn't mean that your site is broken, but it does mean that you aren't prepared for a viral hit or a DDoS attack. This is probably not a big deal for small products just starting out, but failure should be a concern as your userbase starts to scale.

When do I run them?
Load tests should not be your first concern right out of the gate, but as your product becomes bigger and more established, you should probably run load tests on new features to see whether they affect the overall performance of the site and whether they can be optimized.
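
If you do get to that point, a sketch with a load testing tool like Locust might look like this (the endpoints are hypothetical); you then ramp up however many simulated users you want to throw at a staging environment:

```python
# Minimal Locust sketch. Run with something like:
#   locust -f locustfile.py --host https://staging.example.com
from locust import HttpUser, task, between


class ShopperUser(HttpUser):
    # Each simulated user waits 1-5 seconds between actions.
    wait_time = between(1, 5)

    @task(3)
    def browse_products(self):
        # Weight 3 makes browsing the most common action.
        self.client.get("/products")

    @task(1)
    def view_cart(self):
        self.client.get("/cart")
```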

I can no longer say I don't know what a smoke test is, and hopefully you learned something along the way too! As I mentioned above, I am not a tester, so if you notice something I've missed or misinterpreted, let me know in the comments!

:)

Top comments (39)

Fen Slattery

I'm so happy that you included acceptance testing! As a front end engineer and accessibility specialist who works in consulting, that's the main kind of testing I worry about.

jess unrein

I had no idea that the umbrella term "acceptance testing" covered so much from spec validation to accessibility and security. Do you mind if I update the post and credit you for providing additional domain knowledge here?

Fen Slattery

Yeah, no prob!

jess unrein

I know that there are automated tools for accessibility testing but I really don't know much about them. Do you have any that you go to in particular, or is most of your accessibility acceptance testing a manual process?

Shi Ling

I use Google Lighthouse to do accessibility auditing on the frontend.

Accessibility acceptance testing is something I'm working on at UI-licious. Right now the test engine has a bias toward using ARIA descriptors as labels for buttons and input fields. I'm working on refining it further and allowing people to configure a strict mode that strictly evaluates only ARIA descriptors and semantic HTML elements when performing acceptance tests.

Fen Slattery

So accessibility testing is about 60% automated, 40% manual. There are some things that fundamentally can't be checked automatically, or rather, there are some errors that won't be found with automated tools. (Some of the testing we think about in this space is 'will assistive tech recognize this HTML correctly', and if assistive tech can't parse it correctly, that usually means the automated tools can't parse it correctly either.)

Marcel Ahne

Hi Jess,
thank you for this overview. I try to continuously improve my test skills since the tests improve my confidence in the code.
I'd like to add one aspect. I found it on a German website. Besides the questions "What is tested" and "When are the tests executed", a third question appears: "Who is the tester?".
There are some kinds of tests the developer himself is responsible for: unit tests, integration tests, smoke tests, regression tests, ... but some tests should be executed, for example, by QA, the stakeholder, or the customer: alpha and beta tests, usability tests, accessibility tests, and more.
It's really important to test your own code but it's also important to be supported by other people, the users, to be sure that your app is working.

Thank you very much for this great resource.

jess unrein

I actually dislike the distinction of “who is the tester?”

Software lives and dies by the team, not by the individual. I definitely agree there. As such, testing code, both in an automated and manual fashion, falls collectively to the team and not to an individual. Carving out by role is not super useful, imo.

And I definitely missed a few things here, especially with regard to accessibility and usability testing. Which makes sense - I’ve never touched production front end code. It’s very much a personal blind spot, but there are a number of great resources and points elsewhere in the comments!

Marcel Ahne

Hi Jess,

I think I understand your point and honestly I agree with it. I don't really use the Who-question to distinguish the kind of test, I mean, obviously it's not the fact that a developer executes a test which makes it a unit test.
I just thought of it as a nice piece of extra information. In my experience it's usually not a customer or an end user who executes unit tests. These people test by clicking through the app, or at least that's what I think they do ;-). So the "Who" is more like weak evidence when you ask how to distinguish between kinds of tests.
(I have to admit that I maybe missed the topic a little bit but I like the discussion.)

Devansh Agarwal

Oh My God! What a big bundle of knowledge this article and these comment-discussions are! I'm elated that I dropped by here.

The article in itself is self-sufficient but these comments are the cherry on the cake.
Thank you so much, the entire community is giving back selflessly!

Matt Hernandez

This is a good distinction, but it doesn't necessarily sit at the top level with the other questions, because the "who" is often embedded via personas in all the testing levels from unit on up. So from a certain perspective it doesn't matter who is testing since the who can be embedded. HOWEVER..I think your statement was going more towards User acceptance testing, which doesn't mean much to developers, nicely proving your point. Developers consider success as "I fulfilled the written requirements" whereas everyone else considers success as "the customer got what they wanted." So there is necessarily a gap here in conversation on the value of User acceptance testing. But good UAT is necessary for success, business-wise.

Saša Zejnilović

Nice article, I read through it and through the comments. But it should be called "Devs view of QA terminology" or something like that. I am scared for junior QAs reading this and thinking this is it.

I would add one more thing for everyone starting out in QA. As QA is the only engineering field I know with so many buzzwords, it is important for each team/company/project to create a glossary at the start of working together. In my experience, this is the only steady solution for misunderstandings.

Juan Ramos

Hmm. Why are you scared for junior QAs reading this article?

This article provides a general overview very useful for devs, but is it not the same for QAs?

In case the concepts are wrong, it would be great if you can share some links with starting points for QAs.

jess unrein

I'm hoping that a junior QA reading this can read in my bio that I'm not a QA person, and has the wherewithal to read critically and understand that this is not a comprehensive overview of the entire field! I think I make it pretty clear up top in the article that this is a broad overview from a dev perspective, but if you disagree let me know where I could try to highlight that this is nowhere near the end of testing education :)

David J Eddy

Thank you for this article Jess! Testing vocabulary is indeed unsettled and sometimes very confusing. I've found this article on Guru 99 helpful in the past, and now I will be adding yours to my resource list!

A couple points I would like to share.

  • "...Acceptance testing is usually a set of manual tests...". Acceptance does not have to be a manual process. Ideally the entire test/s process should be automated as to facilitate rapid deployment. Tools like uilicious.com/ , cypress.io/, and selenium to name a few.

  • "...There isn't necessarily pass/fail criteria associated with a performance test...". Failing would be 'application did not load' :) . Honestly though when application reaches the point of implementing performance testing a 'known good' level of operation is know. That becomes the baseline performance. Change should decrease hardware usage per request, decrease wait time to respond, or both. Rarely should changes increase the error rate.

Thank you again for the article. I really enjoy seeing and reading other people care about testing and knowing more about it as it applies to software.

jess unrein

Ah. I think I was using a slightly more literal definition of Acceptance Testing - literally checking that the specs are met on a quick pass through - but those resources for more robust acceptance testing could be very helpful to someone setting up a new environment. Thanks!

Also, for performance testing, I'm assuming if you get to this point, then the application will load. I'd hope you have failing tests well before this stage if things are that broken! :) If you have any good resources on "known good" performance levels I'd love to add that to my personal resource list! It can be tricky because so often when we test things as developers, we're testing on high speed ethernet on fairly good computers, and that is not an accurate depiction of all our users. Having some accepted benchmarks that work for a wider variety of environments would be very helpful!

David J Eddy

Re: Acceptance Testing, your view is correct. Those 'specs' can be provided in a machine format (image, HTML DOM, etc.) to be compared to the output from the test run. :) Reading some of the other comments, I agree 'Acceptance Testing' covers a wide swath. Maybe break out Security, Compliance, etc. I have no idea what they would be called though. :S

Re: Perf. Testing, unit tests can pass, integrations pass, but the running application does not respond before a timeout (though maybe it does in 120 seconds). A trick I learned is that Chrome can throttle the network speed of a request, which means the headless version can as well :). The other side of perf testing is X successful responses in Y seconds under Z load. These can be benchmarked using APM (Application Performance Monitoring) tools. New Relic, Sentry, and AppDynamics are three of the more well-known vendors.

"Application Testing": as wide as the horizon and as deep as a gravity well of a black hole.

sliq-justin

"The term smoke test seems to be a holdover from plumbing. If you could see smoke or steam coming out of a pipe, it was leaky and needed to be fixed."

I was under the impression that "smoke test" was borrowed from Electrical Engineering. i.e.: "This is the first time we're powering on the device/circuit. Be ready to turn it back off if we see or smell smoke."

Interestingly, this is the first I've heard the term used as a reference to plumbing. TIL, I guess.

jess unrein

So I looked at this and from what I found online (I will admit, I did not verify the sources), the smoke test concept from electrical engineering came from plumbing, so it's a term that's traversed several industries :)

JeffD

Great post. In "acceptance testing" we can add domain like Security, some tools to include into CI process can search problems like *injections, CSRF, ...

jess unrein

I had no idea that the umbrella term "acceptance testing" covered so much from spec validation to accessibility and security. Do you mind if I update the post and credit you for providing additional domain knowledge here?

Chris Bartholomew

I never heard the plumbing metaphor about smoke tests before, but it makes sense. When I think of smoke tests, I think of a different metaphor: "Where there's smoke, there's fire".

If the system can't pass the smoke tests, which are usually simple tests focusing on basic functionality, then there is something seriously wrong. Get ready to call the fire department.

Dave Jacoby • Edited

I had always believed that a smoke test was one where you plugged it in and watched whether it "let out the magic smoke", or short-circuited and caught fire.

I'm not arguing that you're wrong. The plumbing use of the term seems to predate the EE use by almost a century.

Pritesh Usadadiya • Edited

Awesome article @thejessleigh

Here are my 2 cents :)

Regression testing and smoke testing can both be done with automation or manually, or both (it really depends on whether the team has implemented automated tests or whether they are doing manual testing only).

In the products that I have worked on, usually both get done (automation and manual).

Acceptance testing

Acceptance testing gets done after end-to-end development.
Here we can consider E2ED to be end-to-end development of a feature or end-to-end development for a release (a combination of all features).

As you have stated on your article, it really depends on the team and what kind of workflows they have created.

Basti Ortiz

Finally! I now know what these buzzwords mean. Thank you for posting this!

Latuconsina Abz

Nice and comprehensive article!
Just want to confirm, is "Acceptance Testing" similar to "End to End Testing"?

And it might be nice if you explained a little about "Penetration Testing" as a security matter :)

Thanks

Meenakshi Agarwal • Edited

Great, you have summed up the different testing types so well. However, my view on acceptance testing is that it is not always manual testing or manual test cases. Instead, it focuses on scenarios which a tester writes to confirm the product/feature functionality, keeping the customer context in focus. And since most of us, if not everyone, operate in Agile, we have to automate acceptance tests as well. If we don't, it'll be quite tough to produce on-time releases. By the way, I've also gotten a bit of hands-on experience with different types of testing and wrote a little piece of my own. Lastly, my understanding always feels a bit improved after reading such a good article as yours.

Charlie Collins

Spot on, but missing some that are increasingly important these days:

Penetration Testing (Pen test)
General Security Testing
User Privilege (role) based Testing
Data Protection Testing

Of course one could argue that these could be encapsulated within the other tests, but cyber security should be built in at the design stage and then tested to ensure there are no vulnerabilities. The recent York City fiasco is a good example, where a developer found a vulnerability after implementation and was repaid by being reported to Yorkshire Police.

I haven't had time to read all the comments, so sorry if I'm repeating what others have already raised.

Akshay Patil

Thanks for the post and it covers some important types of testing one should know.

Kral Balkizar

Hi, nice article. Anyway, load tests are part of performance testing, together with stress tests and scalability tests. That means load/stress/scalability tests each target a specific part of software performance.

jess unrein

Yup, as I indicated in my article, load testing is indeed a specialized form of performance testing.