TLDR; We can only test to the level supported by our Ability, and the degree to which we are supported by tooling to Observe, Interrogate and Manipulate the System.
Video
- 00:00 Introduction
- 00:26 Application Under Test
- 02:51 Live Testing
- 03:41 Testing vs Automating
- 06:03 Testability vs Automatability
- 09:25 Workarounds
- 10:55 Technical Knowledge
- 15:44 Test Approach - JavaScript
- 17:02 Exercise - Test Approach
- 17:24 Automated Execution Approach
- 18:34 Exercise - Code Review
- 20:35 Exercise - Your Project
- 20:59 Server Side
- 24:20 Exercise - Test Approach - API
- 24:52 API Tooling
- 29:14 Exercise - API Testing
- 29:34 Automating the API
- 31:54 API Interacting with UI
- 37:09 End Notes
Introduction
A customer has reported that one of your pages doesn’t work. But your automated execution coverage tells you that it does. What went wrong?
That’s what we’re going to explore now.
- the difference between testing and automating,
- testability vs automatability,
- test coverage from a technology perspective,
- and some tools that can help you test.
Here’s the Application Under Test.
Infinite Scroll Challenge on the TestPages
Many sites have infinite scroll. You scroll to the bottom and it loads new items.
That functionality works. We have automated coverage. That passes.
What’s the issue?
Oh yeah, sorry, I forgot to mention: I’m going to encourage you to think. Ideally we would do this as a workshop, as hands-on training, but it’s a video and text workbook. So all I can do is prompt you into thinking.
Contact me if you are interested in live training for your team or organization.
So what’s the issue?
Well, I can’t click the button. The page refreshes and scrolls too quickly and the button isn’t enabled for long enough when it is visible.
This happens with a lot of infinite scroll sites: footers that can’t be clicked, content at the bottom of the screen that you can’t see.
This did work at some point, because we tested it, but I guess someone changed the timeout value and the automated execution assertions didn’t flag this as an issue.
So let’s start thinking about this from a Testing perspective.
Live Testing
If this was a live project we could just stop here and raise a defect.
Obviously the timeout is wrong. Raise the defect. Move on to the next issue.
But wait.
We have automated coverage, and it didn’t highlight this issue.
We should think about the difference between Testing and Automating.
Testing vs Automating
Testing is what we do. As humans, we interact with the software and we observe, we build models, we learn, we compare the actual interaction with the models. We report information derived from the difference between our models and the observations.
When we investigate an issue and expand our models we interrogate the system more deeply to learn what’s going on.
- Testing is the human activity of interacting, observing, thinking, learning, experimenting.
- Automating is the human activity of making the interaction and observation of the system an automated process. And that includes the comparing of observed results with the expected results.
Automating is a human activity which results in an automated execution process. The only human involvement, after the activity has been automated, is investigating the failure reports and maintaining the automation when it fails.
So both Automating and Testing are human processes. But testing leads to more human processes, automating leads to an automated process that only involves humans when it fails.
NOTE: AI might change how I view the process of automating. Certainly I’ll need more distinctions in how I describe nuances around Automating, but for now, all the automating I do is Human Initiated, or Directed, and results in some automated execution process.
Testability vs Automatability
And let’s just quickly look at Testability and Automatability because these words are used badly when describing software.
Is this system testable?
Yes. I can access it in the browser, I can see it, I can interact with it. The button toggles too quickly but I can test the application.
But when we talk about testability we often talk about adding ids to the elements, making it more observable, etc.
Well, the ids are really for automatability, not testability.
In the browser, it is the browser tooling that makes the application observable, because of the technology used, not the system itself.
Most of the time when we are talking about testability we are really talking about automatability.
This application has been built with automatability in mind.
It has ids, there are classes, the JavaScript source is visible, it is easy to change the state and configuration variables, this thing is so easy to automate and observe. But none of that was required to help me test it.
Testability is not the same as automatability. Keep that in mind as we continue through this process, particularly when we move to the server side interaction.
With the JavaScript Infinite Scroll system, the main thing that impacts my Testability is the bug, which prevents me from testing the button functionality.
Workarounds
Let’s quickly consider workarounds.
Your ability to find workarounds will impact your ability to test the system.
So your ability impacts the Testability.
Testability is as much about your ability to test the application as it is the application supporting you in testing it.
Depending on how deep you want to go, you can only test the application to the limit that you can observe, manipulate and interrogate it.
Your ability to test the application is often impacted by the usability of the application, and that is true here. The application is not usable for one main function so it is hard to test that function.
But, we can work around that, for this application, with more Technical Knowledge.
Technical Knowledge
When working with the web, we need to understand our tool capabilities.
My tool at this point is the browser.
What can I do with it?
I can view the page source.
I need to be able to understand HTML. Some CSS. Some JavaScript.
Having the tooling ability to look at the source doesn’t help me unless I understand what I’m looking at.
So if you’re testing web applications, you probably want to understand:
- How a browser works,
- How HTML works,
- What is CSS and how we use it,
- What is JavaScript and how it works.
At the very least, you want a reading ability with these technology artifacts.
So at this level, at the source, with the technical knowledge I have… I can see in the JavaScript some variables, I can see they are amendable.
But, I don’t have the tooling to amend it yet.
So, what else do I have available?
I can look at the dev tools.
I can see the DOM view shows me much the same as the source. But… it can be different.
- The source is what we gave the browser to work with.
- The DOM is what the browser created after interpreting and executing the source.
In the DOM I can see the JavaScript as well.
I also have the ability now, to interact and manipulate the JavaScript variables that I observed in the source.
There are a few timeout and millisecond variables there.
Perhaps the bug is the scrollAfterMillis variable?
Let me change that to 3000.
scrollAfterMillis=3000
And… now it just takes longer to refresh, but I still can’t click the button.
Let me try the preLoadTimeout:
preLoadTimeout=2000
Now, the button stays active for 2 seconds before it loads the next set of data. That’s my workaround to make this application testable.
It also means that I can go a little deeper in the bug report and say that the root cause is the preLoadTimeout being too small a value so the user doesn’t have time to click the button.
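If I wanted to apply the same workaround from automation code rather than the dev tools console, something like the following would be one way to do it. This is a minimal sketch, not the project's code: it assumes the page exposes preLoadTimeout as a global that it re-reads when scheduling the next load, which is worth verifying against the page source. It uses WebDriver's JavascriptExecutor (org.openqa.selenium.JavascriptExecutor).

JavascriptExecutor js = (JavascriptExecutor) driver;
// apply the same workaround as the manual dev tools experiment above,
// giving a human (or a slow script) 2 seconds to click the stop button
js.executeScript("window.preLoadTimeout = 2000;");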
You might be interested in this video and blog post covering more details of Chrome Dev Tools for Testing.
Test Approach
So what is my test approach for this?
- load the page
- scroll down
- see that it refreshes and adds the data I expect
- click the button
- see that it doesn’t refresh when I scroll
Is that it?
- Do I need to reload the page and check that it starts auto refreshing again?
- How long do I need to wait after stopping it, and trying to scroll to make sure it doesn’t start auto-refreshing again?
- What else?
Exercise - What is your test approach?
After this video I encourage you to think through what your Test Approach for this application would be.
Test Approach vs Automated Execution Approach
Once you’ve figured out how to test it, how would you automate it?
I’ll show you what we have here.
And this test passes.
@Test
public void scrollToStopLoadingAndClick(){
    new WebDriverWait(driver, Duration.ofSeconds(10)).until(
        ExpectedConditions.elementToBeClickable(page.getStopLoadingButton()
    ));
    page.getStopLoadingButton().click();
    new WebDriverWait(driver, Duration.ofSeconds(10)).until(
        ExpectedConditions.textToBePresentInElement(
            driver.findElement(By.id("statusMessage")),"Clicked")
    );
}
Is that good enough?
It works. It passes.
But it didn’t highlight the fact that the halting functionality is unusable for a human.
This is one of the issues when we automate something.
We automate the functionality.
We don’t automate the experience.
If you are interested in more details about WebDriver with Java then have a look at this video Masterclass on the basics of WebDriver with Java
Exercise - Critique the code
So after the video, or right now by pausing the video, critique this code:
- Understand what it does
- What does it not do?
- I can see it doesn’t try to re-scroll the page to make sure that clicking the button stopped the auto-scroll.
- What else does it not do?
- What conditions does it not check?
- Should it match the user experience?
- How could it?
- What would you change to match the user experience?
Much automated execution coverage does not match the user experience. When it doesn’t, there is a risk that the automated execution passes, but the user experience does not.
And there is a risk we don’t notice.
Testing will hopefully identify those issues. But automated execution often does not.
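If you want a concrete starting point for the "match the user experience" question in the exercise above, here is one possible direction. It is a sketch only, not the project's actual test: it measures how long the stop button stays usable and fails if that window is shorter than a human could react to. The 1000ms threshold and the 5 second polling cap are illustrative assumptions, and the method is written to slot into the InfiniteScrollTest class shown in the exercises below (it needs the usual Selenium and JUnit 5 imports).

@Test
public void stopButtonStaysUsableLongEnoughForAHuman() throws InterruptedException {
    // wait for the button to become clickable, as the existing check does
    new WebDriverWait(driver, Duration.ofSeconds(10)).until(
        ExpectedConditions.elementToBeClickable(By.id("loadMoreBtn")));

    long becameClickable = System.currentTimeMillis();
    long lastSeenUsable = becameClickable;

    // poll until the button is hidden, disabled or removed (cap at 5 seconds)
    while (System.currentTimeMillis() - becameClickable < 5000) {
        try {
            WebElement button = driver.findElement(By.id("loadMoreBtn"));
            if (!button.isDisplayed() || !button.isEnabled()) {
                break;
            }
            lastSeenUsable = System.currentTimeMillis();
        } catch (StaleElementReferenceException | NoSuchElementException e) {
            break;
        }
        Thread.sleep(100);
    }

    long usableForMillis = lastSeenUsable - becameClickable;
    // 1000ms is an assumed "human reaction" threshold for illustration only
    Assertions.assertTrue(usableForMillis >= 1000,
        "stop button was only usable for " + usableForMillis + "ms");
}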
Exercise - Consider Your Project
Is there a risk on the projects you work with that the automated execution covers the functionality, but not the user experience of that functionality?
Server Side
OK, so that was the JavaScript version of the Infinite Scroll.
It offered us some scope for thinking like the tester.
But let’s push this a little further.
We also have the server side infinite scroll.
https://testpages.eviltester.com/challenges/synchronization/infinite-scroll/server-side/
It looks the same. It has the same bug: the button toggles too quickly. And my automated execution passes.
But this page, when it needs new items to display, calls the server to get the information back.
Do you trust that statement?
How do you know it calls the server?
Visibly, in the browser, when I test it and observe the output, it looks the same.
We need to be able to interact with the application from a technical perspective.
What if I just think it is connecting to the server, but I accidentally release the wrong version? What if I just renamed the file?
So we need to check.
We need to observe the application at multiple technical levels.
So let’s look in the dev tools again. And this time we’ll look in the network tab.
Filter by Fetch/XHR to see the requests made by JavaScript, and I can observe a call to moreitems.
Let’s interrogate that request.
I can see a JSON response:
[
  {
    "id": 1,
    "text": "This is content item number 1. Scroll down to automatically load more content when the \"Click to Stop\" button becomes visible."
  },
  {
    "id": 2,
    "text": "This is content item number 2. Scroll down to automatically load more content when the \"Click to Stop\" button becomes visible."
  },
  ...
]
Great.
So I have tooling to increase my ability to test this.
But I need to know about:
- HTTP Requests
- Fetch and XMLHttpRequest
- JSON
My ability to test this application will be limited if I do not have that technical knowledge and if I do not know how to use the Dev tools network tab.
But at least I know now that it is making server requests.
I didn’t have to trust anyone. I could verify that this was true.
So now, our testing scope just expanded.
Exercise - Server Side Test Approach
Now, what do you have to test?
Is it good enough to just test the front end now by scrolling up and down and clicking the button?
After watching this video, take some time to think through what you want to test to cover the Server Side infinite scroll.
- Do we also have to test that HTTP call?
- How much do we have to test it?
Server Side Test Tooling
And… do you know how to do that?
How can you amend the HTTP requests?
We can do some of that from Chrome.
I could:
- copy as cURL
- copy as fetch
With cURL I can use the command line.
Or, with fetch, I can amend it in the console.
I could take the cURL and paste it into an API tool.
I can do that in Bruno by creating a new request from cURL.
Or in Postman I can paste the cURL command into a new request.
I can experiment with it in these tools.
It is important to test the API on its own.
For example, when I built this, it was only when I was automating and testing the API that I realised I really needed a limit on the count. If I let people make a request asking for 6 million items back, that could easily bring down my server.
NOTE: you can find a list of API Tools on the API Challenges site.
Exercise - API Testing
As an exercise, think through what conditions you would want to test on the API, then you can use any of the tooling approaches mentioned to experiment.
Automating The API
I would probably want to automate the API as well.
This does remove us from the user experience, because the responses from this API are normally handled by the UI; just because we see something working at the API level, we can’t assume it works when the UI consumes the API.
I used RestAssured to automate the API.
The coverage runs at the same time as the web UI coverage.
And I have a lot more coverage at the API level, than I do at the UI level.
Think about what coverage you would add for the API.
Then try to automate it. I used Java, with RestAssured, but you could use any library or programming language you want.
Or you could even have a set of canned requests in Postman or Bruno or any of the other API tools.
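To make that concrete, here is a minimal RestAssured sketch of the kind of checks I mean. It is illustrative only: the endpoint path is assumed from the moreitems call observed in the network tab, and the count parameter, the 400 status and the cap value are hypothetical stand-ins for whatever the real request and limit behaviour turn out to be.

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.*;

import org.junit.jupiter.api.Test;

public class MoreItemsApiTest {

    // assumption: path based on the moreitems call seen in the network tab
    private static final String MORE_ITEMS_URL =
        "https://testpages.eviltester.com/challenges/synchronization/infinite-scroll/server-side/moreitems";

    @Test
    public void moreItemsReturnsItemsWithIdAndText() {
        given().
        when().
            get(MORE_ITEMS_URL).
        then().
            statusCode(200).
            body("size()", greaterThan(0)).
            body("id[0]", notNullValue()).
            body("text[0]", not(emptyString()));
    }

    @Test
    public void hugeCountsShouldBeRejectedOrCapped() {
        given().
            queryParam("count", 6000000).   // hypothetical parameter name
        when().
            get(MORE_ITEMS_URL).
        then().
            statusCode(anyOf(is(200), is(400))).
            body("size()", lessThan(1000)); // illustrative cap, an assumption
    }
}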
For production API automating, I primarily use code and HTTP or API libraries. I don’t tend to use tools like Bruno or Postman.
But for production API Testing, I do use Bruno and other tools, because I like the flexibility and the ability to see the requests being made.
Testing and Automating often sound like the same thing. But they have different aims and are supported by different tooling.
API Interacting with UI
And now I’m in the position where I’m testing the API in isolation from the UI.
Is that a risk?
For example, I don’t know how the front end handles a 500 error response. I can see it is the same JSON format but it is a different status code. Does that make a difference?
[
  {
    "id": 0,
    "text": "For input string: '-1.02'"
  }
]
Would you test that?
Do you know how to test that?
One way to do that is to use a Proxy.
Intercept the request, amend it to be one that triggers an error and play it through to the front end.
And then I can see if the system handles error responses or not.
I would use either ZAP or BurpSuite.
For this exercise I would use ZAP.
- open a session using Edge
- create a new context for testpages.eviltester.com
- filter the history to “Show only URLs in Scope”
- Set the breakpoint on all requests and responses
- trigger a scroll through the UI
- amend the request
- see the result in the UI
I can use tooling to increase my ability to observe the system and manipulate the system at different technology levels.
NOTE: A list of recommended HTTP Proxy Tools is available on the API Challenges site
End Notes
It is often surprising how much depth we have to test and automate when testing even simple pieces of functionality.
The more that we extend our technical ability to cover the multiple levels of the application, and we learn how to use tooling to help us observe, interrogate and manipulate at those different technology levels, the more we can expand the coverage of our testing.
Automating is not the same as Testing. Both are human processes, but the output of automating is not the same as the output of testing.
But yes… we can be testing as we are automating; we may well learn things during the process of automating the application. But we should not confuse the continued execution of the output from automating with testing.
Testing can miss things because we are human, or we may not have covered the conditions, or we may not have gone deep enough into the system.
Automated execution can miss things because we humans forgot to add the conditions, or we didn’t assert enough. But automated execution can also miss the human user experience and tell us things are working, when they clearly are not.
I hope that you do now go off and do the exercises yourself. You may not have all the skills to do this yet, you may not have tried all the tools. That just means you can revisit this exercise and application multiple times as you grow your knowledge and skill set. And repeat it when you want to evaluate or learn new tools.
That’s why I created the Test Pages, and that’s also why this is a fairly high level overview.
Remember to work through the exercises.
Exercises
The Applications Under Test
- https://testpages.eviltester.com/challenges/synchronization/infinite-scroll/
- https://testpages.eviltester.com/challenges/synchronization/infinite-scroll/server-side/
Exercise - What is your test approach for JavaScript Infinite Scroll?
- think through what your Test Approach for the JavaScript Infinite Scroll application would be.
- Do you need to reload the page and check that it starts auto refreshing again?
- How long do you need to wait after stopping it, and trying to scroll to make sure it doesn’t start auto-refreshing again?
- What else?
- what conditions would you cover?
- how would you approach the testing?
Exercise - Critique the code
- Understand what it does
- What does it not do?
- I can see it doesn’t try to re-scroll the page to make sure that clicking the button stopped the auto-scroll.
- What else does it not do?
- What conditions does it not check?
- Should it match the user experience?
- How could it?
- What would you change to match the user experience?
Critique this code:
public class InfiniteScrollTest {

    static WebDriver driver;
    static InfiniteScrollPage page;

    @BeforeAll
    static void setupWebDriver(){
        driver = DriverFactory.getNew();
        page = new InfiniteScrollPage(driver);
    }

    @BeforeEach
    public void reload(){
        page.open();
    }

    @Test
    public void scrollToStopLoadingAndClick(){
        new WebDriverWait(driver, Duration.ofSeconds(10)).until(
            ExpectedConditions.elementToBeClickable(page.getStopLoadingButton()
        ));
        page.getStopLoadingButton().click();
        new WebDriverWait(driver, Duration.ofSeconds(10)).until(
            ExpectedConditions.textToBePresentInElement(
                driver.findElement(By.id("statusMessage")),"Clicked")
        );
    }

    @AfterAll
    public static void closeDriver(){
        driver.close();
    }
}
Supporting Abstractions:
public class DriverFactory {
    public static WebDriver getNew() {
        WebDriver driver;
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--disable-smooth-scrolling");
        driver = new ChromeDriver(options);
        return driver;
    }
}

public class InfiniteScrollPage {

    private final WebDriver driver;

    public InfiniteScrollPage(WebDriver driver) {
        this.driver = driver;
    }

    public void open() {
        String url = SiteConfig.SITE_DOMAIN +
            "/challenges/synchronization/infinite-scroll/";
        driver.get(url);
    }

    public WebElement getStopLoadingButton() {
        return driver.findElement(By.id("loadMoreBtn"));
    }
}

public class SiteConfig {
    public static final String SITE_DOMAIN = "https://testpages.eviltester.com";
}
Exercise - Consider Your Project
- Is there a risk on the projects you work with, that the automated execution covers the functionality, but not the user experience of that functionality?
Because if that’s a risk, you might want to revisit your test approach and your automated execution approach.
- How might your approach need to change?
Exercise - Server Side Test Approach
What do you have to test now that the Server Side calls are involved?
- Is it good enough to just test the front end now by scrolling up and down and clicking the button?
- Do we also have to test that HTTP call?
- How much do we have to test it?
Exercise - API Testing
As an exercise, think through what conditions you would want to test on the API, then you can use any of the tooling approaches mentioned to experiment.
Suggested Tools:
- Browser Dev Tools - fetch requests in console
- Browser Dev Tools - generate cURL and use from CLI
- Bruno
- Postman
A list of API Tools is available on the API Challenges site.
Exercise - UI and API Interactive Testing
- Use a Proxy to allow you to observe and interrogate the traffic from the Web Site to the Internal API backend.
- Amend the Request to trigger an error response and see how the front end handles it.
- Try testing the API from within the proxy.
A list of recommended HTTP Proxy Tools is available on the API Challenges site
Join our Patreon from as little as $1 a month for early access to videos, ad-free videos, free e-books and courses, and lots of exclusive content.