<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Michael Battat</title>
    <description>The latest articles on DEV Community by Michael Battat (@michaelvisualai).</description>
    <link>https://dev.to/michaelvisualai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F267180%2Fa1c236b6-3db3-47f9-a7d3-5bd22a49dc49.jpg</url>
      <title>DEV Community: Michael Battat</title>
      <link>https://dev.to/michaelvisualai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/michaelvisualai"/>
    <language>en</language>
    <item>
      <title>How to eradicate visual drift in applications</title>
      <dc:creator>Michael Battat</dc:creator>
      <pubDate>Tue, 16 Mar 2021 16:52:28 +0000</pubDate>
      <link>https://dev.to/michaelvisualai/how-to-eradicate-visual-drift-in-applications-3ima</link>
      <guid>https://dev.to/michaelvisualai/how-to-eradicate-visual-drift-in-applications-3ima</guid>
      <description>&lt;p&gt;Recently we released a &lt;a href="https://info.applitools.com/udccc"&gt;case study&lt;/a&gt; with Sonatype that showed how it eradicated visual drifts in its applications. If you're interested, here's what we shared.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Situation
&lt;/h3&gt;

&lt;p&gt;Sonatype first adopted &lt;a href="https://info.applitools.com/uw9e"&gt;Applitools&lt;/a&gt; in 2017 to address a core problem affecting web applications: visual drift caused by code changes with unaccounted-for UI dependencies. While developing its Nexus Lifecycle product, Sonatype learned that small code changes could alter the application visually in unanticipated ways.&lt;/p&gt;

&lt;p&gt;Nexus runs on developer systems spanning Windows, Mac, and Linux. Customers run various browser versions – from the most stable to the most recent. The big driver for Sonatype was ensuring continued Internet Explorer 8 support even as development moved on to more recent browser versions.&lt;/p&gt;

&lt;p&gt;Multiple development teams worked on the product and reused each other’s code. Small changes in markup or function made by one team might impact the code of another. Sonatype used a range of functional automation tools, but visual issues went undetected. Because functional tests could not catch visual defects, developers had to devote engineering time to uncovering these failures manually.&lt;/p&gt;

&lt;p&gt;Jamie Whitehouse and everyone on the development team spent time each release working to uncover and address these undetected failures. Often, this work occurred as spot checks of the 1,000+ pages of the application during development. In reality, this work, and the inherent risk of unintended changes, slowed the delivery of the product to market.&lt;/p&gt;

&lt;h3&gt;
  
  
  Looking For A Solution
&lt;/h3&gt;

&lt;p&gt;Jamie and his team sought an approach for highlighting these differences. Key capabilities he needed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated – a solution that could capture and highlight visual differences without manual intervention&lt;/li&gt;
&lt;li&gt;Accurate – low incidence of false positives and missed errors. Sonatype needed element-level accuracy, and a way to ignore pixel-level differences that plague pixel diff tools.&lt;/li&gt;
&lt;li&gt;Integrated – easy to integrate with existing tests.&lt;/li&gt;
&lt;li&gt;Complete – Jamie wanted to avoid tools that would require incremental development work from his team just to become usable. Tooling work would cost development hours without delivering value to Sonatype customers.&lt;/li&gt;
&lt;li&gt;Supported – Sonatype would pay for support instead of using their own engineers to support the solution.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Jamie’s team investigated the available solutions. They wrote off pixel-diff comparators because the team’s experience indicated a high rate of false positives. Also, the Sonatype use case differed greatly from graphic design use cases, where an exact pixel match matters. They also wrote off open source tools, which would tie up engineering resources to develop and maintain a solution that met Sonatype’s needs.&lt;/p&gt;
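The false-positive problem with pixel diffs is easy to see in miniature. Below is a toy sketch in plain Python (not any vendor's actual algorithm; images are nested lists of grayscale values and the threshold is an illustrative assumption): an exact comparison flags a one-step intensity change of the kind anti-aliasing produces, while a tolerance-based comparison ignores it.

```python
# Toy illustration: why exact pixel diffs flag invisible rendering noise
# while a tolerant comparison does not. Images are rows of grayscale values.

def pixel_diff(img_a, img_b):
    """Exact comparison: any differing pixel fails the check."""
    return [
        (x, y)
        for y, (row_a, row_b) in enumerate(zip(img_a, img_b))
        for x, (a, b) in enumerate(zip(row_a, row_b))
        if a != b
    ]

def tolerant_diff(img_a, img_b, threshold=8):
    """Ignore sub-visible intensity differences (threshold is illustrative)."""
    return [
        (x, y)
        for y, (row_a, row_b) in enumerate(zip(img_a, img_b))
        for x, (a, b) in enumerate(zip(row_a, row_b))
        if abs(a - b) > threshold
    ]

baseline = [[200, 200], [200, 200]]
rerender = [[200, 201], [200, 200]]   # one pixel shifted by 1/255 intensity

print(pixel_diff(baseline, rerender))    # flags the invisible change: [(1, 0)]
print(tolerant_diff(baseline, rerender)) # ignores it: []
```

A real visual testing tool works at the level of perceived elements rather than raw intensities, but the failure mode of exact matching is the same one this sketch shows.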

&lt;p&gt;Given the constraints, Jamie says, Applitools was the obvious choice to try. Applitools provided an AI-based approach that identified and compared visual elements instead of comparing pixels. It also provided a range of match levels, including layout comparisons, content checks, and even exact pixel comparison (if needed). Jamie’s team saw the promise of automated visual validation without false positives.&lt;/p&gt;

&lt;p&gt;The initial deployment met their needs: Applitools integrated with their existing test automation, delivered accurate results, and came with great support.&lt;/p&gt;

&lt;h3&gt;
  
  
  Results
&lt;/h3&gt;

&lt;p&gt;After two years of using Applitools, Sonatype has reaped some serious rewards. &lt;/p&gt;

&lt;p&gt;While they continue to justify the cost of Applitools based on the engineering effort saved, the real benefit is the complete elimination of visual drift in their applications. Sonatype now knows, with certainty, the effect of changes to rendering and markup across reused components – simply by running their test automation. Developers can deliver changes to existing code and know that Applitools will catch all the changes – including unintended ones – early in the development process.&lt;/p&gt;

&lt;p&gt;If Sonatype engineers change the margins across a number of pages, all the differences show up as highlights in Applitools. Features like Auto-Maintenance make visual validation a time saver: engineers can quickly accept identical changes across a number of pages – leaving only the unanticipated differences. As Jamie says, Applitools takes the guesswork out of testing rendered pages.&lt;/p&gt;

&lt;p&gt;As Jamie explains:&lt;/p&gt;

&lt;p&gt;“With unit tests, you check them in once and then run them forever. Once they’re running, you expect them to stay green and check them only when they’re red. Now, imagine being able to do the same thing with UI tests. You write them once and only check them when they’re red. Imagine how much more productive you would be if you didn’t have to worry about missing visual changes?”&lt;/p&gt;

&lt;p&gt;With Applitools deployed, Jamie has his engineering team focused on delivering value in Nexus Lifecycle. Applitools has helped uncover countless unexpected changes during the development phase, as opposed to waiting for final testing. The true value of using Applitools – development free from visual risk – is hard to quantify. But, when calculating the speed of delivering products to market, Jamie knows he’s much faster with Applitools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where Next?
&lt;/h3&gt;

&lt;p&gt;To reduce its UI risk further, the Sonatype team is deploying a shared components library. Their library will form the basis of the future UI, with components reused across their Nexus Lifecycle application. The component team evaluated Applitools alongside several solutions specifically geared toward component testing. They concluded that Applitools would test their entire component library with the same accuracy and workflow they rely on in their existing end-to-end tests. &lt;/p&gt;

&lt;p&gt;Sonatype plans to use Applitools to test the entire library in action across all browsers and viewports. Their goal is to deliver visual validation even earlier in the development phase and diminish their visual risk in end-to-end tests.&lt;/p&gt;

&lt;p&gt;Jamie expects this approach can speed their delivery further, as Sonatype can re-use components with much lower risk of unexpected UI behavior.&lt;/p&gt;

</description>
      <category>writing</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Get A Jump Into GitHub Actions</title>
      <dc:creator>Michael Battat</dc:creator>
      <pubDate>Thu, 04 Mar 2021 20:11:28 +0000</pubDate>
      <link>https://dev.to/michaelvisualai/get-a-jump-into-github-actions-4j5b</link>
      <guid>https://dev.to/michaelvisualai/get-a-jump-into-github-actions-4j5b</guid>
      <description>&lt;p&gt;On January 27, 2021, &lt;a href="https://www.linkedin.com/in/angiejones/" rel="noopener noreferrer"&gt;Angie Jones&lt;/a&gt; of Applitools hosted &lt;a href="https://www.linkedin.com/in/brianldouglas/" rel="noopener noreferrer"&gt;Brian Douglas&lt;/a&gt;, aka “bdougie”, Staff Developer Advocate at GitHub, for a webinar to help you jump into &lt;a href="https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt;. You can watch the &lt;a href="https://www.youtube.com/watch?v=fW6dUuNr0gg&amp;amp;feature=youtu.be" rel="noopener noreferrer"&gt;entire webinar on YouTube&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  What Are GitHub Actions?
&lt;/h2&gt;

&lt;p&gt;Angie’s first question asked Brian to explain GitHub Actions.&lt;/p&gt;

&lt;p&gt;Brian explained that GitHub Actions is a feature you can use to automate actions in GitHub. GitHub Actions let you code event-driven automation inside GitHub. You build monitors for events, and when those events occur, they trigger workflows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8dxc4w3ckbysmplij6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8dxc4w3ckbysmplij6f.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you’re already storing your code in GitHub, you can use GitHub Actions to automate anything you can access via webhook from GitHub. As a result, you can build and manage all the processes that matter to your code without leaving GitHub. &lt;/p&gt;

&lt;h2&gt;
  
  
  Build Test Deploy
&lt;/h2&gt;

&lt;p&gt;Next, Angie asked about Build, Test, Deploy – the use case she hears about most frequently in connection with GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeorowmakodqzao72nm8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeorowmakodqzao72nm8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Brian mentioned that the term GitOps describes the idea that a push to GitHub drives some kind of activity – a user adding a file should initiate other actions based on that file. External software vendors have built their own hooks to drive things like continuous integration with GitHub. GitHub Actions simplifies these integrations with automation built natively into GitHub.com.&lt;/p&gt;

&lt;p&gt;Brian explained how GitHub Actions can launch a workflow. He gave the example of a team with an existing JavaScript test suite in Jest, run with npm test or jest. With a GitHub Actions workflow, the development team can automate actions based on a triggering event – in this case, pushing the JavaScript file can drive GitHub to execute the tests.&lt;/p&gt;
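As a sketch of the kind of workflow Brian describes, a minimal GitHub Actions file might look like the following. The file name, Node version, and the assumption that `npm test` invokes Jest are illustrative, not from the webinar:

```yaml
# .github/workflows/test.yml — run the Jest suite on every push and PR
name: test
on: [push, pull_request]
jobs:
  jest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test   # assumes the package's test script runs jest
```

Committing this file to the repository is all it takes; GitHub picks up workflows under `.github/workflows/` automatically.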

&lt;h2&gt;
  
  
  Get Back To What You Like To Do
&lt;/h2&gt;

&lt;p&gt;Angie pointed out that this catchphrase, “Get back to what you like to do,” caught her attention. She spends lots of time in meetings and doing other tasks when she’d really just like to be coding. So, she asked Brian, how does that work?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8x4g5y54hdyk0j7xzw57.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8x4g5y54hdyk0j7xzw57.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jump into GitHub Actions and get back to programming work.&lt;br&gt;
Brian explained that, as teams grow, so much more of the work becomes coordination and orchestration. Leaders have to answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What should happen during a pull request? &lt;/li&gt;
&lt;li&gt;How do we automate testing? &lt;/li&gt;
&lt;li&gt;How do we manage our build processes?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When engineers have to answer these questions with external products and processes, they stop coding. With GitHub Actions, Brian said, you can code your own workflow controls. You can ensure consistency by coding the actions yourself. And, by using GitHub Actions, you make the processes transparent for everyone on the team.&lt;/p&gt;

&lt;p&gt;Do you want a process to call Applitools? That’s easy to set up. &lt;/p&gt;

&lt;p&gt;Brian explained that GitHub hosted a GitHub Actions Hackathon in late 2020. The team coded the controls for the submission process into the hackathon. You can still check it out at &lt;a href="https://githubhackathon.com/" rel="noopener noreferrer"&gt;githubhackathon.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The entire submission process was automated to check that each submission included all the required files. Completed submissions then appeared on the hackathon home page automatically.&lt;/p&gt;

&lt;p&gt;Brian then gave the example of work he did on the GitHub Hacktoberfest in October. For the team working on the code, Brian developed a custom workflow that allowed any authenticated individual to sign up to address issues exposed in the Hackathon. His code latched onto existing authentication code to validate that individuals could participate in the process and assigned their identity to the issue. Brian built the workflow for these tasks using GitHub Actions.&lt;/p&gt;

&lt;p&gt;What can you automate? Informing your team when a user opens a pull request. Sending a tweet when the team releases a build. Anything exposed through a GitHub webhook can be automated with GitHub Actions. For example, you can even automate the nag emails that get sent when a pull request review does not complete within a specified time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Actions
&lt;/h2&gt;

&lt;p&gt;Angie then asked about the most common actions that Brian sees users running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk18vf3dw06aqpwbyt47z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk18vf3dw06aqpwbyt47z.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Brian summarized by saying, basically, continuous integration (CI). The most common use is running test suites against code as it gets checked in. You can have tests run when you push code to a branch, push to a release branch, cut a release, or even open a pull request.&lt;/p&gt;

&lt;p&gt;While test execution is the most frequent use, there are plenty of tasks one can automate. Brian built an action to assign gifts to team members who reviewed pull requests. He also used a cron job to automate a GitHub Action that opened a global team issue each Sunday in the US – already Monday in Australia – and assigned all the team members to it. Each member needed to explain what they were working on. This way, the globally-distributed team could stay on top of their work together without a meeting at an awkward time for at least one group of team members.&lt;/p&gt;
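A scheduled workflow in the spirit of that weekly check-in might be sketched like this. The cron time, issue title, and team logins are placeholders, and `actions/github-script` is just one common way to call the GitHub API from a workflow:

```yaml
# Sketch only — not Brian's actual workflow.
name: weekly-checkin
on:
  schedule:
    - cron: '0 21 * * 0'   # Sunday 21:00 UTC — Monday morning in Australia
jobs:
  open-issue:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            // Open the weekly issue and assign the whole team to it
            await github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: 'Weekly check-in: what is everyone working on?',
              assignees: ['alice', 'bob']  // placeholder team logins
            });
```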

&lt;p&gt;Brian talked about people coming up with truly inventive use cases – like someone linking IoT devices to webhooks in existing APIs using GitHub Actions.&lt;/p&gt;

&lt;p&gt;But the cool part of these actions is that most of them are open source and searchable. Anyone can inspect actions and, if they don’t like them, modify them. If a repo includes GitHub Actions, they’re searchable.&lt;/p&gt;

&lt;p&gt;At github.com/bdougie, you can see existing workflows that Brian has already put together.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jump Into GitHub Actions – What Next?
&lt;/h2&gt;

&lt;p&gt;I shared some of the basic ideas in Brian’s conversation with Angie. If you want to jump into GitHub Actions in more detail, you can check out the full webinar and the slides in Addie Ben Yehuda’s &lt;a href="https://info.applitools.com/udapN" rel="noopener noreferrer"&gt;summary blog for the webinar&lt;/a&gt;. That blog also includes a number of Brian’s links, several of which I include here as well:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bdougie.live/" rel="noopener noreferrer"&gt;https://bdougie.live/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/bdougie/live" rel="noopener noreferrer"&gt;https://github.com/bdougie/live&lt;/a&gt;&lt;br&gt;
&lt;a href="https://lab.github.com/githubtraining/devops-with-github-actions" rel="noopener noreferrer"&gt;https://lab.github.com/githubtraining/devops-with-github-actions&lt;/a&gt;&lt;br&gt;
&lt;a href="https://youtube.com/ilikerobot" rel="noopener noreferrer"&gt;https://youtube.com/ilikerobot&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/bdougie" rel="noopener noreferrer"&gt;https://github.com/bdougie&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.github.com/en/actions/reference/events-that-trigger-workflows" rel="noopener noreferrer"&gt;https://docs.github.com/en/actions/reference/events-that-trigger-workflows&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enjoy jumping into GitHub Actions!&lt;/p&gt;

</description>
      <category>github</category>
      <category>tutorial</category>
      <category>testing</category>
    </item>
    <item>
      <title>How Thunderhead Speeds Quality Delivery with Applitools</title>
      <dc:creator>Michael Battat</dc:creator>
      <pubDate>Wed, 17 Feb 2021 17:23:15 +0000</pubDate>
      <link>https://dev.to/michaelvisualai/how-thunderhead-speeds-quality-delivery-with-applitools-490b</link>
      <guid>https://dev.to/michaelvisualai/how-thunderhead-speeds-quality-delivery-with-applitools-490b</guid>
      <description>&lt;p&gt;Thunderhead is the recognised global leader in the Customer Journey Orchestration and Analytics market. The ONE Engagement Hub helps global brands build customer engagement in the era of digital transformation.  &lt;/p&gt;

&lt;p&gt;Thunderhead provides its users with great insights into customer behavior. To continue to improve user experience with their highly-visual web application, Thunderhead develops continuously. How does Thunderhead keep this visual user experience working well? A key component is Applitools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YW8fc77E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lblw2tiz2razejszakux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YW8fc77E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lblw2tiz2razejszakux.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Before – Using Traditional Output Locators
&lt;/h1&gt;

&lt;p&gt;Prior to using Applitools, Thunderhead drove its UI-driven tests with Selenium for browser automation and Python as the primary test language. They used traditional web element locators both for setting test conditions and for measuring the page responses.&lt;/p&gt;

&lt;p&gt;Element locators have been state-of-the-art for measuring page response because of their precision. Locators are generated programmatically, and test developers can address any visual structure on the page as an element.&lt;/p&gt;

&lt;p&gt;Depending on page complexity, a given page can have dozens, or even hundreds, of locators. Because test developers can inspect individual locators, they can choose which elements they want to check. But, locators limit inspection. If a change takes place outside the selected locators, the test cannot find the change.&lt;/p&gt;

&lt;p&gt;These output locators must be maintained as the application changes. Unmaintained locators cause test problems: a test can report an error because a locator’s value changed even though the behavior did not. A locator may also stay the same while the behavior behind it changes in ways the test never catches.&lt;/p&gt;
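The maintenance problem can be seen in a minimal sketch, using plain Python and the standard-library HTML parser rather than Thunderhead's actual Selenium code: renaming an element's id breaks the locator even though the rendered UI is unchanged.

```python
# Minimal sketch of locator brittleness: a renamed id breaks the locator
# even though the rendered content is identical.
from html.parser import HTMLParser

class IdFinder(HTMLParser):
    """Records whether any start tag carries the target id attribute."""
    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.found = False

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("id") == self.target_id:
            self.found = True

def find_by_id(html, element_id):
    finder = IdFinder(element_id)
    finder.feed(html)
    return finder.found

old_page = '<button id="submit-btn">Save</button>'
new_page = '<button id="save-btn">Save</button>'  # same UI, renamed id

print(find_by_id(old_page, "submit-btn"))  # True — the locator resolves
print(find_by_id(new_page, "submit-btn"))  # False — test breaks, UI unchanged
```

A visual check comparing the rendered button would pass both pages unchanged, which is the maintenance saving the article describes.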

&lt;p&gt;Thunderhead engineers knew about pixel diff tools for visual validation. They also had experience with those tools; they had concluded that pixel diff tools would be unusable for test automation because of the frequency of false positives.&lt;/p&gt;

&lt;h1&gt;
  
  
  Introducing Applitools at Thunderhead
&lt;/h1&gt;

&lt;p&gt;When Thunderhead started looking to improve their test throughput, they came across Applitools. Thunderhead had not considered a visual validation tool, but Applitools made some interesting claims. The engineers thought that AI might be marketing buzz, but they were intrigued by a tool that could abstract pixels into visible elements.&lt;/p&gt;

&lt;p&gt;As they began using Applitools, Thunderhead engineers realized that Applitools gave them the ability to inspect an entire page.  Not only that, Applitools would capture visual differences without yielding bogus errors. Soon they realized that Applitools offered more coverage than their existing web locator tests, with less overall maintenance because of reduced code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The net benefits included:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Coverage – Thunderhead could write tests for each visible on-page element on every page.&lt;/li&gt;
&lt;li&gt;Maintainability – By measuring the responses visually, Thunderhead did not have to maintain all the web element locator code for the responses – reducing the effort needed to maintain tests.&lt;/li&gt;
&lt;li&gt;Visual Validation – Applitools helped Thunderhead engineers see the visual differences between builds under test, highlighting problems and aiding problem-solving.&lt;/li&gt;
&lt;li&gt;Faster operation – Visual checks ran more quickly than traditional web element locator assertions.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Moving Visual Testing Into Development
&lt;/h1&gt;

&lt;p&gt;After using Applitools in end-to-end testing, Thunderhead realized that Applitools could help in several areas.&lt;/p&gt;

&lt;p&gt;First, Applitools could help with development. Often, when developers made changes to the user interface, unintended consequences could show up at check-in time. However, by waiting for end-to-end tests to expose these issues, developers often had to stop existing work and shift context to repair older code. By moving visual validation to check-in, Thunderhead could make developers more effective.&lt;/p&gt;

&lt;p&gt;Second, developers often waited until the final build to run their full suite of element locator tests. These tests ran against multiple platforms, browsers, and viewports, and the net run took several hours. The equivalent test using Applitools took five minutes. So, Thunderhead could run these tests with every build.&lt;/p&gt;

&lt;p&gt;For Thunderhead, the net result was greater coverage, with tests run at the right time for developer productivity.&lt;/p&gt;

&lt;h1&gt;
  
  
  Adding Visual Testing to Component Tests
&lt;/h1&gt;

&lt;p&gt;Most recently, Thunderhead has seen the value of using a component library in their application development. By standardizing on the library, Thunderhead looks to improve development productivity over time. Components ensure that applications provide consistency across different development teams and use cases.&lt;/p&gt;

&lt;p&gt;To ensure component behavior, Thunderhead uses Applitools to validate the individual components in the library. Thunderhead also tests the components in mocks that demonstrate them in typical deployment use cases.&lt;/p&gt;

&lt;p&gt;By adding visual validation to components, Thunderhead expects to see visual consistency validated much earlier in the application development cycle.&lt;/p&gt;

&lt;h1&gt;
  
  
  Other Benefits From Applitools
&lt;/h1&gt;

&lt;p&gt;Beyond the benefits listed above, Thunderhead has seen the virtual elimination of visual defects found through end-to-end testing. The check-in and build tests have exposed the vast majority of visual behavior issues during the development cycle. They have also made developers more productive by eliminating the context switches previously needed if bugs were discovered during end-to-end testing. As a result, Thunderhead has gained greater predictability in the development process.&lt;/p&gt;

&lt;p&gt;In turn, Thunderhead engineers have gained greater agility. They can try new code and behaviors and know they will visually catch all unexpected behaviors. As a result, they are learning previously-unexplored dependencies in their code base. As they expose these dependencies, Thunderhead engineers gain greater control of their application delivery process.&lt;/p&gt;

&lt;p&gt;With predictability and control comes confidence. Using Applitools has given Thunderhead increased confidence in the effectiveness of their design processes and product delivery. With Applitools, Thunderhead knows how customers will experience the ONE platform and how that experience changes over time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Our quality increases exponentially with Applitools. We run it with every build - which takes around five minutes. Without Applitools - the process would take 4 hours per build for less coverage than what we do now in 5 minutes - simply not something we could afford."&lt;/p&gt;

&lt;p&gt;WALT HARRIS — Head of Quality at Thunderhead&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To read more about the technical details and how Thunderhead's engineering team speeds quality software delivery with Applitools, &lt;a href="https://applitools.com/case-studies/thunderhead/"&gt;check this out!&lt;/a&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>testdev</category>
      <category>python</category>
      <category>ux</category>
    </item>
    <item>
      <title>Five Data-Driven Reasons To Add Visual AI To Your End-To-End Tests</title>
      <dc:creator>Michael Battat</dc:creator>
      <pubDate>Tue, 19 May 2020 17:08:51 +0000</pubDate>
      <link>https://dev.to/michaelvisualai/five-data-driven-reasons-to-add-visual-ai-to-your-end-to-end-tests-1hk0</link>
      <guid>https://dev.to/michaelvisualai/five-data-driven-reasons-to-add-visual-ai-to-your-end-to-end-tests-1hk0</guid>
      <description>&lt;p&gt;Do you believe in learning from the experiences of others? If others found themselves more productive adding Visual AI to their functional tests, would you give it a try?&lt;/p&gt;

&lt;p&gt;In November 2019, over 3,000 engineers signed up to participate in the Applitools Visual AI Rockstar Hackathon. 288 completed the challenge and submitted tests – comparing their use of coded test validation versus the same tests using Visual AI. They found themselves with better coverage, faster test development, more stable test code, and easier test code maintenance.&lt;/p&gt;

&lt;p&gt;On April 23, James Lamberti, CMO at Applitools, and Raja Rao DV, Director of Growth Marketing at Applitools, discussed the findings from the Applitools Hackathon submissions. The 288 engineers who submitted their test code for evaluation by the Hackathon team spent an average of 11 hours per submission. That’s over 3,000 person-hours – the equivalent of  1 ½ years of engineering work.&lt;/p&gt;

&lt;p&gt;The more than 3,000 participants who signed up came from around the world.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yr4KPWdI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qslogeo0dfbh5aqm1uto.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yr4KPWdI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qslogeo0dfbh5aqm1uto.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;They used a variety of testing tools and a range of programming languages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PBGyCZTI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/467px0sqrptfpkls3qea.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PBGyCZTI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/467px0sqrptfpkls3qea.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the end, they showed some pretty amazing results from adding Applitools Visual AI to their existing test workflow.&lt;/p&gt;

&lt;h1&gt;
  
  
  Describing the Hackathon Tests
&lt;/h1&gt;

&lt;p&gt;Raja described the tests that made up the Hackathon.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DVIMdI34--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vc7a3htl893qexwlmbkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DVIMdI34--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vc7a3htl893qexwlmbkg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each test involved a side-by-side comparison of two versions of a web app. In one version, the baseline, the page rendered correctly. In the other version, the new candidate, the page rendered with errors. This would simulate the real-world issues of dealing with test maintenance as apps develop new functionality.&lt;/p&gt;

&lt;p&gt;Hackathon participants had to write code that did the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure the page rendered as expected on the baseline.&lt;/li&gt;
&lt;li&gt;Capture all mistakes in the page rendering on the new candidate&lt;/li&gt;
&lt;li&gt;Report on all the differences between the baseline and the new candidate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also, Hackathon participants needed to realize that finding a single error on a page met the necessary – but not sufficient – condition for testing. A single test that captures all the problems at once resolves faster than multiple bug capture/fix loops. Test engineers needed to write tests that captured all the test conditions and properly reported all the failures.&lt;/p&gt;

&lt;p&gt;Hackathon participants would code their test using a conventional test runner plus assertions of results in the output DOM. Then, they used the same test runner code but replaced all their assertions with Applitools Visual AI comparisons.&lt;/p&gt;
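The contrast between the two styles can be sketched in a few lines of illustrative Python (dicts stand in for rendered pages, and a simple comparison stands in for the Visual AI check; this is not the actual hackathon code): per-element assertions stop at the first mismatch, while a single whole-page comparison reports every difference in one run.

```python
# Toy contrast: per-element assertions vs one whole-page comparison.
# Dicts stand in for rendered pages; keys stand in for page elements.

baseline  = {"title": "Login",  "username_placeholder": "Enter username",
             "password_label": "Password"}
candidate = {"title": "Logout", "username_placeholder": "Enter username",
             "password_label": "Pwd"}

def locator_style_test(page, expected):
    """One assert per element: execution stops at the FIRST mismatch."""
    for key, want in expected.items():
        assert page.get(key) == want, f"{key}: expected {want!r}, got {page.get(key)!r}"

def visual_style_test(page, expected):
    """One whole-page comparison: ALL mismatches reported in a single run."""
    return {k: (want, page.get(k))
            for k, want in expected.items() if page.get(k) != want}

try:
    locator_style_test(candidate, baseline)
except AssertionError as exc:
    print("locator-style run stops at the first mismatch:", exc)

print("visual-style run reports every mismatch:",
      visual_style_test(candidate, baseline))
```

The second style is what made a single bug capture/fix loop possible in the hackathon scoring.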

&lt;p&gt;To show these test results, Raja used the &lt;a href="https://github.com/corinazaharia/applitools_hackathon_2019/"&gt;GitHub repository&lt;/a&gt; of Corina Zaharia, one of the platinum Hackathon winners.&lt;/p&gt;

&lt;p&gt;At this point, Raja walked through each of the test cases.&lt;/p&gt;

&lt;h1&gt;
  
  
  CASE 1 – Missing Elements
&lt;/h1&gt;

&lt;p&gt;Raja presented two web pages. One was complete. The other had missing elements. Hackathon participants had to find those elements and report them in a single test.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dGskZmyN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8e9l38dk8v24n225ejhj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dGskZmyN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8e9l38dk8v24n225ejhj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To begin coding tests, Corina started with the baseline. She identified each of the HTML elements and ensured that their text identifiers existed. She wrote assertions for every element on the page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mZpk1FKv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lxkeotfv7tuoqovasxot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mZpk1FKv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lxkeotfv7tuoqovasxot.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In evaluating submissions, judges ensured that the following differences got captured:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The title changed&lt;/li&gt;
&lt;li&gt;The Username icon was missing&lt;/li&gt;
&lt;li&gt;The Password icon was missing&lt;/li&gt;
&lt;li&gt;The username placeholder changed&lt;/li&gt;
&lt;li&gt;The password label was wrong&lt;/li&gt;
&lt;li&gt;The password placeholder changed&lt;/li&gt;
&lt;li&gt;There was extra space next to the check box&lt;/li&gt;
&lt;li&gt;The Twitter icon had moved&lt;/li&gt;
&lt;li&gt;The Facebook icon had moved&lt;/li&gt;
&lt;li&gt;The LinkedIn icon was missing&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Capturing this page required identifying element locators and validating locator values.&lt;/p&gt;

&lt;p&gt;In comparison, adding Visual AI required only three instructions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open a capture session&lt;/li&gt;
&lt;li&gt;Capture the page with an eyes.checkWindow() command&lt;/li&gt;
&lt;li&gt;Close the capture session&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No identifiers needed – Applitools captured the visual differences.&lt;/p&gt;

&lt;p&gt;With much less coding, Applitools captured all the visual differences. And, test maintenance takes place in Applitools.&lt;/p&gt;
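&lt;p&gt;Conceptually, the difference between the two styles can be sketched like this (a toy model in which a dict stands in for a rendered page – Visual AI compares actual rendered pixels, not dicts):&lt;/p&gt;

```python
def assert_elements(page):
    # Conventional style: one assertion per element locator,
    # and execution stops at the first failure.
    assert page["title"] == "Login Form"
    assert page["username_icon"] == "visible"
    assert page["linkedin_icon"] == "visible"
    # ...one line per element, for every element on the page

def diff_snapshot(candidate, baseline):
    # Snapshot style: a single whole-page comparison reports every change,
    # with no per-element identifiers in the test code.
    keys = set(candidate) | set(baseline)
    return {key: (baseline.get(key), candidate.get(key))
            for key in keys if baseline.get(key) != candidate.get(key)}

baseline = {"title": "Login Form", "username_icon": "visible", "linkedin_icon": "visible"}
candidate = {"title": "Logout Form", "username_icon": "visible"}

print(diff_snapshot(candidate, baseline))
```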

&lt;h1&gt;
  
  
  CASE 2 – Data-Driven Testing
&lt;/h1&gt;

&lt;p&gt;In Case 2, Hackathon participants needed to validate how a login page behaved when applying different inputs. The test table looked like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No username, no password&lt;/li&gt;
&lt;li&gt;Username, no password&lt;/li&gt;
&lt;li&gt;Password, no username&lt;/li&gt;
&lt;li&gt;Username and password combination invalid&lt;/li&gt;
&lt;li&gt;Valid username and password&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yabGUbFd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cnynampracp1nznkrg6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yabGUbFd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cnynampracp1nznkrg6q.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each condition resulted in a different response page.&lt;/p&gt;

&lt;p&gt;Hackathon participants faced a page identical to the one in Case 1 – but they were responsible for handling the different responses to each of the different test conditions.&lt;/p&gt;

&lt;p&gt;Again, the conventional test required entering the test conditions via the test runner and asserting all the elements on the resulting page, including error messages.&lt;/p&gt;
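&lt;p&gt;A data-driven test over that table might be sketched like this (the login behavior and the response messages here are hypothetical stand-ins for the Hackathon app):&lt;/p&gt;

```python
def login(username, password):
    # Stand-in for the real app under test.
    if not username and not password:
        return "Both Username and Password must be present"
    if not username:
        return "Username must be present"
    if not password:
        return "Password must be present"
    if (username, password) == ("admin", "s3cret"):
        return "Welcome"
    return "Invalid username or password"

# One row per condition in the test table: (username, password, expected response).
TEST_TABLE = [
    ("",      "",       "Both Username and Password must be present"),
    ("admin", "",       "Password must be present"),
    ("",      "s3cret", "Username must be present"),
    ("admin", "wrong",  "Invalid username or password"),
    ("admin", "s3cret", "Welcome"),
]

for username, password, expected in TEST_TABLE:
    assert login(username, password) == expected
```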

&lt;p&gt;Also, a question was left open for testers – what should they check for the valid username and password condition? The simplest answer – just make sure the app reaches the correct post-login page. But more advanced testers wanted to make sure that the target page rendered as expected.&lt;/p&gt;

&lt;p&gt;So, again, comparing coded assertions with Visual AI makes clear how much more easily Visual AI captures baselines and then checks the new candidate against them.&lt;/p&gt;

&lt;h1&gt;
  
  
  CASE 3 – Testing Table Sort
&lt;/h1&gt;

&lt;p&gt;The next case – testing table sort – covers a capability found on many web apps that offer multiple selections. Many consumer apps, such as retailers, reviewers, and banks, provide tables for their customers, and some business apps provide similar selectors in retail, financial, and medical applications. In many use cases, users expect tables with advanced capabilities, such as sorting and filtering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--baIh0p5T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6676qtlcx37k5l2viv35.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--baIh0p5T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6676qtlcx37k5l2viv35.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tables can pose challenges for testers: they can contain lots of elements, and many table functions – for example, sorting and filtering – can require complex test coding.&lt;/p&gt;

&lt;p&gt;To test table sorting with conventional assertion code, Hackathon participants had to write code that captured all the data in the table, performed the appropriate sort of that data, and compared the internally-sorted table in the test code with the sorted table on the web page. Great test coders took pains to ensure that they had done this well and could handle various sorting options. The winners took time to ensure that their code covered the table behavior. Not all participants caught this complex behavior, even with a decent amount of effort.&lt;/p&gt;
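&lt;p&gt;In outline, the conventional approach looks like this (a simplified sketch – the page-side sort below is a stand-in for what the app renders after clicking a real column header):&lt;/p&gt;

```python
def rows_after_clicking_sort(rows, column):
    # Stand-in for the app's own sort (what the page would display).
    return sorted(rows, key=lambda row: row[column])

table = [
    {"name": "Cheese", "price": 4.50},
    {"name": "Apples", "price": 2.00},
    {"name": "Bread",  "price": 3.25},
]

# Test-side: capture the data, sort it independently...
expected = sorted(table, key=lambda row: row["price"])

# ...and compare against the sorted table the page shows.
displayed = rows_after_clicking_sort(table, "price")
assert displayed == expected
```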

&lt;p&gt;In contrast, all the participants understood how to test the table sort with Visual AI. Capture the page, execute the sort, capture the result, and validate inside Applitools.&lt;/p&gt;

&lt;h1&gt;
  
  
  Case 4 – Non-Textual Plug-ins
&lt;/h1&gt;

&lt;p&gt;The fourth case involved the graphical rendering of table data in a canvas element. How do you test that?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yVLo1qy8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lzthfulyod8orj3b17m2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yVLo1qy8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lzthfulyod8orj3b17m2.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Without normal web element locators, a lot of participants got lost. They weren’t sure how to start finding the graphing elements or how to build a comparison between the baseline behavior and the new candidate.&lt;/p&gt;

&lt;p&gt;Winning Hackathon participants dug into the rendering code to find the JavaScript calls for the graph and the source data for the table elements. This allowed them to extract the values that should be rendered and compare them between the baseline and the new candidate. And while the winners wrote fairly elegant code, this approach took time to dive into the JavaScript – and a fair amount of coding effort.&lt;/p&gt;
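&lt;p&gt;Once the source data has been extracted, the comparison itself reduces to a value-by-value diff, sketched here with hypothetical chart series:&lt;/p&gt;

```python
def diff_series(baseline, candidate):
    # Report every label whose value changed, appeared, or disappeared.
    labels = set(baseline) | set(candidate)
    return {label: (baseline.get(label), candidate.get(label))
            for label in sorted(labels)
            if baseline.get(label) != candidate.get(label)}

# Hypothetical data extracted from the page's charting JavaScript.
baseline_series  = {"2017": 100, "2018": 120, "2019": 140}
candidate_series = {"2017": 100, "2018": 90,  "2019": 140, "2020": 160}

print(diff_series(baseline_series, candidate_series))
# {'2018': (120, 90), '2020': (None, 160)}
```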

&lt;p&gt;As with the table sorting Case 3, all the participants understood how to test the graph with Visual AI. Capture the page, and then compare the new candidate with the baseline in Applitools.&lt;/p&gt;

&lt;h1&gt;
  
  
  Case 5 – Dynamic Data
&lt;/h1&gt;

&lt;p&gt;The final case required the participants to test a page with floating advertisements that can change.  In fact, as long as content gets rendered in the advertising box, and the rest of the candidate remains unchanged, the test passes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TAvjGx0o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fmcak74207o0odwwfppv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TAvjGx0o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fmcak74207o0odwwfppv.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The winning participants coded conditional tests to ensure that code existed in the advertising boxes, though they did not have the ability to see how that code got rendered.&lt;/p&gt;

&lt;p&gt;With Visual AI, participants had to use different visual comparison modes in Applitools. The standard mode – Strict Mode – searches for visual elements that have moved or rendered in unexpected ways. With dynamic data, Strict Mode comparisons fail.&lt;/p&gt;

&lt;p&gt;For these situations, Applitools offers Layout Mode instead. When using Layout Mode, the text and graphical elements need to share order and orientation, but their actual visual representation can differ. In Layout Mode, the following pair – image above text in both – is considered identical.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JwzRAi3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fy2b7nq53gxq9rph28xk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JwzRAi3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fy2b7nq53gxq9rph28xk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
However, the pair below has a different layout: on the left, the text sits below the image, while on the right it sits above the image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3J2O5r8W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lk7gqshyfz8kc4bpl8k9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3J2O5r8W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lk7gqshyfz8kc4bpl8k9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Applitools users can hard-code their check mode for different regions into their page capture. Alternatively, they can use Strict Mode for the entire page and handle the region as a Layout Mode exception in the Applitools UI.&lt;/p&gt;
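&lt;p&gt;The distinction between the two modes can be illustrated with a toy model (lists of typed elements standing in for rendered regions – the real comparison operates on rendered pixels):&lt;/p&gt;

```python
def strict_match(a, b):
    # Strict: the rendered content itself must match.
    return a == b

def layout_match(a, b):
    # Layout: only the kinds of elements and their order must match.
    return [kind for kind, _ in a] == [kind for kind, _ in b]

# Hypothetical ad regions: (element kind, content).
baseline  = [("image", "ad-shoes.png"), ("text", "Buy now!")]
candidate = [("image", "ad-books.png"), ("text", "Read more")]

assert not strict_match(baseline, candidate)   # different ad content
assert layout_match(baseline, candidate)       # same layout: image above text

swapped = [("text", "Read more"), ("image", "ad-books.png")]
assert not layout_match(baseline, swapped)     # text above image: layout differs
```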

&lt;p&gt;Whether they hard-coded Layout Mode for the region in their tests or applied Layout Mode to the selected area in the Applitools UI after capturing the baseline, all the Hackathon participants had little difficulty coding their tests for this case.&lt;/p&gt;

&lt;h1&gt;
  
  
  Learning From Hackathon Participants
&lt;/h1&gt;

&lt;p&gt;At this point, James began describing what we had learned from the 1.5 person-years of coding work done on the Hackathon. We learned what gave people difficulty, where common problems occurred, and how testing with Visual AI compared with conventional assertions of values in the DOM.&lt;/p&gt;

&lt;h1&gt;
  
  
  Faster Test Creation
&lt;/h1&gt;

&lt;p&gt;I alluded to it in the test description, but test authors wrote their tests much more quickly using Visual AI. On average, coders spent 7 person-hours writing coded assertion-based tests for the Hackathon test cases. In contrast, they spent a mere 1.2 hours writing tests using Visual AI for the same test cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--525ls-eg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kuwwjv7ozxzza6drwz4y.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--525ls-eg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kuwwjv7ozxzza6drwz4y.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Interestingly, the prize-winning submitters spent, on average, 10.2 hours writing their winning submissions. They wrote more thorough conventional tests, which yielded accurate coverage when failures did occur. On the other hand, their coverage did not match the complete-page coverage they got from Visual AI. And their prize-winning Visual AI tests took, on average, only six minutes longer to write than the average across all the test engineers.&lt;/p&gt;

&lt;h1&gt;
  
  
  More Efficient Coding
&lt;/h1&gt;

&lt;p&gt;The next takeaway came from calculating coding efficiency. For conventional tests, the average participant wrote about 350 lines of code. The prize winners, whose code had greater coverage, wrote a little more than 450 lines of code, on average. This correlates with the 7 hours and 10 hours of time spent writing tests.  It’s not a perfect measure, but participants writing conventional tests wrote about 50 lines of code per hour over 7 hours, and the top winners wrote about 45 lines of code per hour over 10 hours.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yrEjr0lF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8a3hed2yy7yonsy35rza.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yrEjr0lF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8a3hed2yy7yonsy35rza.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In contrast, with Visual AI, the average coder needed 60 lines of code, and the top coders only 58. The rates stayed about the same – roughly 50 lines of code per hour for the average participant and 45 for the winners – but with far fewer lines needed, the tests are much more efficient.&lt;/p&gt;

&lt;h1&gt;
  
  
  More Stable Code
&lt;/h1&gt;

&lt;p&gt;End-to-end tests depend on element locators in the DOM to determine how to apply test conditions, such as by allowing test runners to enter data and click buttons. Conventional tests also depend on locators for asserting content in the response to the applied test conditions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JnJU69Xr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/agtiikecwh9gvepfp9xc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JnJU69Xr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/agtiikecwh9gvepfp9xc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most software engineers realize that labels and other element locators get created by software developers – who can change them through intentional changes or unanticipated side effects. An element locator using XPath can suddenly resolve to the wrong relative element after an enhancement. The same is true for labels, which can change between releases – even when there is no visible difference in user behavior.&lt;/p&gt;

&lt;p&gt;No one wants testing to overconstrain development. No one wants development to remain ignorant of testing needs. And yet, because mistakes sometimes happen, or changes are sometimes necessary, locators and labels change – resulting in test code that no longer works properly.&lt;/p&gt;

&lt;p&gt;Interestingly, when evaluating conventional tests, the average Hackathon participant used 34 labels and locators, while the Hackathon prize winners used 47 labels and locators.&lt;/p&gt;

&lt;p&gt;Meanwhile, for the Visual AI tests, the average participant used 9 labels and locators, while the winning submissions used only 8. By this conservative measure, Visual AI reduces the code’s dependency on external factors – we calculate it at 3.8x more stable (34 ÷ 9 ≈ 3.8).&lt;/p&gt;

&lt;h1&gt;
  
  
  Catching Bugs Early
&lt;/h1&gt;

&lt;p&gt;Visual AI can catch bugs early in coding cycles.  Because Visual AI depends on the rendered representations and not on the code to be rendered, Visual AI will catch visual differences that might be missed by the existing test code. For instance, think of an assertion for the contents of a text box. In this new release, the test passes because the box has the same text. However, the box width has been cut in half, causing the text to extend outside the box boundary and be obscured. The test passes, but in reality it fails. The test assumed a condition that is no longer true.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l-FVPTFd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5pp93mmi2l6ld3ztl27r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l-FVPTFd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5pp93mmi2l6ld3ztl27r.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
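&lt;p&gt;A toy model of that failure mode (the character width and box widths below are made-up numbers): the DOM assertion on the text still passes, while a geometric check catches the clipped box.&lt;/p&gt;

```python
CHAR_WIDTH_PX = 8  # assumed average character width for this sketch

def text_overflows(text, box_width_px):
    # Crude geometric check: does the rendered text exceed the box?
    return len(text) * CHAR_WIDTH_PX > box_width_px

label = "Your order has shipped"
old_box_width = 200
new_box_width = 100  # the box was cut in half in the new release

assert label == "Your order has shipped"          # DOM assertion: passes
assert not text_overflows(label, old_box_width)   # baseline: text fits
assert text_overflows(label, new_box_width)       # candidate: text clipped
```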

&lt;p&gt;Visual AI catches these differences. It will catch changes that result in different functional behavior requiring new coding. It will catch changes – like the one described above – that result in visual differences that impact users. And it will avoid flagging changes that alter the DOM but not the view or behavior from the user’s perspective.&lt;/p&gt;

&lt;h1&gt;
  
  
  Easier to Learn than Code-Based Testing
&lt;/h1&gt;

&lt;p&gt;The last thing James shared involved the learning curve for users. In general, we assumed that test coverage and score on the Hackathon evaluation correlated with participant coding skill. The average score achieved by all testers using conventional code-based assertions was 79%. After taking a 90-minute online course on Visual AI through Test Automation University, the average score for Visual AI testers was 88%.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--So2pe0-t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/odjbbkt3couqrlx5e6zq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--So2pe0-t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/odjbbkt3couqrlx5e6zq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because people don’t use visual capture every day, testers need to learn how to think about applying visual testing. But, once the participants had just a little training, they wrote more comprehensive and more accurate tests, and they learned how to run those test evaluations in Applitools.&lt;/p&gt;

&lt;h1&gt;
  
  
  What This Means For You
&lt;/h1&gt;

&lt;p&gt;James and Raja reiterated the benefits they outlined in their webinar: faster test creation, more coverage, code efficiency, code stability, early bug catching and ease of learning. Then they asked: what does this mean for you?&lt;/p&gt;

&lt;p&gt;If you use text-based assertions for your end-to-end tests, you might find clear, tangible benefits from using Visual AI in your product release flow. It integrates easily into your CI/CD or other development processes. It can augment existing tests – no rip-and-replace required. Real, tangible benefits come to many companies that deploy Visual AI. What is stopping you?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lXGgQ8aP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sljj9hbn6j4diqz7nomd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lXGgQ8aP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sljj9hbn6j4diqz7nomd.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Often, learning comes first. Fortunately, Applitools makes it really easy for you to learn Visual AI. Just take a class on &lt;a href="https://info.applitools.com/uy2G"&gt;Test Automation University&lt;/a&gt;. There is Raja’s course: &lt;a href="https://info.applitools.com/uy2D"&gt;Modern Functional Test Automation through Visual AI&lt;/a&gt;. There is Angie Jones’s course: &lt;a href="https://info.applitools.com/uy2I"&gt;Automated Visual Testing: A Fast Path To Test Automation Success&lt;/a&gt;.  And, there are others.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Learning Appium Visual Testing</title>
      <dc:creator>Michael Battat</dc:creator>
      <pubDate>Thu, 09 Apr 2020 17:37:32 +0000</pubDate>
      <link>https://dev.to/michaelvisualai/learning-appium-visual-testing-32a0</link>
      <guid>https://dev.to/michaelvisualai/learning-appium-visual-testing-32a0</guid>
      <description>&lt;p&gt;I’m learning Appium with visual testing in &lt;a href="https://www.linkedin.com/in/jlipps/" rel="noopener noreferrer"&gt;Jonathan Lipps&lt;/a&gt; course, &lt;a href="https://testautomationu.applitools.com/appium-visual-testing/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto" rel="noopener noreferrer"&gt;Automated Visual Testing with Appium&lt;/a&gt;, on &lt;a href="https://testautomationu.applitools.com/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto" rel="noopener noreferrer"&gt;Test Automation University&lt;/a&gt;. I’m familiar with testing but new to Appium, so I figure this is a great way to get started.&lt;/p&gt;

&lt;p&gt;Jonathan knows what he’s talking about when it comes to Appium. He serves as a lead developer and maintainer of the Appium project. His company, CloudGrey, consults on native mobile app development and testing. And, his company publishes the &lt;a href="https://appiumpro.com/" rel="noopener noreferrer"&gt;AppiumPro blog&lt;/a&gt;. I assume it’s best to learn from the master.&lt;/p&gt;

&lt;h1&gt;
  
  
  Learning Appium with Visual Testing – Prerequisites
&lt;/h1&gt;

&lt;p&gt;Okay, Jonathan already burst my bubble in Chapter 1. He expects me to know Appium. I’m learning Appium, so I’ll do my best to be a quick study. The course uses examples in Java, but the code can be any supported language. Python, Ruby, JavaScript, etc. all work. Jonathan also wants to make sure that I understand automated testing concepts. I do.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6p9hkpsy0kjsbnr95ugu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6p9hkpsy0kjsbnr95ugu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, I need Node.js and NPM installed, plus an Appium CLI installation. And I need software that lets me install the Appium Android dependencies – Android Studio is one solution. Jonathan recommends getting all this installed before taking his course, so I went and got a little more training and installed the software. Done.&lt;/p&gt;

&lt;p&gt;Then comes the tricky bit – installing the OpenCV libraries. OpenCV provides a library of computer vision (hence the “CV”) routines that work with Appium. As Jonathan notes, there’s an easy way and a hard way to install OpenCV. The easy way uses a direct NPM command to build the libraries. He has a Mac and ran into issues with the easy way. I have a Mac and ran into the same issues. So, Jonathan provides a second way to install the OpenCV software using Homebrew.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9qo8hmmeowz9jrraynbp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9qo8hmmeowz9jrraynbp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These are the few steps you may need to run through to get Appium up and running on your local machine. As Jonathan points out, this is all documented.&lt;/p&gt;

&lt;p&gt;On to testing.&lt;/p&gt;

&lt;h1&gt;
  
  
  Learning Appium Testing With A Demo App
&lt;/h1&gt;

&lt;p&gt;I’m used to testing apps with web element locators. In the case of Jonathan’s demo app, it’s a game. It uses a calculator paradigm with the goal of having the user achieve a certain number by pressing on buttons that perform certain operations. Like this example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fryndfu0m1iapho908tpa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fryndfu0m1iapho908tpa.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When Jonathan tried to find traditional locators, there was only one leaf node – and it wasn’t a control button. How often does this happen? Often. Your development team can choose one of many mobile application development frameworks that omit traditional element locators.&lt;/p&gt;

&lt;p&gt;The app source looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftcrylanudw9zgwqp0lwv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftcrylanudw9zgwqp0lwv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can try to hard-code your tests to screen coordinates, but what happens when you go from a small-format phone to a larger phone – or to an Android tablet? Yuck!&lt;/p&gt;

&lt;p&gt;Fortunately, by using OpenCV with Appium you can define the images of buttons that you can save. Take a screen capture, crop the button of interest, and save it in a file accessible to Appium. Later you can pass that image to Appium to find a match and take an action – click.  That’s really useful. To reiterate – with OpenCV, Appium can use stored images to find buttons and manipulate the app. That was new to me and really interesting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyrz66dpy2r7e6lfurpz0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyrz66dpy2r7e6lfurpz0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The specific command you use is:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;driver.findElementByImage(templateImage);&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzdepafwdqbhsq68b6kx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzdepafwdqbhsq68b6kx0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This command returns an ImageElement that can be used for action.  The workflow between Appium, OpenCV, your test code, and a stored image looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4iuigqrx2qcgetxck39r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4iuigqrx2qcgetxck39r.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
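&lt;p&gt;The matching idea behind &lt;code&gt;findElementByImage&lt;/code&gt; can be illustrated with a toy exact-match search over 2-D lists standing in for images – OpenCV’s real template matcher is fuzzy and far more robust than this sketch:&lt;/p&gt;

```python
def find_template(screen, template):
    # Slide the small template over the larger screen and return the
    # top-left corner of the first exact match, or None.
    th, tw = len(template), len(template[0])
    for y in range(len(screen) - th + 1):
        for x in range(len(screen[0]) - tw + 1):
            if all(screen[y + dy][x + dx] == template[dy][dx]
                   for dy in range(th) for dx in range(tw)):
                return (x, y)
    return None

# Tiny "screenshot" and a "button" cropped from it (pixel values made up).
screen = [
    [0, 0, 0, 0],
    [0, 1, 2, 0],
    [0, 3, 4, 0],
]
button = [[1, 2],
          [3, 4]]

print(find_template(screen, button))  # (1, 1)
```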

&lt;h1&gt;
  
  
  Setting Up Your Test Device as an MJPEG Server
&lt;/h1&gt;

&lt;p&gt;Part of learning Appium is learning Appium tricks.  From Jonathan’s experience, Appium’s integration with the Android native screen capture and processing can be pretty slow. It can take several seconds to capture one screen.  Fortunately, you can install an MJPEG server on your test device. Appium can process the MJPEG server image much more quickly.&lt;/p&gt;

&lt;p&gt;Jonathan uses one such piece of software to connect his Android emulator to his Appium test infrastructure. Called Screen Stream, the software serves the image of the Android device so that a person or piece of software can subscribe to a URL and see what’s on the screen. Jonathan notes that the device or emulator under test must have an IP address accessible to Appium.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frg19z3c5w8c2rhzlmkbh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frg19z3c5w8c2rhzlmkbh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have a stream set up, you need to give Appium access to that stream. Jonathan shows that you set the value of mjpegScreenshotUrl to your appropriate stream URL.&lt;/p&gt;
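&lt;p&gt;In a Python test setup, that capability might look like this (the course itself uses Java; the device name and stream URL below are placeholders for your own setup):&lt;/p&gt;

```python
# Desired capabilities for an Appium session that reads screenshots from
# an MJPEG stream instead of the slower native screen capture.
desired_caps = {
    "platformName": "Android",
    "deviceName": "Android Emulator",  # placeholder device name
    # Placeholder URL: point this at the Screen Stream server on your device.
    "mjpegScreenshotUrl": "http://192.168.1.5:8080/stream.mjpeg",
}
```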

&lt;h1&gt;
  
  
  Example Test Code
&lt;/h1&gt;

&lt;p&gt;As Jonathan mentions, his test code is written in Java, and even if Java is not a language you know, his code syntax is easy to read. So don’t let a lack of Java stand in the way of learning Appium. Jonathan spends a little time showing you the code he has created for running his app tests.&lt;/p&gt;

&lt;p&gt;His course presents the test code in easy-to-read chunks. If you’re not a Java person, you might not get all the syntax, but you’ll get why things are organized the way they are. (Note – if you’re interested in learning Java, take &lt;a href="https://testautomationu.applitools.com/java-programming-course/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto" rel="noopener noreferrer"&gt;Angie Jones’s course about the Java language&lt;/a&gt; on Test Automation University).&lt;/p&gt;

&lt;p&gt;Jonathan walks you through his test demo of an Android game that uses a calculator as the input device. And, unfortunately, there are no code hooks to drive the UI. So, Jonathan uses the image matching to drive the app. Each active button can be captured and used to drive the behavior.&lt;/p&gt;

&lt;p&gt;You can find the source code in addition to Jonathan’s test excerpts. This will help you see the tests in more detail.&lt;/p&gt;

&lt;h1&gt;
  
  
  Visual Testing with OpenCV
&lt;/h1&gt;

&lt;p&gt;Jonathan spends an entire chapter focused on OpenCV for visual testing. He distinguishes functional testing – where the goal is to find unexpected input-output behavior in the application – from visual testing – where the goal is to find visual behavior that may leave a user finding the app either inoperative or unappealing.&lt;/p&gt;

&lt;p&gt;If an app allows the user to enter two numbers and add them, a functional bug results when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Either number cannot be entered in the expected numeric fields&lt;/li&gt;
&lt;li&gt;The numeric entry fields can receive numbers, but the button click has no effect&lt;/li&gt;
&lt;li&gt;Data entry and the button click work, but the presented response does not match the expected sum&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a tester, you can also imagine other cases for a functional test: handling negative numbers, non-standard numeric formats, and non-numeric inputs.&lt;/p&gt;

&lt;p&gt;For the same app, an exclusively visual bug exists when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Number fields cannot be distinguished on a page&lt;/li&gt;
&lt;li&gt;The button cannot be distinguished on a page&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note – Appium may be able to execute a functional test successfully even when visual bugs exist, which is why you need a separate way to check for visual bugs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcipmdxfcd5n5a2d1tdde.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcipmdxfcd5n5a2d1tdde.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To catch visual bugs, you capture a screen from the mobile device and store it on your local machine or in a network repository. That captured screenshot becomes your baseline for the screen. On subsequent runs, you call OpenCV to compare the new screenshot to the saved baseline. OpenCV uses a pixel-comparison engine, and the comparison returns a value between 0 and 1, roughly analogous to the fraction of pixels unchanged.&lt;/p&gt;
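&lt;p&gt;The scoring idea can be sketched in plain Java – assuming two same-sized screenshots flattened to integer pixel arrays, the score is the fraction of pixels left unchanged (real OpenCV operates on image matrices, so this is illustrative only):&lt;/p&gt;

```java
// Illustrative sketch of a pixel-comparison score between 0 and 1:
// the fraction of pixels identical between baseline and checkpoint.
public class PixelSimilarity {
    public static double similarity(int[] baseline, int[] checkpoint) {
        if (baseline.length != checkpoint.length) {
            throw new IllegalArgumentException("screenshots must be the same size");
        }
        int unchanged = 0;
        for (int i = 0; i < baseline.length; i++) {
            if (baseline[i] == checkpoint[i]) unchanged++;
        }
        return (double) unchanged / baseline.length;
    }

    public static void main(String[] args) {
        int[] baseline   = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        int[] checkpoint = {1, 2, 3, 4, 5, 6, 7, 8, 9, 99}; // one "pixel" changed
        System.out.println(similarity(baseline, checkpoint)); // prints 0.9
    }
}
```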

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc7dvimc0dp5cyjkncdr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc7dvimc0dp5cyjkncdr4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Testing with Appium and OpenCV
&lt;/h1&gt;

&lt;p&gt;To automate his tests, Jonathan sets up his Java code to capture a screen and call OpenCV to compare the image to the baseline. For automation purposes, Jonathan sets 0.99 as his acceptance threshold. Anything scoring 0.99 or above generally has only small differences – clock, cell-signal level, etc. Anything lower likely shows a noticeable difference.&lt;/p&gt;
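&lt;p&gt;Applying that threshold in a test then reduces to a single comparison (the 0.99 value mirrors Jonathan’s choice; tune it to your own app):&lt;/p&gt;

```java
// Sketch of Jonathan's acceptance rule: scores at or above 0.99 are
// treated as noise (clock, cell-signal level), anything lower as a
// real visual difference.
public class VisualThreshold {
    static final double ACCEPTANCE = 0.99;

    public static boolean passes(double similarityScore) {
        return similarityScore >= ACCEPTANCE;
    }

    public static void main(String[] args) {
        System.out.println(passes(0.995)); // status-bar clock changed: true
        System.out.println(passes(0.93));  // a button moved: false
    }
}
```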

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj2fwo2mstzden9t9vekh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj2fwo2mstzden9t9vekh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can play with the acceptance criteria to see what works for you.&lt;/p&gt;

&lt;p&gt;One great thing you can do is visualize the differences in OpenCV. OpenCV will take both the checkpoint and the baseline screens and highlight the differences. You can use this comparison to decide what to do, either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declare the difference a bug and pass the information back to developers or&lt;/li&gt;
&lt;li&gt;Declare the difference as intentional and point future tests to the most recent capture, which becomes the new baseline.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Real-World Visual Tests with OpenCV
&lt;/h1&gt;

&lt;p&gt;Next, Jonathan demonstrates the visual capture with OpenCV. He goes through a demo app and captures two screen images, which he saves as baselines. Next, he runs the app with screen modifications, captures the new images, and runs his pixel comparison. Each screen whose comparison score falls below 0.99 fails his test.&lt;/p&gt;

&lt;p&gt;Jonathan shows the process you go through using OpenCV. The first time you run the test, no screen capture exists, so you capture your initial baseline image. The second time through, you capture screens and your tests either pass or fail. The run ends at the first failure, and you must inspect the comparison image to see where the failure occurred.&lt;/p&gt;
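&lt;p&gt;That first-run/second-run flow can be sketched as a small save-or-compare helper (the file name and the exact-equality check are illustrative – the real workflow compares with the OpenCV similarity score rather than byte equality):&lt;/p&gt;

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

// Sketch: save the screenshot as the baseline on the first run,
// compare against the saved baseline on later runs.
public class BaselineStore {
    public static boolean checkScreen(Path baselinePath, byte[] screenshot) {
        try {
            if (!Files.exists(baselinePath)) {
                Files.write(baselinePath, screenshot); // first run: create baseline
                return true;
            }
            // Later runs: exact comparison here; real code would score with OpenCV.
            return Arrays.equals(Files.readAllBytes(baselinePath), screenshot);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Helper to create a temp location for a baseline image.
    public static Path tempBaseline() {
        try {
            return Files.createTempDirectory("shots").resolve("home_screen.png");
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Path baseline = tempBaseline();
        byte[] original = {1, 2, 3};
        byte[] modified = {1, 2, 9};
        System.out.println(checkScreen(baseline, original)); // first run: true
        System.out.println(checkScreen(baseline, original)); // unchanged: true
        System.out.println(checkScreen(baseline, modified)); // modified app: false
    }
}
```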

&lt;p&gt;In the course, Jonathan ran through his sample application. He ran a test that included two screen captures; his code saved each screenshot as a baseline if no prior baseline existed. Each screen capture served as its own unique test. After running his test against the baseline application, he pointed to a modified application and reran his tests using the visual comparison as the validation step. Each screen in the second app differed slightly from the initial capture. He then walked through the results.&lt;/p&gt;

&lt;h1&gt;
  
  
  Reviewing the Appium OpenCV Test Results
&lt;/h1&gt;

&lt;p&gt;The first test run stopped with one failure, because Appium reports the run as failed at the first mismatch. In Jonathan’s first screen, the visual difference occurred because he had added a new item to the screen. Because he expected the new behavior, he could set the most recent screen capture as the new baseline for subsequent testing.&lt;/p&gt;

&lt;p&gt;Then, he had to rerun his test. Again, the test run stopped with an image mismatch failure. While the first image passed with the new baseline, the second image now failed. Again, comparing the images showed the highlighted difference, which turned out to be a visually unacceptable change in the location of an action button. In this case, the check image needed to be sent to developers, as this image contained a bug. This test would continue to fail until the bug got resolved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh82z1lc8qbgzzjsdbns0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh82z1lc8qbgzzjsdbns0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As Jonathan shows, image comparison exposes usability changes that you may or may not want. More importantly, you learn some of the challenges of using the OpenCV integration with Appium in production testing.&lt;/p&gt;

&lt;p&gt;First, you must manage images yourself. With a handful of screenshots, that’s doable; with many, it becomes a problem. Second, an image is either all good or buggy – an image containing both an acceptable and an unacceptable change cannot be partially accepted, so you need to manage that partial-acceptance workflow on your own. Finally, the whole notion of a percentage of pixel change as acceptance criteria, used to account for pixels you know will change, seems a little imprecise.&lt;/p&gt;

&lt;p&gt;With this, Jonathan moves on to the next chapter – visual testing with Appium and Applitools.&lt;/p&gt;

&lt;h1&gt;
  
  
  Appium with Applitools for Visual Testing
&lt;/h1&gt;

&lt;p&gt;Jonathan starts the next chapter by laying out the drawbacks of using Appium with OpenCV for visual testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenCV can be challenging to install&lt;/li&gt;
&lt;li&gt;With OpenCV, you need to manage your own baselines&lt;/li&gt;
&lt;li&gt;Self-managed images – you must also manage the comparison images&lt;/li&gt;
&lt;li&gt;Image-compare errors – you will see reported failures that are not real failures&lt;/li&gt;
&lt;li&gt;No support for full-screen comparisons with scrolling screens – something we hadn’t yet covered, but you must manually scroll and capture screens and sync them up. Yuck.&lt;/li&gt;
&lt;li&gt;You must maintain the image comparison code, which will require work from release to release.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, funny enough, Jonathan focuses on these limitations of using Appium with OpenCV, and then he proceeds to show how using Applitools overcomes each of these.&lt;/p&gt;

&lt;p&gt;Adding the Applitools code is easy. You don’t need to manage a whole image-comparison library; Applitools does it for you. All you need to do is call the Applitools Eyes SDK. After that, you link to your valid Applitools API key (best read from a system environment variable rather than hardcoded, so you don’t have to worry about publishing your key with your code) and start integrating the Applitools calls into your Appium tests. Jonathan also goes through the test-close steps, including the call to Applitools that aborts an open session in case, for some reason, your test logic skipped a close.&lt;/p&gt;
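&lt;p&gt;Reading the key from the environment is a one-liner; here is a minimal sketch (the variable name APPLITOOLS_API_KEY is the common convention but an assumption here – match whatever your CI actually exposes, and pass the resolved key to the Eyes SDK before opening a test):&lt;/p&gt;

```java
// Sketch: resolve the Applitools API key from the environment so the
// key never appears in source control. The variable name is an
// assumption -- use whatever your CI defines.
public class ApiKeyConfig {
    public static String resolveApiKey() {
        String key = System.getenv("APPLITOOLS_API_KEY");
        if (key == null || key.isEmpty()) {
            throw new IllegalStateException(
                "APPLITOOLS_API_KEY is not set; refusing to run visual tests");
        }
        return key;
    }

    public static void main(String[] args) {
        System.out.println(System.getenv("APPLITOOLS_API_KEY") != null
                ? "API key configured" : "API key missing");
    }
}
```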

&lt;p&gt;After that, it’s easy to add the Applitools Eyes calls into your test code. Instead of OpenCV calls whose comparisons you run and validate manually, the calls to the Applitools Eyes SDK pull screenshots into Applitools. After your tests run, you check out the results.&lt;/p&gt;

&lt;h1&gt;
  
  
  Looking at Applitools Results
&lt;/h1&gt;

&lt;p&gt;Here you learn the differences between running Appium with OpenCV and running Appium with Applitools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmn98zrobp4yd1xrhkw1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmn98zrobp4yd1xrhkw1w.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, your tests run through to completion. Your initial tests, as with Appium and OpenCV, run through to completion and capture baseline images. Unlike with OpenCV, Applitools gives you a useful UI that shows all the captured images for that run – you don’t have to manage them on your own. When you run your comparison test, the Applitools tests run through to completion while capturing all differences, so you don’t end up executing multiple test runs to ensure that each visual difference gets captured.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkvmdgyig7r4faij21y06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkvmdgyig7r4faij21y06.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Applitools UI, you see all the tests that have been run; any with differences are identified as “Unresolved” rather than immediately classified as failures, because a difference could be expected or unexpected. You give a difference either a thumbs-up – meaning the difference was expected (which turns the checkpoint into the new baseline) – or a thumbs-down – meaning it was a failure.&lt;/p&gt;

&lt;p&gt;One useful thing to note is that you can have Applitools ignore the status bar of the phone entirely and focus just on the app. So, unlike the OpenCV screen captures, which capture the full screen and require you to do some postprocessing, the Applitools capture can select just the app in question.&lt;/p&gt;

&lt;p&gt;Jonathan shows how the Applitools UI lets you automate the visual capture and streamline your testing. You can review capture history over time for different tests, and see what you have added or changed. It can give you a history of how your app looks over time.&lt;/p&gt;

&lt;h1&gt;
  
  
  Advanced Features – Learning Appium with Applitools
&lt;/h1&gt;

&lt;p&gt;Now, Jonathan helps you learn some of the advanced features of Appium with Applitools. First, Applitools lets you capture the full application page – not just the visible screen. You invoke the method:&lt;/p&gt;

&lt;p&gt;eyes.setForceFullPageScreenshot(true);&lt;/p&gt;

&lt;p&gt;before your call to eyes.open() to start your test, and Applitools will automatically grab a full-page screenshot. Jonathan shows the difference between the prior and current tests: the prior tests only looked at the first screen of the app, while Applitools could now see a range of additions to the app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv6dqe4of77polad3u1z1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv6dqe4of77polad3u1z1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, two different runs of the same home screen are captured. The one on the right, the checkpoint, is actually a longer screen and has more content than the baseline on the left.  A full capture sometimes finds new content that gets added between test runs, as shown here. But, often, the benefit of full-screen capture for a run is the ability to auto-scroll and stitch together a single image for comparison purposes.&lt;/p&gt;

&lt;p&gt;Next, Jonathan shows how you can use Applitools to treat different regions of a page differently during visual testing. Sometimes your captured screen includes content that will vary from test to test. Jonathan shows a mobile test of an echo screen, where a user types a test phrase and clicks an on-screen “OK” button. The mobile device then displays the typed text in a different region of the screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5uhdz6nddx3c8q3qc6xb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5uhdz6nddx3c8q3qc6xb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you use a variable to define both your test input phrase and your expected output phrase, your Appium functional test will pass as long as the two phrases match. However, if you change the phrase between runs while capturing the image with Applitools, you will have a visual difference. So, Jonathan shows you how to treat a region as “ignored”: you select a region, mark it as ignorable, and Applitools excludes it from the comparison.&lt;/p&gt;

&lt;p&gt;Jonathan’s demo shows some of the power in Applitools, but the use case might be better suited to a region that changes independently of the test being run. For instance, if the test screen includes a digital clock region that shows the device time, that clock will always show a different time; when you run your tests, you might want to ignore the clock region, as it will always change and always produce an error. You likely want to run and capture multiple versions of the input-output test to handle things like text wrapping and scrolling.&lt;/p&gt;
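&lt;p&gt;Conceptually, an ignored region amounts to blanking the same rectangle in both images before comparing, so nothing inside it can fail the check. Here is a toy sketch over flattened pixel arrays (Applitools does this far more cleverly – this just illustrates the idea):&lt;/p&gt;

```java
// Sketch: mask a rectangular region in both images, then compare.
// Images are modeled as width-major int arrays; real tools work on bitmaps.
public class IgnoreRegion {
    public static int[] mask(int[] pixels, int width, int x, int y, int w, int h) {
        int[] out = pixels.clone();
        for (int row = y; row < y + h; row++) {
            for (int col = x; col < x + w; col++) {
                out[row * width + col] = 0; // blank the ignored pixel
            }
        }
        return out;
    }

    public static boolean matchesIgnoring(int[] a, int[] b, int width,
                                          int x, int y, int w, int h) {
        return java.util.Arrays.equals(mask(a, width, x, y, w, h),
                                       mask(b, width, x, y, w, h));
    }

    public static void main(String[] args) {
        // 2x2 "screens" differing only in the top-left pixel (say, a clock).
        int[] baseline   = {10, 2, 3, 4};
        int[] checkpoint = {99, 2, 3, 4};
        System.out.println(matchesIgnoring(baseline, checkpoint, 2, 0, 0, 1, 1)); // true
        System.out.println(matchesIgnoring(baseline, checkpoint, 2, 1, 0, 1, 1)); // false
    }
}
```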

&lt;h1&gt;
  
  
  Conclusions
&lt;/h1&gt;

&lt;p&gt;Wow. Appium provides powerful tools for testing mobile apps on both Android and iOS. While Jonathan’s course focuses on Android and runs its tests on an Android emulator, I can easily imagine doing similar tests on iOS.&lt;/p&gt;

&lt;p&gt;By focusing on apps that lack traditional locators, Jonathan shows how mobile device testing can depend on visual capabilities both for applying tests and for measuring results. He shows the benefits of using a package like OpenCV to add those visual capabilities, and he also shows some of OpenCV’s limitations in production visual validation: the challenge of managing images manually, and the fact that the first visual failure in an OpenCV run fails the whole test run without collecting the rest of the visual differences.&lt;/p&gt;

&lt;p&gt;Finally, Jonathan does a great job showing how Applitools overcomes a lot of those OpenCV limitations and actually provides a valuable tool for production test automation.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>codenewbie</category>
    </item>
    <item>
      <title>How do you bridge the gap between quality engineering and product management?</title>
      <dc:creator>Michael Battat</dc:creator>
      <pubDate>Thu, 02 Apr 2020 17:41:34 +0000</pubDate>
      <link>https://dev.to/michaelvisualai/how-do-you-bridge-the-gap-between-quality-engineering-and-product-management-1lee</link>
      <guid>https://dev.to/michaelvisualai/how-do-you-bridge-the-gap-between-quality-engineering-and-product-management-1lee</guid>
      <description>&lt;p&gt;In October 2019, Evan Wiley from &lt;a href="https://www.capitalone.com/about/corporate-information/"&gt;Capital One&lt;/a&gt; presented on this topic for an &lt;a href="https://applitools.com/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=request-a-demo&amp;amp;utm_campaign=&amp;amp;utm_subgroup=devto"&gt;Applitools&lt;/a&gt; webinar. Evan introduced a number of cool tools that bridge the gap between the product specified by the product managers and the nuts and bolts of testing. &lt;/p&gt;

&lt;p&gt;Evan Wiley spent six years in Quality Engineering at Capital One before moving into product management. In his engineering time, Evan discovered that product managers and quality engineers share complementary views on their products. Product managers envision behaviors that quality engineers execute in their tests.&lt;/p&gt;

&lt;p&gt;Evan experienced this relationship directly when he was invited by product managers to attend “empathy interviews.” In these interviews, team members speak with customers to understand the customer’s environment, needs, fears, and expectations. In attending these interviews, Evan heard first-hand about the lives of customers who were using the results of his work. These interviews both informed his work in quality engineering and later fueled his move to product management.&lt;/p&gt;

&lt;h1&gt;
  
  
  What Is Quality Engineering?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VIgvMPcs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2a3s08ravki6efe1fprs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VIgvMPcs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2a3s08ravki6efe1fprs.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Evan described the work of quality engineering as finding bugs within products early and often. Noting that the job varies from organization to organization and company to company, Evan described the responsibilities including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manual testing – ensuring that the product can be exercised and behaves as expected&lt;/li&gt;
&lt;li&gt;Test automation – creating automated tests that can be run to assure behavioral coverage without manual intervention&lt;/li&gt;
&lt;li&gt;Production testing – verifying behavior with a production-ready product&lt;/li&gt;
&lt;li&gt;Test case design – ensuring that tests cover both positive cases (normal function) and negative cases (handling problematic inputs, internal inconsistencies, and other errors)&lt;/li&gt;
&lt;li&gt;Test execution – running tests and reporting results&lt;/li&gt;
&lt;li&gt;Penetration testing – running tests and reporting results for tests based on known threat vectors&lt;/li&gt;
&lt;li&gt;Accessibility testing – ensuring that customers with visual and other disabilities can navigate and use the product successfully&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Evan noted that the nature of this work changes as products, product delivery platforms, and customer environments evolve.  And, things change constantly.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Product Management?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vZF__ssS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/q2aijhfhycjfb0r7aemn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vZF__ssS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/q2aijhfhycjfb0r7aemn.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Evan next dove into describing the role of product management. Frankly, describing product management can take a two-day course, and even then not cover what everyone thinks product management might do at their company. I know this as I remain a poorly-hidden closet product manager.&lt;/p&gt;

&lt;p&gt;Evan does not try to describe product management comprehensively. Instead, he focuses on how empathy interviews drive product managers to make product decisions.  Primarily, Evan says, empathy interviews help product managers become the “voice of the customer.”&lt;/p&gt;

&lt;p&gt;One part of empathy interviews guides testing. For example, does your test environment match the customer’s environment? Do your test conditions match the conditions the customer actually works under?&lt;/p&gt;

&lt;p&gt;A larger set of questions helps product managers understand problems their customers try to solve, how to prioritize solutions to these problems, which of these problems need to be higher on the near-term backlog versus further back, and how customers might respond to different marketing messages.&lt;/p&gt;

&lt;p&gt;And, when product managers take their products to the field, they can validate a customer’s actual reaction against expectations from the empathy interviews, making future empathy interviews even more effective. The initial empathy interview forms the basis for product management key performance indicators.&lt;/p&gt;

&lt;p&gt;Ultimately, the needs of product managers and quality engineers diverge significantly. But involving quality engineering in customer empathy interviews helps the areas where they overlap succeed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZoFkvFhn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wv2xh1v89ue3l52nrkh5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZoFkvFhn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wv2xh1v89ue3l52nrkh5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Quality Engineering and Product Manager Needs Overlap&lt;/p&gt;

&lt;h1&gt;
  
  
  Quality over Quantity
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YQA9KUnI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k8ychl3sw1itsixx8uyv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YQA9KUnI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k8ychl3sw1itsixx8uyv.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Evan spends a bit of time discussing how Capital One prioritizes quality over quantity.  Evan points out that the company makes this choice, and that the decision permeates the company culture.  At Capital One, that goal permeates all engineering – not just quality engineering.&lt;/p&gt;

&lt;p&gt;Evan explains with an example from another company.&lt;/p&gt;

&lt;p&gt;“If there’s a quality engineer at Facebook, they might have a lot of test cases, and in some cases, they can stand in for the end-user with the knowledge of being a user. So, if they’re testing, say, logout from Facebook, they might think, ‘I can make that simpler for an end-user because I’d want it to be.’ And this insight empowers the quality engineers to work directly with the developers to tweak a behavior for an end-user.”&lt;/p&gt;

&lt;p&gt;To this end, Evan sees the whole of engineering contributing to the quality-over-quantity culture. Hiring processes, skill selection, and testing approaches involve transparency that allows for a breadth of experience and diversity of perspectives on the team.&lt;/p&gt;

&lt;p&gt;And, this mix of backgrounds leads to cross-training product managers with quality engineers. Bringing both groups together inside Capital One leads, in Evan’s perspective, to better outcomes for customers.&lt;/p&gt;

&lt;p&gt;Evan gave the example of testing a set of features across multiple browsers alongside product managers. He was able to show the product managers where the different browsers handled certain functions differently as a way to build the culture of quality at Capital One.&lt;/p&gt;

&lt;h1&gt;
  
  
  Gherkin Scenarios
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y_Tn3MyC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k9at84p62duf14ax64z1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y_Tn3MyC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k9at84p62duf14ax64z1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, Evan demonstrated the use of Gherkin scenarios for writing stories that describe the behavior of a product. If you don’t know Gherkin, its basic keywords are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scenario&lt;/li&gt;
&lt;li&gt;Given&lt;/li&gt;
&lt;li&gt;When&lt;/li&gt;
&lt;li&gt;Then&lt;/li&gt;
&lt;li&gt;And&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, for example, Evan talks about the Google home page. He imagines the product manager writes something like this:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iFvU-bwc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n3gt7wedrinzwdqfm390.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iFvU-bwc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n3gt7wedrinzwdqfm390.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
These scenarios have several useful properties.&lt;/p&gt;
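&lt;p&gt;A scenario of the shape Evan describes might read like this (the wording is illustrative, not Evan’s exact slide text):&lt;/p&gt;

```gherkin
Scenario: Basic Google search
  Given I am on the Google home page
  When I enter valid text into the search field
  And I click the "Google Search" button
  Then I see a page of search results for my text
```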

&lt;p&gt;First, they help the product manager describe the detailed behavior a user will experience before any software gets written. Product managers can validate these stories with prospective customers and identify any issues that might impact the behavior of the product.&lt;/p&gt;

&lt;p&gt;Second, developers get an understanding of the intended user experience and outcome to plan their coding activity.&lt;/p&gt;

&lt;p&gt;Third, engineers can use their experience to determine scenarios that might not have been described and what should be the intended behavior in those situations.  And, engineers can ask relevant questions involving both design as well as behavior.&lt;/p&gt;

&lt;h1&gt;
  
  
  A Gherkin Example
&lt;/h1&gt;

&lt;p&gt;Imagine some questions that come from the scenario listed above. Here are a few:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How much time should the user experience between clicking the “Google Search” button and getting a response back?&lt;/li&gt;
&lt;li&gt;What happens when the user has 600ms latency between the browser and server?&lt;/li&gt;
&lt;li&gt;Since the scenario specifies that the test uses valid text for a Google search, what counts as valid text?&lt;/li&gt;
&lt;li&gt;What happens in the scenario where the user enters invalid text?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These questions lead to more scenarios, which lead to more complete product specifications, which lead to more complete products. The larger number of scenarios also leads to more complete testing.  Quality engineers read these scenarios and begin planning their acceptance tests based on these different scenarios and conditions.&lt;/p&gt;

&lt;p&gt;So, much of the engineering-product management conversation ends up as quality engineering talking with product management about scenarios – tightening the connection between product management and quality engineering.&lt;/p&gt;

&lt;p&gt;Evan did not talk about it, but a tool like Cucumber reads in Gherkin scenarios to help build test automation.&lt;/p&gt;

&lt;h1&gt;
  
  
  Visual Validation Baselines
&lt;/h1&gt;

&lt;p&gt;From there, Evan moved on to talk about visual validation. And, for visual validation, Evan talked about Applitools – a tool he uses at Capital One.&lt;/p&gt;

&lt;p&gt;First, Evan spoke about the idea of user acceptance testing. When you run through the scenarios with product managers, you will end up with the screens that were described. You want to capture the screen images and go through them with product managers to make sure they meet the needs of the customers as understood by the product management team.&lt;/p&gt;

&lt;p&gt;So, part of the testing involves capturing images, and that means following your Gherkin scenarios to make sure you capture all the right steps. Evan showed some examples based on a Google page and describing how those test steps in the Gherkin scenarios became captured visual images.  Evan pointed out that these visual images begin to define how the application should behave – a baseline for the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q_DcuOiC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2ktpq17qlwug7h9jng5c.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q_DcuOiC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2ktpq17qlwug7h9jng5c.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you go through the different pages, you can complete the baseline acceptance tests. Once you have a saved good baseline, you know the state of an accepted page going forward.&lt;/p&gt;

&lt;p&gt;If you find a behavior you don’t like, you can highlight the behavior and share it with the developers.&lt;/p&gt;

&lt;p&gt;You can find problems during visual testing that do not appear in development. For instance, someone realizes you need to test against some smaller viewport sizes. Or, you test a mobile app, but not the mobile browser version.&lt;/p&gt;

&lt;p&gt;So, you build your library of baseline images for each test condition. You make sure to include behaviors on target browsers, operating systems, and viewport sizes in your Gherkin scenarios. As it turns out, with Applitools, your collection of completed baselines makes your life much easier going forward.&lt;/p&gt;

&lt;h1&gt;
  
  
  Visual Validation Checkpoints
&lt;/h1&gt;

&lt;p&gt;Next, Evan dove into using Applitools for subsequent tests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_swFMkrI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1pl5q3iyev46atx2obhj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_swFMkrI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1pl5q3iyev46atx2obhj.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you develop your test cases and run them through Applitools, you have baselines. As you continue to develop your product, you continue to run tests through Applitools. Each test you run and capture in Applitools can be compared with your earlier tests. A run with no differences shows as a pass. A run with differences shows up as “unresolved.”&lt;/p&gt;
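&lt;p&gt;The pass/unresolved logic can be sketched in a few lines of Python. This is only a conceptual model – the real Applitools engine compares images with Visual AI rather than simple equality:&lt;/p&gt;

```python
# Conceptual sketch of baseline vs. checkpoint comparison. Not the real
# Applitools engine, which uses Visual AI rather than byte equality.

def compare_to_baseline(baselines, test_name, checkpoint_image):
    """Return 'new', 'passed', or 'unresolved' for one checkpoint."""
    if test_name not in baselines:
        baselines[test_name] = checkpoint_image   # first run: save as baseline
        return "new"
    if baselines[test_name] == checkpoint_image:  # no differences found
        return "passed"
    return "unresolved"                           # differences need human review

baselines = {}
print(compare_to_baseline(baselines, "login-page", "img-bytes-v1"))  # new
print(compare_to_baseline(baselines, "login-page", "img-bytes-v1"))  # passed
print(compare_to_baseline(baselines, "login-page", "img-bytes-v2"))  # unresolved
```

&lt;p&gt;Accepting an unresolved difference simply replaces the stored baseline with the new checkpoint, which is why the next run passes again.&lt;/p&gt;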

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b4JzDnQx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9m2bkg1b6l9mtovcu8ut.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b4JzDnQx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9m2bkg1b6l9mtovcu8ut.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Evan showed how to deal with an unresolved difference. He inspected one difference, saw it was due to an expected change, and accepted the difference by clicking the “thumbs-up” button in the UI. In this case, the checkpoint automatically becomes the new baseline image.  He inspected another difference. In this case, the difference wasn’t one he wanted. He clicked the “thumbs-down” button in the UI. He showed how this information becomes a “failed” test, and how it can get routed back to developers.&lt;/p&gt;

&lt;p&gt;Unlike other visual testing tools you may have used, Applitools uses computer vision to identify visual differences. The Visual AI engine can ignore rendering changes at the pixel level that do not result in user-identifiable visible differences. And, there are other capabilities for maintaining applications, such as handling changes that displace the rest of the content on pages or managing identical changes that occur on multiple pages.&lt;/p&gt;

&lt;h1&gt;
  
  
  Quality over Velocity
&lt;/h1&gt;

&lt;p&gt;Evan went back to discuss the company culture about prioritizing quality. Capital One developed an engineering culture over time to focus on quality. Any decision to emphasize delivery over quality must be documented and validated. Release decisions at Capital One end up being team decisions, as the whole team is responsible for both the content and quality of a release.  So, the entire decision to deliver quality products brings the product management, product development, and quality engineering teams together with a common purpose.&lt;/p&gt;

&lt;p&gt;Evan noted that, in his experience, other companies approach these problems in different ways. The culture at Capital One makes this kind of prioritization possible. Cross-training makes this delivery possible because cross-training makes all team members aware of priorities, approaches, and tools used to deliver quality. The result, says Evan, is a high-performing team and consistency of meeting customer expectations.&lt;/p&gt;

&lt;h1&gt;
  
  
  Questions about Quality Engineering
&lt;/h1&gt;

&lt;p&gt;A questioner asked Evan if Quality Engineering at Capital One had sole responsibility for quality. Evan said no. Evan pointed out that he spoke from his perspective, and while Quality Engineering came up with the approach to validate product quality, the whole team – product management, development, and quality engineering – participated in the testing. The approach helped the team deliver a higher quality product.&lt;/p&gt;

&lt;p&gt;Another questioner asked about the benefit of getting customer knowledge directly to Quality Engineering. That’s valuable, Evan said. For example, during an empathy interview, a customer raises a specific problem they hit when trying to execute a specific task. During the interview, the interviewer dives deeper into this issue. The result is a more complete understanding of the customer use case, the expected behavior, and the actual behavior observed. This leads to better test cases as well as future enhancements.&lt;/p&gt;

&lt;h1&gt;
  
  
  Questions about Visual Testing and Tools
&lt;/h1&gt;

&lt;p&gt;A questioner asked if Gherkin scenarios made sense in all situations. Not always, said Evan. Gherkin scenarios make great sense when describing behavior for development to create and quality engineering to test. Evan pointed to cases, such as technical debt work, for which the intended behavior may not be user-facing behavior.&lt;/p&gt;

&lt;p&gt;Another questioner asked about the value of visual testing to Capital One. Evan talked about finding ways to exercise a behavior, capture the results, and share the results with product people.  Test pass/fail results cannot capture the user experience, but visual test results do.  One example Evan gave was a web app that broke unexpectedly on a mobile browser, due to different browser behavior on a different operating system.  Without visual testing, the error condition would likely not have been caught in-house. If Capital One were only using manual tests, the condition might not have been covered if the specific device version was not included in the test conditions. With the automated visual tests, they found the problem, saved the image, and used that as a new Gherkin scenario in the next release.&lt;/p&gt;

&lt;h1&gt;
  
  
  Questions about Product Management and Quality Engineering
&lt;/h1&gt;

&lt;p&gt;Next, Evan was asked about how to integrate product management and quality engineering more closely. Evan said he wasn’t sure how to do this in the general case. At Capital One, collaboration between engineers and product managers on issue grooming – combined with the ability to capture visual behavior during a test run – improved their ability to agree on which issues needed to be addressed, in what priority, and for what purpose.&lt;/p&gt;

&lt;p&gt;Finally, Evan was asked how to get Product Management to involve engineering more closely. Evan focused on empathy interviews as ways to align engineering and product management, and Gherkin scenarios as tools to bring a common language for both development and test requirements. Evan also talked about his own transition from Quality Engineer to Product Manager – and how he went from being tool-and-outcome focused to customer-and-strategy focused.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>testing</category>
      <category>leadership</category>
    </item>
    <item>
      <title>Whole Team Testing for Continuous Delivery</title>
      <dc:creator>Michael Battat</dc:creator>
      <pubDate>Thu, 26 Mar 2020 15:16:58 +0000</pubDate>
      <link>https://dev.to/michaelvisualai/whole-team-testing-for-continuous-delivery-1cfm</link>
      <guid>https://dev.to/michaelvisualai/whole-team-testing-for-continuous-delivery-1cfm</guid>
      <description>&lt;p&gt;I just completed taking &lt;a href="https://www.linkedin.com/in/lisihocke/"&gt;Elisabeth Hocke&lt;/a&gt;’s course, &lt;a href="https://testautomationu.applitools.com/the-whole-team-approach-to-continuous-testing/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto"&gt;The Whole Team Approach to Continuous Testing&lt;/a&gt;, on &lt;a href="https://testautomationu.applitools.com/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto"&gt;Test Automation University&lt;/a&gt;. Wow! Talk about a mind-blowing experience.&lt;/p&gt;

&lt;p&gt;Mind-blowing? Well, yes. Earlier in my career, I studied lean manufacturing best practices. In a factory, lean manufacturing focuses on reducing waste and increasing factory productivity.  Elisabeth (who goes by ‘Lisi’) explains how this concept makes sense in a continuous delivery model for software.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Lean Factory
&lt;/h1&gt;

&lt;p&gt;To learn lean manufacturing, I read a book by &lt;a href="https://en.wikipedia.org/wiki/Eliyahu_M._Goldratt"&gt;Eliyahu Goldratt&lt;/a&gt; called &lt;a href="https://en.wikipedia.org/wiki/The_Goal_(novel)"&gt;The Goal&lt;/a&gt;. Written in 1984, the book explains the concept of waste as a build-up of work-in-process (WIP) inventory that cannot be turned into finished products. Goldratt describes walking into a factory filled with WIP inventory that was just piling up.&lt;/p&gt;

&lt;p&gt;As Goldratt describes the problem and wonders how to address it, he goes on a hike with his son’s boy scout troop. One member is always the slowest. When they put that scout at the end, he falls behind everyone else. When they put that scout in the middle, it becomes two groups – the fast group, then a gap, and then this scout and everyone else behind him. Finally, they put this one scout in the front of the troop, and everyone stays together.&lt;/p&gt;

&lt;p&gt;Goldratt describes the ‘aha!’ moment of realizing that the slowest process in the factory limits the speed of the factory. Everyone else might be busy building body parts or electronic harnesses, but if some step further in the assembly process can’t consume those parts as quickly as they are made, they build up waste.&lt;/p&gt;

&lt;p&gt;Goldratt concluded that the efficient factory designs its process to make products at the pace of the slowest process – that profitable operation creates finished products and not waste.&lt;/p&gt;

&lt;h1&gt;
  
  
  Continuous Delivery As Lean Factory
&lt;/h1&gt;

&lt;p&gt;Lisi spends the first chapter delving into the definition of continuous testing in the framework of continuous delivery. She quotes industry thought leaders like Bas Dijkstra, Jez Humble, Dave Farley, and Dan Ashby to get to some key ideas – namely, that continuous testing means ensuring that you are testing at every step in the continuous delivery process.  She even shares this image from Dan Ashby:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T1t0mTv4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/iuzq5cpa4bkjhynnokco.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T1t0mTv4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/iuzq5cpa4bkjhynnokco.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This diagram shows continuous delivery in a DevOps model with testing everywhere.&lt;/p&gt;

&lt;p&gt;Lisi makes the key point – success in continuous delivery means shortening feedback loops to learn early.  Every point of development and delivery needs validation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In planning – how do you test your plans? How do you validate what you’re up to?&lt;/li&gt;
&lt;li&gt;In branching – how do you make the choice about branch design?&lt;/li&gt;
&lt;li&gt;During coding – well, is it more than unit tests? And who runs the tests?&lt;/li&gt;
&lt;li&gt;In the merge – what are you testing?&lt;/li&gt;
&lt;li&gt;Etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;She focuses on the key idea that at each step you make choices and want to know if those choices work. Every choice has consequences. Which designs work? Which design choice provides you with optimal results? How do you validate the performance impacts of a certain design choice? All of these questions result in choices you can test.&lt;/p&gt;

&lt;p&gt;If you are a test professional and not in this system, you must be wondering what this means for your role. As in – what kind of testing can you do without code to validate? Or, for a developer, how can anyone test code before code completion?&lt;/p&gt;

&lt;p&gt;Lisi walks through examples that turn you from coder or tester to scientist. Everything you do becomes a test. You can run large tests or small tests – but test early and learn to get test results early.&lt;/p&gt;

&lt;h1&gt;
  
  
  Unproductive Work As Waste
&lt;/h1&gt;

&lt;p&gt;I brought up the lean factory because Lisi raises a lean manufacturing concept in her description of work in a continuous integration project. Lisi describes waste as the enemy of continuous deployment.&lt;/p&gt;

&lt;p&gt;Any work that fails to deliver value involves waste. In a factory, work that creates unfinished parts faster than the delivery of finished goods creates waste. Usually, you see waste as piled-up inventory. In software development, your processes can also create waste. Usually, though, the waste comes in the form of code in a “dev-complete” state – simply waiting to be tested. But there are other sources of waste – uncoded designs and unmerged code – work that cannot move to the next step.&lt;/p&gt;

&lt;p&gt;And, with that, we come full circle to the application team. Does your team divide the coding responsibility from the verification skill set? Do you create mini waterfalls? Or, do you build a team that tries to do something different – to reduce waste in processes, to deliver quality in software, and to deliver products more quickly to market at the pace of the team?&lt;/p&gt;

&lt;h1&gt;
  
  
  Whole Team Testing For Agile Software Delivery
&lt;/h1&gt;

&lt;p&gt;Here, Lisi turns to the Whole Team testing approach, which becomes a way of thinking about making the entire team responsible for delivering a finished product.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--10Kw81WO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1tvcou66od7t9vgxdfi4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--10Kw81WO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1tvcou66od7t9vgxdfi4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lisi uses this quote from Lisa Crispin and Janet Gregory from their 2009 book, Agile Testing: A Practical Guide for Testers and Agile Teams:&lt;/p&gt;

&lt;p&gt;“…everyone involved with delivering software is responsible for delivering high-quality software.”&lt;/p&gt;

&lt;p&gt;Lisi gives examples from her own career to back up this idea.  Her own team had gotten bogged down in undelivered code, error tickets, and general frustration. They tried the lean approach for software – “stop starting and start finishing.” Everyone pitched in to help – no matter what role they held. People helped code, test, and write automation. People helped the product owner – or another product owner. The team worked to finish what they had started.&lt;/p&gt;

&lt;p&gt;As a result, Lisi’s team built up trust and broke down barriers. While expertise might exist in pockets, they realized that delivering software required knowledge sharing and growing as a team. Testers know how to test software – but they cannot be the only people writing tests if the team hopes to deliver effectively. Lisi explained that, once the team had collaborated together once, they were prepared to do it again.&lt;/p&gt;

&lt;p&gt;So, Lisi asks us to think about our own teams and how we build and share knowledge. We might have silos. We might only have a single test automation expert. She contrasts that with her teams, which work to share knowledge and focus on getting things done.&lt;/p&gt;

&lt;p&gt;Her course offers lots of resources to help you move to a more team-oriented approach. One of those is the set of courses on Test Automation University, which can help you increase your team’s skillset in automated testing.&lt;/p&gt;

&lt;h1&gt;
  
  
  Organizing Your Team
&lt;/h1&gt;

&lt;p&gt;Lisi spends three chapters talking about organizing your team for success.&lt;/p&gt;

&lt;h1&gt;
  
  
  Working Solo
&lt;/h1&gt;

&lt;p&gt;First, she looks at organizations where everyone ‘works solo.’ Individuals do their own tasks and try to be as productive as possible. Tremendous amounts of specialization. Plenty of opportunities for waste. Why? Because individuals measure their productivity based on delivery relative to personal productivity goals.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k22NGwhR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/oz6vpdm2teafw2yo9oew.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k22NGwhR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/oz6vpdm2teafw2yo9oew.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the world of the individual practitioner, everyone works at different rates. A developer finishes unit of work ‘A’, hands ‘A’ to a tester, and begins working on unit ‘B’. The tester starts testing ‘A’ and finds a bug. When the tester hands it back, the developer has to context switch back to ‘A’ to revisit the code and remember why the bug occurred in order to fix it. Working Solo involves lots of context switching – which introduces waste.&lt;/p&gt;

&lt;p&gt;Another common source of waste comes from team imbalance. For example, your team has sufficient numbers of developers to build an app, but you haven’t hired enough testers to ensure that the app behaves as expected on all user platforms. If you constrain individuals in their silos, your team may falsely see itself forced to choose tradeoffs that increase the likelihood that you will release untested code.&lt;/p&gt;

&lt;p&gt;If your team runs solo and you reward individuals for their personal productivity, you might be surprised at how the team doesn’t meet all the schedules that matter to you.&lt;/p&gt;

&lt;h1&gt;
  
  
  Pairing
&lt;/h1&gt;

&lt;p&gt;Next, she looks at ‘pairing.’ Engineers work together as pairs. They can be colocated or work remotely, but they work together. In some cases, they share ideas and divide tasks. In the stricter styles, the person who comes up with an idea cannot also write the code – they have to explain the idea to their partner, who does the typing.  In pairing, the team shares ideas and experience – and the partners learn from each other.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2FXnQZ1o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5vzwywahteb5pcz0jcf1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2FXnQZ1o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5vzwywahteb5pcz0jcf1.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pairing helps the overall team write code that incorporates more robust and thoughtful design as two minds work together to deliver pieces. And, two people who work together learn from each other. Successful pairing builds trust and team collaboration.&lt;/p&gt;

&lt;h1&gt;
  
  
  Mobbing
&lt;/h1&gt;

&lt;p&gt;Finally, Lisi looks at ‘mobbing.’ In mobbing, a large group of people comes together to think and create. One person takes the keyboard and acts as the ‘driver.’ The rest of the group suggests ideas and must explain them clearly enough for the driver to build what they describe.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KjD6j1D7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a5ow4g2mm43u8ogis3be.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KjD6j1D7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a5ow4g2mm43u8ogis3be.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A mob with a lot of ideas can seem chaotic – but if you are deliberate in mobbing and use some thoughtful rules, your mob can be effective and generate lots of teamwork. The most important parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep everyone engaged – be in a place where you are either contributing or learning from others&lt;/li&gt;
&lt;li&gt;Use the “yes, and…” approach from improvisational theater&lt;/li&gt;
&lt;li&gt;When you have multiple ideas, try both – starting with the person with least experience (remember, it’s about learning)&lt;/li&gt;
&lt;li&gt;In the mob, everyone is learning – be kind and respectful&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  When To Use Which Approach
&lt;/h1&gt;

&lt;p&gt;Finally, Lisi goes back to the idea that there is no ‘one size fits all’ to collaboration styles. In some cases, teams can see that certain problems can be divided into solo work. In some, they will gravitate to pairs. And, at times, they will call for a mob.&lt;/p&gt;

&lt;p&gt;Her point, though, is that teams must know these approaches exist and gain experience with all of them – both to find which works best as the team’s default and to know how to use the others when necessary.&lt;/p&gt;

&lt;p&gt;The key metrics involve team productivity – not just individual productivity. If you track individual productivity without tracking team waste, you don’t know where you are building inefficiencies. And, once you begin to track team productivity, you start to see which approaches work best for you as a development organization.&lt;/p&gt;

&lt;p&gt;One hidden productivity benefit of collaboration comes from interdependence. If only one person understands customer needs, or specific test approaches, workflows depend on that person always being part of the loop. While that may seem to give those people special power of knowledge, they are also personally constrained: they cannot take a vacation or even a sick day without their absence impacting team productivity.&lt;/p&gt;

&lt;p&gt;When individuals share their knowledge, two really good things happen. As teams start tracking the flow of work through the team, they see that the expert can take time off without impacting team productivity. And, for the experts, they can continue to develop their knowledge in different areas so they don’t become a siloed specialist whose quest for new knowledge and expertise becomes blocked by solving the same problem all the time.&lt;/p&gt;

&lt;h1&gt;
  
  
  Whole Team Testing – Conclusions
&lt;/h1&gt;

&lt;p&gt;Before I get started with my other conclusions, I want to mention the amazing resources. Lisi includes a generous set of links to both paid and free resources you can use to learn more and share with your team. The course serves as a starting point to what could lead to a major change in the way you work.&lt;/p&gt;

&lt;p&gt;As I took the course, I realized that ‘Whole Team’ means more than just engineers. Lisi made it clear that everyone in the product delivery cycle plays a role. As team members collaborate and learn, the team can become more productive. To succeed, the team needs a willingness to experiment and tools to measure effectiveness.&lt;/p&gt;

&lt;p&gt;Lisi got me to realize that the factory view of waste, which I read about in The Goal, makes sense in a software context as well.  Creating software means creating value. The value for the software maker starts with the value that customers obtain – which means building high-value, defect-free experiences for customers as they use the product.&lt;/p&gt;

&lt;p&gt;Similarly, I had not thought about experimentation in ways to get faster team delivery, and that group collaboration styles could impact team productivity. I have worked on projects that tracked team velocity. I could easily see adding this approach to teams in the future – and not just in software development.&lt;/p&gt;

&lt;p&gt;Lisi’s point wasn’t simply that collaboration matters during coding. I inferred that we should look for opportunities to use pairing or mobbing – even during design, planning, or release. Her point, as I saw it, was that a rigid approach to team structure limits team productivity. By adding collaboration and work style skills to a team, we increase the opportunities for the team to increase team velocity and productivity.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>devops</category>
    </item>
    <item>
      <title>The Caveats of Modern Functional Testing</title>
      <dc:creator>Michael Battat</dc:creator>
      <pubDate>Wed, 18 Mar 2020 22:14:04 +0000</pubDate>
      <link>https://dev.to/michaelvisualai/the-caveats-of-modern-functional-testing-4n1k</link>
      <guid>https://dev.to/michaelvisualai/the-caveats-of-modern-functional-testing-4n1k</guid>
      <description>&lt;p&gt;In reaching the conclusion of the course on Modern Functional Testing using Visual AI, I reach the key question:&lt;/p&gt;

&lt;p&gt;Can I use visual assertions to replace text-based assertions for my test infrastructure?&lt;/p&gt;

&lt;p&gt;In my prior blog posts, I have summarized each chapter of &lt;a href="https://www.linkedin.com/in/rajaraodv/"&gt;Raja Rao DV’s&lt;/a&gt; course on &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto"&gt;Modern Functional Testing&lt;/a&gt;, which you can take on &lt;a href="https://testautomationu.applitools.com/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto"&gt;Test Automation University&lt;/a&gt;. We have reached &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/chapter8.html?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto"&gt;Chapter 8 – Three Main Caveats&lt;/a&gt;. Both this chapter and the prior chapter help frame the real-world issues involved in modern functional testing with Visual AI. And, they help frame the answer to the question you’re undoubtedly asking yourself – will Visual AI testing work for me?&lt;/p&gt;

&lt;p&gt;Let’s discuss the caveats and then draw some conclusions.&lt;/p&gt;

&lt;h1&gt;
  
  
  Caveat 1 – Dynamic Data
&lt;/h1&gt;

&lt;p&gt;We already covered dynamic data in Chapter 7, which explained how Applitools Visual AI incorporates region-based exception handling for truly dynamic parts of a web page.&lt;/p&gt;

&lt;p&gt;If all you had was a full screen or full page snapshot, dynamic data would generate differences randomly in those dynamic regions. You could not let the app run freely – you would have to constrain its behavior.&lt;/p&gt;

&lt;p&gt;However, if you can apply different match levels to different regions, you can handle dynamic parts of a page. In one case, you can simply ignore a region. Or, you can ensure that the region layout remains structured correctly – even if the content changes.&lt;/p&gt;
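&lt;p&gt;As a rough model of region-based matching, imagine each screen as a grid of pixels plus a rectangle to ignore. The real Visual AI engine is far more sophisticated, but the idea is the same:&lt;/p&gt;

```python
# Conceptual model of an "ignore region": differences inside the rectangle
# are skipped; differences anywhere else fail the comparison.

def matches_ignoring_region(baseline, checkpoint, ignore):
    """Compare two equally sized 'screens' (lists of rows of pixels),
    skipping any pixel inside the ignore rectangle (top, left, bottom, right)."""
    top, left, bottom, right = ignore
    for y, (brow, crow) in enumerate(zip(baseline, checkpoint)):
        for x, (bpx, cpx) in enumerate(zip(brow, crow)):
            if y in range(top, bottom) and x in range(left, right):
                continue                  # dynamic region: ignore it
            if bpx != cpx:
                return False              # real difference outside the region
    return True

base  = [["w", "w"], ["w", "A"]]
check = [["w", "w"], ["w", "B"]]          # bottom-right pixel changed
print(matches_ignoring_region(base, check, (1, 1, 2, 2)))  # True: change ignored
print(matches_ignoring_region(base, check, (0, 0, 1, 1)))  # False: change seen
```

&lt;p&gt;A layout-level match would follow the same pattern but compare the structure of a region rather than its exact content.&lt;/p&gt;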

&lt;p&gt;So, handling dynamic data is one caveat you already know you can overcome.&lt;/p&gt;

&lt;h1&gt;
  
  
  Caveat 2 – Browser Limitations
&lt;/h1&gt;

&lt;p&gt;The second caveat involves specific browser or app limitations.  Raja gives the example of browser-native capabilities, such as native select menus or native pop-up windows. Neither of these coding structures lends itself to snapshot capture. If you cannot capture the visible behavior, you cannot run a screen comparison.&lt;/p&gt;

&lt;p&gt;If you use these kinds of structures in your app, visual AI cannot capture the behavior. You may still need to use legacy assertions to capture this behavior.&lt;/p&gt;

&lt;h1&gt;
  
  
  Caveat 3 – Test Execution Speed
&lt;/h1&gt;

&lt;p&gt;The third caveat involves test execution speed. It takes time for a browser to render a page. On top of rendering time, visual testing may be instructed to scroll and capture an entire page – which is done a screen at a time. Scroll-and-capture can cause the entire test run for an app to take considerable time.&lt;/p&gt;

&lt;p&gt;Yes, the legacy approach can seem faster. Using code to inspect elements in the DOM runs much more quickly than scrolling and capturing. But, as Raja has pointed out, the legacy approach becomes a management headache when developers add new features or change the design in ways that produce visible differences.&lt;/p&gt;

&lt;p&gt;To address this caveat, Raja talks about smart test design. As the test engineer, you know the test states for your app. If any of your tests encounter a page in an identical state, capture that page only once. Also, to reduce execution time, run your tests in parallel as much as possible.&lt;/p&gt;
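&lt;p&gt;Both ideas – capturing an identical page state only once and running tests in parallel – can be sketched with Python’s standard library. The capture function below is a hypothetical stand-in for a real scroll-and-capture step:&lt;/p&gt;

```python
import concurrent.futures

capture_cache = {}

def capture_page(state_key):
    """Pretend to scroll-and-capture a page. Identical page states are
    cached so the expensive capture happens only once per state."""
    if state_key not in capture_cache:
        capture_cache[state_key] = f"snapshot-of-{state_key}"  # the slow step
    return capture_cache[state_key]

# Two of these four tests visit the same page state ("home").
tests = ["home", "login", "home", "checkout"]

# Run the captures in parallel to cut total execution time.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    snapshots = list(pool.map(capture_page, tests))

print(snapshots)
print(len(capture_cache))  # 3 distinct states captured, not 4
```

&lt;p&gt;The same two levers – deduplication and concurrency – are what make large visual test suites practical despite per-page rendering time.&lt;/p&gt;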

&lt;h1&gt;
  
  
  Revisiting The Course
&lt;/h1&gt;

&lt;p&gt;To sum up my series, I thought I would revisit some of the takeaways that Raja shared or that I inferred through his course.&lt;/p&gt;

&lt;h1&gt;
  
  
  Less Code To Manage
&lt;/h1&gt;

&lt;p&gt;Compared with legacy functional tests, a modern functional test with Visual AI leaves you with less code to manage. And, this matters over time, because while developers can make small changes, even small changes can have a large impact on the code you have to manage.&lt;/p&gt;

&lt;p&gt;When you develop functional tests with code assertions, you trust that your code assertions will catch a programming error that causes the expected code not to be where you expect it. You figure that locators will guide you to the proper line of HTML and that the existence of the value related to that locator describes the screen behavior.&lt;/p&gt;

&lt;p&gt;Except, of course, it is possible to have locators remain the same and cause an app to become unusable. For example, your developers can introduce a small CSS change. That change does not modify your locators, but it does make your web app unusable. Would your existing legacy code expose the impact of overlapping, hidden, or invisible content?&lt;/p&gt;

&lt;p&gt;By reducing dozens of code assertions on a single page to a single visual assertion, you reduce the amount of code you have to manage. Visual assertions will catch your visual failure modes as well as your functional failure modes.  And, if developers fail to tell you about a change or new features, your visual assertions will expose those changes (and, maybe, help improve your team communications).&lt;/p&gt;

&lt;h1&gt;
  
  
  Covering More Cases
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VAzTzd6r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/w9ir99socqv6ie049nu7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VAzTzd6r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/w9ir99socqv6ie049nu7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next benefit we discussed involved test cases that are problematic for legacy functional tests. These included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Testing tables, where table data can change in large chunks through ordering and filtering. You can code your tests by following the code logic – and discover that your test instructions validate code that, in fact, does not match user expectations.&lt;/li&gt;
&lt;li&gt;Running data-driven testing, where it becomes too easy to code only for expected behavior and miss unintended or uncommunicated changes.&lt;/li&gt;
&lt;li&gt;Handling dynamic content, which can change from test run to test run or app instance to app instance. In such cases, you either don’t bother testing, or you write code to test sections and structure of the page and ignore visual representation completely.&lt;/li&gt;
&lt;li&gt;Testing complex HTML structures – like iFrames. These might be difficult to navigate for web element locators, and your code might be unwieldy to maintain over the long term.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For these kinds of test cases, text-based assertions are either difficult to code, difficult to recode when changes occur, or impossible to code in the first place.  If you use these types of structures in your code, you are less likely to adopt functional or layout changes – as those changes introduce a test code maintenance cost in addition to the direct coding cost.&lt;/p&gt;

&lt;p&gt;Visual assertions handle all these cases. Tables become a single assertion instead of a set of coded assertions. Data-driven testing loses its code dependency and can catch unintended behavior or visual changes. You can contain dynamic content and treat it as static snapshot content – or treat those regions as ignore regions. And, finally, snapshots can handle more complex HTML code.&lt;/p&gt;

&lt;h1&gt;
  
  
  Faster To Develop, Easier to Maintain
&lt;/h1&gt;

&lt;p&gt;With less code to write and manage, visual testing with Visual AI simplifies your test development process. Exercise a test step – eyes.checkWindow(). Exercise another test step – eyes.checkWindow(). You begin to manage where you capture screens – and that management becomes the skill set you care about, rather than maintaining test code for all the visual elements you want to verify (and finding the ones you missed).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4so7tON---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4si0oav4xkvoefmncqc3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4so7tON---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4si0oav4xkvoefmncqc3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have baseline images, each subsequent run becomes a new checkpoint. All identical images pass. Any changes between the checkpoint and baseline are flagged as “Unverified.” You inspect those differences and either accept or reject the changes. You can see the differences on-screen – either side-by-side or overlapping. That makes it easy to determine which version you are seeing and whether to approve or reject the change.&lt;/p&gt;
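
&lt;p&gt;The baseline workflow can be modeled in a few lines of plain JavaScript (a toy model, not the Applitools API – snapshots here are just strings standing in for captured images):&lt;/p&gt;

```javascript
// First run saves a baseline; later runs diff against it; a human
// accepts or rejects each flagged difference.
function review(baselines, pageName, checkpoint) {
  if (!(pageName in baselines)) {
    baselines[pageName] = checkpoint;   // first run: save as the baseline
    return "baseline saved";
  }
  if (baselines[pageName] === checkpoint) return "passed";
  return "unverified";                  // a human must accept or reject
}

function accept(baselines, pageName, checkpoint) {
  baselines[pageName] = checkpoint;     // an accepted change becomes the new baseline
}

const baselines = {};
console.log(review(baselines, "home", "v1"));  // "baseline saved"
console.log(review(baselines, "home", "v1"));  // "passed"
console.log(review(baselines, "home", "v2"));  // "unverified"
accept(baselines, "home", "v2");
console.log(review(baselines, "home", "v2"));  // "passed"
```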

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3R_oejqe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/j2geh8yqqvsgqot4wcjk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3R_oejqe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/j2geh8yqqvsgqot4wcjk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We spent some time reviewing the Automated Maintenance feature. Automated Maintenance lets you maintain tests by finding a change on a single screen, accepting or rejecting it, and then applying the same accept or reject to all identical changes found within your test set. If you introduce a new menu item to all pages of your app, Automated Maintenance lets you handle that change quickly.&lt;/p&gt;

&lt;h1&gt;
  
  
  Exception Handling
&lt;/h1&gt;

&lt;p&gt;We spent Chapter 7 reviewing how to handle dynamic data and other exceptions. You saw how you can easily capture regions and treat them differently from the rest of the capture. You can ignore regions, just validate the layout, or just validate the content. Be careful with exceptions – they will persist as you update the app, and may not always be relevant for an upgraded app.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusions
&lt;/h1&gt;

&lt;p&gt;I started taking Raja’s course as a way to figure out what you would experience if you were to take the course. As an Applitools team member, I know more than a novice – but I don’t have all the experience that some test engineers bring to application testing.&lt;/p&gt;

&lt;p&gt;We discussed internally what we thought you might take away from this course – and what might hold you back from trying Visual AI. We recognized that anyone who thinks of functional testing as entirely code-based entry, execution, and assertion would think of visual testing as a nice-to-have add-on to their functional tests. But the more we thought about the limitations of functional testing with code assertions, the more we realized we had identified the problem plaguing application testing.&lt;/p&gt;

&lt;p&gt;In my subsequent conversations with QA practitioners, I realized that everyone has built or used some kind of snapshot tool in their QA process for years. As one of my former colleagues noted, he used snapshot comparison at our prior employer to figure out if the developers had introduced new features without telling QA.&lt;/p&gt;

&lt;p&gt;However, no one had used visual snapshots in production test automation, because no one could automate the visual testing process with any degree of accuracy. Automated tools reported too many false positives. Visual testing indicated potential issues but could not be relied upon in production test automation. So, visual testing could only be a nice-to-have.&lt;/p&gt;

&lt;p&gt;We think we have built that production-ready test automation tool that increases your productivity as a tester. I hope you have gotten interested enough in Applitools to add Visual AI to your functional testing process. Depending on your test language and test infrastructure of choice, you can find a tutorial on applitools.com and even a course on visual testing in Test Automation University. &lt;/p&gt;

</description>
      <category>testing</category>
      <category>codenewbie</category>
      <category>functional</category>
    </item>
    <item>
      <title>How to use Visual Validation Testing with Dynamic Data</title>
      <dc:creator>Michael Battat</dc:creator>
      <pubDate>Thu, 12 Mar 2020 18:34:43 +0000</pubDate>
      <link>https://dev.to/michaelvisualai/how-to-use-visual-validation-with-dynamic-data-4k78</link>
      <guid>https://dev.to/michaelvisualai/how-to-use-visual-validation-with-dynamic-data-4k78</guid>
      <description>&lt;p&gt;You’re running functional tests with visual validation, and you have dynamic data. Dynamic data looks different every time you inspect it. How do you do functional testing with visual validation, when your data changes all the time?&lt;/p&gt;

&lt;p&gt;I arrived at &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/chapter7.html?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto" rel="noopener noreferrer"&gt;Chapter 7&lt;/a&gt; of &lt;a href="https://www.linkedin.com/in/rajaraodv/?utm_term=&amp;amp;utm_source=web-referral&amp;amp;utm_medium=blog&amp;amp;utm_content=blog&amp;amp;utm_campaign=&amp;amp;utm_subgroup=" rel="noopener noreferrer"&gt;Raja Rao DV’s&lt;/a&gt; course on &lt;a href="https://testautomationu.applitools.com/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto" rel="noopener noreferrer"&gt;Test Automation University&lt;/a&gt;, &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto" rel="noopener noreferrer"&gt;Modern Functional Test Automation Through Visual AI&lt;/a&gt;. Chapter 7 discusses dynamic data – content that changes every time you run a test.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7ceaep2x1bdt6zhuzu30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7ceaep2x1bdt6zhuzu30.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Dynamic data pervades client screens. Just look at the screenshot above. Digital clocks. Wireless signal strength. Location services. Alarm settings. Bluetooth connectivity. All these elements change the on-screen pixels, but don’t reflect a change in functional behavior.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2d042t09hoj3fwyr112h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2d042t09hoj3fwyr112h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This bank website updates the time until the local branch closes, in case you want to visit the branch. That time information will change on the screen. And Visual AI captures visual differences. So, how do you automate tests that will contain an obvious visual difference?&lt;/p&gt;

&lt;p&gt;Dynamic regions can be the rule, rather than the exception, in web apps. But, for the purposes of visual validation, dynamic elements on-screen comprise the group of test exceptions. You need a way to handle these exceptions.&lt;/p&gt;

&lt;h1&gt;
  
  
  Match Levels
&lt;/h1&gt;

&lt;p&gt;Visual validation depends on having a good range of comparison methods, because dynamic data can otherwise trigger false differences. Visual AI groups pixels into visual elements and compares elements to each other. The strictness you use in your comparison is called the match level.&lt;/p&gt;

&lt;p&gt;Applitools Visual AI determines each visual element’s relative location, boundary, and properties (colors, contents, etc.). If there is no prior baseline, these elements are saved as the baseline. Once a baseline exists, Applitools checks the checkpoint image against the baseline.&lt;/p&gt;

&lt;p&gt;Raja introduces three kinds of match levels for Applitools to compare your checkpoint against your baseline.  You can use these match levels to inspect a subset of a page, a screenful, or an entire web page.  Here are the three main match levels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Strict” – Visual AI distinguishes location, dimension, color, and content differences as a human viewer would.&lt;/li&gt;
&lt;li&gt;“Content” – Visual AI distinguishes location, dimension, and content differences. Color differences are ignored as long as the content can be distinguished. Imagine wanting to see the impact of a global CSS color change.&lt;/li&gt;
&lt;li&gt;“Layout” – Visual AI distinguishes location and dimension and ignores content like text and pictures. This match level makes it easy to validate the layout of shopping and publication sites with a consistent layout and dynamic content.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You choose the match level appropriate for your page. If you have a page with lots of changing content, you choose “Layout” – which checks the existence and position of regions but ignores their content. If you just made a global color change, you use “Content.” In most cases, you use “Strict.”&lt;/p&gt;

&lt;p&gt;You set the match level in your call to open Applitools Eyes.  If you don’t specify a match level, it defaults to “Strict.”&lt;/p&gt;
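
&lt;p&gt;As a thought experiment (plain JavaScript modeling the idea, not the real comparison engine – the element fields are invented), you can picture each match level as comparing a different subset of an element’s properties:&lt;/p&gt;

```javascript
// Each visual element is described by position, size, color, and text.
// "layout" checks only position and size; "content" also checks text;
// "strict" checks everything.
function matches(level, a, b) {
  const samePlace = [a.x === b.x, a.y === b.y, a.w === b.w, a.h === b.h].every(Boolean);
  if (level === "layout") return samePlace;
  if (level === "content") return samePlace ? a.text === b.text : false;
  return [samePlace, a.text === b.text, a.color === b.color].every(Boolean);
}

const base    = { x: 0, y: 0, w: 100, h: 40, color: "blue", text: "Buy now" };
const recolor = { x: 0, y: 0, w: 100, h: 40, color: "red",  text: "Buy now" };
const reword  = { x: 0, y: 0, w: 100, h: 40, color: "blue", text: "Sold out" };

console.log(matches("strict",  base, recolor)); // false – the color change is flagged
console.log(matches("content", base, recolor)); // true  – the color change is ignored
console.log(matches("layout",  base, reword));  // true  – the text change is ignored
```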

&lt;h1&gt;
  
  
  Handling Exceptions – Regions
&lt;/h1&gt;

&lt;p&gt;We like to think of our applications on a page-by-page basis. Each locator points to a unique page that behaves a certain way. In some cases, the relevant application content resides on a single screen. Often, though, applications and pages can extend beyond the bottom of the visible screen. Occasionally, content extends wider than the visible screen as well.&lt;/p&gt;

&lt;p&gt;By default, Applitools captures a screenful – the current viewport. Raja covered this code specifically in the prior chapters when he showed us how to use:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;eyes.checkWindow();&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Using the same command, with the “fully” option, you can capture the full page, not just the current viewport. Assuming the page scrolls beyond the visible screen, you can have Applitools scroll down and across all the screens and stitch together a full page. So you can compare full pages of your application, even if it takes several screens to capture the full page.&lt;/p&gt;
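
&lt;p&gt;The stitching idea can be sketched in a few lines of plain JavaScript (a toy model – a “page” here is just an array of pixel rows, and the viewport is a fixed number of rows):&lt;/p&gt;

```javascript
// Scroll one viewport at a time and stitch the slices back into a
// single full-page capture.
function captureFullPage(pageRows, viewportHeight) {
  const stitched = [];
  let top = 0;
  while (top !== pageRows.length) {
    const slice = pageRows.slice(top, top + viewportHeight); // one "screenful"
    stitched.push(...slice);
    top += slice.length;
  }
  return stitched;
}

const page = ["row0", "row1", "row2", "row3", "row4"]; // a 5-row page
const full = captureFullPage(page, 2);                 // a 2-row viewport
console.log(full.length); // 5 – the whole page, not just one viewport
```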

&lt;p&gt;Be aware that the default comparison uses strict mode. You can choose a different mode for your comparison.  And, you can handle exceptions with regions.&lt;/p&gt;

&lt;p&gt;So, now that you know that you can instruct Applitools to capture a full page, or a viewport, what happens when you have dynamic data, or other parts of a page that could change? You need to identify a region that behaves differently.&lt;/p&gt;

&lt;p&gt;Applitools adds the concept of “regions.” As Raja describes, “region” describes a rectangular subset of the screen capture – identified by a starting point X pixels across and Y pixels down relative to the top of the page, plus W pixels wide and H pixels high.&lt;/p&gt;
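
&lt;p&gt;A minimal sketch of that definition in plain JavaScript (not the SDK’s region type – just the X, Y, W, H rectangle Raja describes, plus a helper that tests whether a pixel falls inside it):&lt;/p&gt;

```javascript
// A region is an offset (x, y) from the top-left of the page plus a
// width and height. A pixel is inside if it falls within both spans.
function inRegion(region, px, py) {
  const inX = [px >= region.x, region.x + region.w > px].every(Boolean);
  const inY = [py >= region.y, region.y + region.h > py].every(Boolean);
  return [inX, inY].every(Boolean);
}

const adBanner = { x: 10, y: 20, w: 300, h: 50 };
console.log(inRegion(adBanner, 15, 25));  // true  – inside the banner
console.log(inRegion(adBanner, 5, 25));   // false – left of the banner
console.log(inRegion(adBanner, 15, 100)); // false – below the banner
```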

&lt;h1&gt;
  
  
  Control Region Comparisons
&lt;/h1&gt;

&lt;p&gt;Once you have a region, you can use one of the following selections to control the inspection of that region:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuoq7u2sxyb67jnqyqxyy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuoq7u2sxyb67jnqyqxyy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ignore – ignore the region completely. Its contents do not matter when identifying differences. Useful for counters.&lt;/li&gt;
&lt;li&gt;Floating – content within the region can shift position from capture to capture, such as text that reflows.&lt;/li&gt;
&lt;li&gt;Strict – content that should stay in the same place and color from screen to screen.&lt;/li&gt;
&lt;li&gt;Content – content that should stay in the same place but may vary in color from screen to screen.&lt;/li&gt;
&lt;li&gt;Layout – content that can change but has a common layout structure from screen to screen.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Regions let you be permissive when you use a restrictive match level. “Ignore” means literally that – ignore the content of the region. There may be times you want to ignore a region entirely. More often, though, you want to ensure that the region boundary and content exist – for this you use “Layout.”&lt;/p&gt;

&lt;p&gt;Regions let you handle exceptions on a more restrictive basis as well. For example, on a page using layout mode, you can create a region and use “Strict” to compare content and color that should be identical – such as a header or menu bar.&lt;/p&gt;
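
&lt;p&gt;Here is a simplified model in plain JavaScript of the most common mix – a strict page comparison with an ignore region carved out (think of a clock or a counter). The “captures” are tiny grids standing in for screenshots; the real engine compares visual elements, not raw pixels:&lt;/p&gt;

```javascript
// Diff two captures cell by cell, but skip any cell that falls inside
// an ignore region.
function diffWithIgnore(baseline, checkpoint, ignoreRegions) {
  const diffs = [];
  baseline.forEach(function (row, y) {
    row.forEach(function (pixel, x) {
      const ignored = ignoreRegions.some(function (r) {
        return [x >= r.x, r.x + r.w > x, y >= r.y, r.y + r.h > y].every(Boolean);
      });
      if (!ignored) {
        if (pixel !== checkpoint[y][x]) diffs.push({ x: x, y: y });
      }
    });
  });
  return diffs;
}

const base = [["a", "b"], ["c", "d"]];
const next = [["a", "X"], ["c", "d"]]; // one cell changed, at (1, 0)
console.log(diffWithIgnore(base, next, []).length);                           // 1 – flagged
console.log(diffWithIgnore(base, next, [{ x: 1, y: 0, w: 1, h: 1 }]).length); // 0 – ignored
```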

&lt;h1&gt;
  
  
  Tester’s Choice
&lt;/h1&gt;

&lt;p&gt;One big point Raja makes is that you get to choose how to deploy Visual AI. Select the mode that matches the page behavior you expect, and then set the appropriate mode for handling exceptions.&lt;/p&gt;

&lt;p&gt;Raja demonstrates how you can choose to define exceptions in the UI or in your test code. You can set the exceptions in the Applitools UI; once you set a region with a specific match level, that region and match level persist through future comparisons. Alternatively, you can add regions to handle exceptions directly in your test code. Those region definitions remain in effect as long as they remain in your code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8j0wntnpb0lwooeil8nz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8j0wntnpb0lwooeil8nz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You don’t need to capture an entire window or page. You can run your eyes.open() and use:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;eyes.checkRegion()&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can choose to capture just individual regions at an appropriate comparison level. This kind of test can be useful during app development when you want to check the behavior of specific elements you are building.&lt;/p&gt;

&lt;p&gt;If you’re really focused on using element-based checks, you can even run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;eyes.checkElement()&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The checkElement instruction uses web element locators to find specific on-page elements.  checkElement lets you use a legacy identifier in a modern functional test approach. In general, though, checkElement adds more complexity compared with visual validation.&lt;/p&gt;

&lt;p&gt;The key understanding is that, for a given capture page, you can define your mode for the full capture and exceptions for specific regions, so that you cover the entire page.&lt;/p&gt;

&lt;h1&gt;
  
  
  Handling Expected Changes
&lt;/h1&gt;

&lt;p&gt;When you make changes, all your captures must be updated. CSS, icons, menus, and other changes can affect multiple pages – or even your entire site. Imagine having to maintain all those changes – page by page. Yikes.&lt;/p&gt;

&lt;p&gt;Fortunately, Applitools makes it easy to accept common changes across multiple pages.&lt;/p&gt;

&lt;p&gt;Whenever you encounter a difference on a single page, you are instructed to accept the change or reject it. If you reject the change – it’s an error and you can flag development. But, if you accept the change, you can also use a feature called automated maintenance to accept the change on all other pages where the change has been discovered.&lt;/p&gt;

&lt;p&gt;Update your corporate logo. Done.&lt;/p&gt;

&lt;p&gt;Install a new menu system. Easy.&lt;/p&gt;

&lt;p&gt;You can use Automated Maintenance to accept changes. You can also use Automated Maintenance to deploy regions across all the pages – such as ignore regions.&lt;/p&gt;
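
&lt;p&gt;The propagation idea behind Automated Maintenance can be modeled in plain JavaScript (a toy model with invented fields – the real feature matches visual diffs, not string labels): accept a change once, and the same acceptance applies wherever the identical change appears:&lt;/p&gt;

```javascript
// Accept one diff, and apply the same accept to every identical diff
// across the test set.
function acceptEverywhere(results, acceptedDiff) {
  results.forEach(function (r) {
    if (r.diff === acceptedDiff) r.status = "accepted";
  });
}

const results = [
  { page: "home",     diff: "new-menu", status: "unverified" },
  { page: "checkout", diff: "new-menu", status: "unverified" },
  { page: "account",  diff: "logo",     status: "unverified" },
];

acceptEverywhere(results, "new-menu"); // accept the menu change once
const accepted = results.filter(function (r) { return r.status === "accepted"; });
console.log(accepted.length); // 2 – both "new-menu" pages; the "logo" diff stays unverified
```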

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsjkr8lvbjbhlk1wolo2t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsjkr8lvbjbhlk1wolo2t.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of course, the more comprehensive your changes, the more challenging it is to use automated maintenance. If you make some significant changes in your layout, expect to create new baselines as well as use automated maintenance.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusions about Visual Validation and Dynamic Data
&lt;/h1&gt;

&lt;p&gt;We all want to build applications that achieve business outcomes. We often build visually interesting pages with changing content designed to keep buyers engaged. But we also know that testing requires repeatability – meaning that dynamic content may be great for business, but testing requires predictable results.&lt;/p&gt;

&lt;p&gt;Dynamic data can limit the benefits of visual validation. You need a way to handle dynamic data in your visual validation solution. Applitools gives you tools to handle dynamic parts of your application. You can handle truly dynamic sections by ignoring regions, treating them as layout regions, or even treating a whole page as a layout and letting sections and content change.&lt;/p&gt;

&lt;p&gt;And, when you make global changes, automated maintenance eases the pain of updating all your baseline images.&lt;/p&gt;

&lt;p&gt;As Raja makes clear, Applitools has thought not just about discovering visual changes, but also about handling unexpected changes that are defects, dynamic data that would produce false-positive defects, and expected global changes affecting multiple pages. All of these features make up key parts of a modern functional testing system.&lt;/p&gt;

</description>
      <category>codenewbie</category>
      <category>testing</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How To Simplify Complex Functional Testing</title>
      <dc:creator>Michael Battat</dc:creator>
      <pubDate>Tue, 03 Mar 2020 18:53:24 +0000</pubDate>
      <link>https://dev.to/michaelvisualai/complex-functional-testing-simplified-154o</link>
      <guid>https://dev.to/michaelvisualai/complex-functional-testing-simplified-154o</guid>
      <description>&lt;p&gt;How does functional testing with visual assertions help simplify test development for complex real-world apps? Like, say, a retail app with inventory, product details, rotating displays, and shopping carts?&lt;/p&gt;

&lt;p&gt;My special blog series discusses &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto"&gt;Modern Functional Testing with Visual AI&lt;/a&gt;, Raja Rao’s course on &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto"&gt;Test Automation University&lt;/a&gt;. I arrived at &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/chapter6.html?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto"&gt;Chapter 6 – E-Commerce Real World Example&lt;/a&gt;. In this review, I hope to give you an overview of Raja’s examples and how they might apply to your test challenges.&lt;/p&gt;

&lt;h1&gt;
  
  
  Real-World Challenges
&lt;/h1&gt;

&lt;p&gt;Raja starts by explaining the challenges in creating sophisticated functional tests for an e-commerce app. His demonstration site lets a shopper:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scroll inventory&lt;/li&gt;
&lt;li&gt;Select items&lt;/li&gt;
&lt;li&gt;Put items in a shopping cart&lt;/li&gt;
&lt;li&gt;Delete items from a cart&lt;/li&gt;
&lt;li&gt;Process transactions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The app includes featured items that can change each time a shopper visits the site.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--01Nz86Wg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fw3hs1fpjlm2tlnzy89h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--01Nz86Wg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fw3hs1fpjlm2tlnzy89h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case, Raja uses an e-commerce app from SAP, with sample data and a sample setup. The SAP application includes all the features of any e-commerce site, and it does have a simple look-and-feel for all that complex functionality. It’s useful for demonstrating the challenges of implementing a complicated test for the app pages and functionality, which I’ll dive into in a moment. But first…&lt;/p&gt;

&lt;p&gt;There may be those among you readers who think,&lt;/p&gt;

&lt;p&gt;“Hey, this isn’t a legitimate real-world test. He’s using a canned application. How does that compare with complex real-world apps we code and test ourselves?”&lt;/p&gt;

&lt;p&gt;I have to admit that I was thinking that for a sec. And then I remembered all the times that people hesitate in applying app upgrades – even when there’s a security issue. Why? Because nobody feels comfortable about how that app will behave in the areas outside the issue that got ‘fixed.’ How do we know how an app we have customized will look and behave after the upgrade? How many of us have felt burned once we discovered that the vendor’s testing likely didn’t include our specific configuration?&lt;/p&gt;

&lt;p&gt;Hold that thought – I’ll get back to it in a sec.&lt;/p&gt;

&lt;h1&gt;
  
  
  Challenges Testing Complex Applications
&lt;/h1&gt;

&lt;p&gt;In the first few chapters, Raja focused on individual technologies that result in complex functional tests. He covered &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/chapter2.html?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto"&gt;testing tables&lt;/a&gt;, &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/chapter3.html?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto"&gt;data-driven testing&lt;/a&gt;, &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/chapter4.html?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=&amp;amp;utm_subgroup=devto"&gt;testing dynamic content&lt;/a&gt;, and &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/chapter5.html?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto"&gt;testing iFrames&lt;/a&gt;. With each of these technologies, a small change on a single input can have a significant change on the output. For example, how do you ensure that the sort function on a table behaves as expected?&lt;/p&gt;

&lt;p&gt;All of these tests share a common test strategy: perform an action, and then assert that the output matches expectations. Legacy functional tests require the test coder to assert each expected element exists in the DOM. Testing gets complicated by many conditions – but let’s boil them down to three:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A small change in an input makes a big change to the output that needs to be checked&lt;/li&gt;
&lt;li&gt;An app gets updated – and locators and formatting change&lt;/li&gt;
&lt;li&gt;Lots of locators exist on the app page&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And, in each of the prior chapters, Raja shows how visual assertions with Visual AI simplify functional testing.&lt;/p&gt;

&lt;h1&gt;
  
  
  Visual vs Coded Assertions
&lt;/h1&gt;

&lt;p&gt;The common issues involve all the coded assertions of outputs. At a root level, do you inspect the entire DOM? Or, do you just inspect the elements you expect to change?&lt;/p&gt;

&lt;p&gt;Some testers get functional test myopia – they focus only on the elements and behavior they expect to change in their tests. These testers think that checking every element on every page after every change seems silly. They make a change and look for expected behavior.&lt;/p&gt;

&lt;p&gt;When you test a table sort or some other activity that changes many elements on a page, you have your hands full just writing the assertions for your expected differences. Everything else should just take care of itself.&lt;/p&gt;

&lt;p&gt;Raja’s point in each of these early chapters is that coded assertions for DOM inspection miss all sorts of behaviors that can vary from browser to browser or app version to app version. Visual assertions with Visual AI let users simplify their functional test code. He shows why, compared with coded DOM checks, visual assertions provide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simpler deployment&lt;/li&gt;
&lt;li&gt;Simpler maintenance&lt;/li&gt;
&lt;li&gt;More robust test infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, in Chapter 6, Raja applies this analysis to testing an e-commerce app.&lt;/p&gt;

&lt;h1&gt;
  
  
  Testing E-Commerce App Elements
&lt;/h1&gt;

&lt;p&gt;Raja begins by pointing out that many of the elements in an e-commerce app behave like elements in his earlier chapters. We find:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A table of catalog elements with regularly placed structural parts, such as product name, description, price, and availability.&lt;/li&gt;
&lt;li&gt;Parts of the app screen that depend on previous activity (e.g. recently viewed products)&lt;/li&gt;
&lt;li&gt;App behaviors best tested as data-driven (add to cart on an in-stock vs. out of stock item)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wrOfm72E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ccoa8jwmwn1ne7skipuu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wrOfm72E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ccoa8jwmwn1ne7skipuu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this example, each of the prior chapters in Raja’s course comes into play.&lt;/p&gt;

&lt;p&gt;For instance, with the catalog, the legacy approach would require the tester to identify each web element locator in the catalog section, and then ensure that the value in the catalog matched the value in the test code.  With the modern approach – take a snapshot. This aligns with chapter 2 of Raja’s course.&lt;/p&gt;

&lt;p&gt;With the items in the catalog, a shopper can inspect the details of an element and then click to add it to his or her cart.  The shopper can inspect both available and unavailable items – but unavailable items generate an error when the button is clicked to add the item to the cart.  Testing this behavior reminds me of chapter 3 in Raja’s course, the chapter about data-driven testing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7ibYLxIv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/l2f7sb13sdc5ns9jbme0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7ibYLxIv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/l2f7sb13sdc5ns9jbme0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Legacy Testing – Assert Element Locators
&lt;/h1&gt;

&lt;p&gt;Raja then walks through the examples and shows the legacy test code. Here is the legacy test code for the catalog test. Note that each output value must be validated.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Basically, for each item in the catalog, check its:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Label&lt;/li&gt;
&lt;li&gt;Price&lt;/li&gt;
&lt;li&gt;Availability&lt;/li&gt;
&lt;li&gt;Color&lt;/li&gt;
&lt;li&gt;Image&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Compare each against its expected value. The code calls out to each web element locator and asserts its value.&lt;/p&gt;

&lt;p&gt;Read this code, and then think about your own tests. Consider the similarities and differences. How much of this code remains valid between app upgrades? How much must you rewrite when development adds a new function – or a new feature? Or, if the locators change?&lt;/p&gt;
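
&lt;p&gt;In case the embedded gist does not render in your reader, here is a hypothetical reconstruction in plain JavaScript of what such legacy assertions look like (illustrative field names, not Raja’s actual code):&lt;/p&gt;

```javascript
// Legacy style: assert each catalog field against an expected value,
// one locator-backed field at a time, and collect any mismatches.
function checkCatalogItem(item, expected) {
  const failures = [];
  ["label", "price", "availability", "color", "image"].forEach(function (field) {
    if (item[field] !== expected[field]) failures.push(field);
  });
  return failures;
}

const expected = { label: "Red Shirt", price: "$25", availability: "In stock",
                   color: "red", image: "shirt.png" };
const rendered = { label: "Red Shirt", price: "$30", availability: "In stock",
                   color: "red", image: "shirt.png" };

console.log(checkCatalogItem(rendered, expected)); // reports the mismatched "price" field
```

&lt;p&gt;Multiply that by every item in the catalog, and by every page, and the maintenance cost of the legacy approach becomes clear.&lt;/p&gt;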

&lt;h1&gt;
  
  
  Applitools Simplifies Functional Testing
&lt;/h1&gt;

&lt;p&gt;If you’re with me so far, you understand Raja’s next point – why not leave the assertions to Visual AI? Delegating the assertions lets you focus on code that exercises the application while Visual AI performs the output comparisons.&lt;/p&gt;

&lt;p&gt;For testing the tables of inventory, Visual AI lets you perform tasks like sorting and filtering and then check the output. For success versus error conditions – like trying to add an in-stock versus an out-of-stock item to the shopping cart – you have the data-driven testing scenario.  As he showed in the prior chapters, and again here, Visual AI simplifies the entire test development process.&lt;/p&gt;

&lt;p&gt;When you add new behaviors, they become new checkpoints that you can choose to include in the future baseline.  And, if you encounter unexpected behaviors, you can reject those changes and send them back to your development team for repair.&lt;/p&gt;

&lt;p&gt;As we have discussed elsewhere, visual AI provides the accuracy of AI as used in self-driving cars and other computer vision technology. Visual AI breaks a sea of pixels into visual elements without relying on the DOM to identify those elements. Once you have established a baseline, every subsequent snapshot of that screen will compare the visual elements on that page versus your checkpoint.&lt;/p&gt;

&lt;p&gt;Instead of relying on a simplistic pixel comparison to determine whether the checkpoint and baseline differ, visual AI checks the boundaries of the element and determines if the element itself differs in color, font, or – in the case of a photograph or other image file – image completeness.&lt;/p&gt;

&lt;p&gt;Rather than depend on DOM differences alone, visual AI compares the rendered page with the previously rendered page.&lt;/p&gt;
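&lt;p&gt;To see why a raw pixel diff is too blunt an instrument, consider this toy comparison, with an “image” reduced to a grid of grayscale values (purely illustrative – the actual Visual AI analysis is far more involved):&lt;/p&gt;

```python
# Toy illustration: a one-pixel rendering shift lights up a naive
# pixel diff even though the visual element itself is unchanged.
def naive_pixel_diff(a, b):
    """Count positions where two 'images' (2-D lists) differ."""
    return sum(
        1
        for row_a, row_b in zip(a, b)
        for pa, pb in zip(row_a, row_b)
        if pa != pb
    )

# A 1x6 'image' containing a dark three-pixel 'element' ...
baseline   = [[0, 9, 9, 9, 0, 0]]
# ... and the same element rendered one pixel to the right.
checkpoint = [[0, 0, 9, 9, 9, 0]]

print(naive_pixel_diff(baseline, checkpoint))  # -> 2 pixels flagged
```

&lt;p&gt;A simplistic diff flags the shifted element as two changed pixels, even though a human – or an element-aware comparison – sees the same element.&lt;/p&gt;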

&lt;h1&gt;
  
  
  Visual AI and Packaged Application Upgrades
&lt;/h1&gt;

&lt;p&gt;As I wrote earlier, how does testing a packaged application compare with testing a custom app?&lt;/p&gt;

&lt;p&gt;As a test engineer, using the legacy approach leaves you at the mercy of the vendor. You can write all the functional tests you want, but the developer can change element locators between releases and leave you with a huge coding task – rewrite all your element locators to initiate behaviors and all your locators to measure responses.&lt;/p&gt;

&lt;p&gt;You likely know the situation I’m describing. I know many companies that use a vendor’s web service for CRM, marketing, or commerce. Once you customize one of these sites for your needs, you worry that a vendor upgrade can break your customized app. The vendor may have tested the generic app, but not the version with your custom CSS, data, and layout.&lt;/p&gt;

&lt;p&gt;In my experience, test engineers loathe having to test packaged applications. Too often, test engineers must react to changes made in an app that the company doesn’t own with tools that make it difficult to expose unexpected changes. Imagine having to use legacy tests to validate that the browser rendering remains unchanged after an upgrade. And, if the browser rendering has changed, to highlight that change and pass the information back to the app owners.&lt;/p&gt;

&lt;p&gt;Visual AI bridges the gap for a packaged app and third-party app owners. If you have made no change and upgrade your app version, the rendering check with Visual AI makes it easy to test whether the pages have changed. If you make a CSS change or other local or global change, Visual AI ensures that the changes match your expectations.&lt;/p&gt;

&lt;h1&gt;
  
  
  What About Dynamic Content?
&lt;/h1&gt;

&lt;p&gt;One of the conditions Raja discusses is the case of dynamic content on a page. You find this content on media sites – with layouts showing the latest stories that update regularly. You find this content on e-commerce sites as well – showing wares of interest to prospective buyers.&lt;/p&gt;

&lt;p&gt;On the app for this chapter, it’s the content that shows featured items. In the two screenshots that follow, the “Deal of the Day” rotates to multiple images, and “Promoted Items” updates each time the page gets refreshed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FeSifZw0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wwib2jae7vwh4yr6p9et.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FeSifZw0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wwib2jae7vwh4yr6p9et.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZJVyRoMs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4m5epgi0gh15orgqp283.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZJVyRoMs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4m5epgi0gh15orgqp283.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This behavior makes sense – a seller wants to show a range of wares to the shopper.&lt;/p&gt;

&lt;p&gt;Neither the legacy approach nor the modern approach can handle dynamic content through automation alone.&lt;/p&gt;

&lt;p&gt;Raja describes how Visual AI allows you to mark this region as an ignore region when comparing the pages during test automation. My takeaway here is that dynamic content requires a different kind of testing – earlier in the test development process.  For the functional behavior test, Applitools Visual AI lets you focus on what matters.&lt;/p&gt;
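&lt;p&gt;Conceptually, an ignore region just excludes a rectangle from the comparison. Here’s a hand-rolled toy sketch of that idea (in practice you configure ignore regions in the Applitools dashboard or SDK rather than writing this yourself):&lt;/p&gt;

```python
def differs_outside_region(a, b, ignore):
    """Compare two 'images' (2-D lists), skipping an ignore rectangle.

    ignore is (top, left, bottom, right) with half-open bounds.
    """
    top, left, bottom, right = ignore
    for y, (row_a, row_b) in enumerate(zip(a, b)):
        for x, (pa, pb) in enumerate(zip(row_a, row_b)):
            if y in range(top, bottom) and x in range(left, right):
                continue  # dynamic content lives here -- skip it
            if pa != pb:
                return True
    return False

# Two renders of a page: only the rightmost column (the rotating
# "Deal of the Day" slot in this toy model) has changed.
baseline   = [[1, 1, 5],
              [1, 1, 5]]
checkpoint = [[1, 1, 8],
              [1, 1, 8]]

# Ignoring the rightmost column, the pages compare as identical.
print(differs_outside_region(baseline, checkpoint, (0, 2, 2, 3)))  # -> False
```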

&lt;h1&gt;
  
  
  Conclusion – Visual AI Simplifies Functional Test
&lt;/h1&gt;

&lt;p&gt;In walking through Chapter 6, you see how the legacy approach leans heavily on web locators that can change from version to version. You spend lots of resources on maintaining your tests – especially when locators change between app versions. The cost of test maintenance inclines most organizations to shy away from app version upgrades and even app behavior changes. Every time you make a change, you incur unexpected costs.&lt;/p&gt;

&lt;p&gt;In contrast, Visual AI simplifies your functional test, while adding visual coverage.  Visual AI compares rendered output against rendered output. If visual differences exist, you can identify those as intended or not. Intended changes become the new baseline.  Unintended changes can go back to the app owners. And, all this is easy to manage and maintain – because you, as the tester, don’t worry about the individual identifiers you need to check along the way.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>tutorial</category>
      <category>codenewbie</category>
    </item>
    <item>
      <title>Test iFrames With Visual AI</title>
      <dc:creator>Michael Battat</dc:creator>
      <pubDate>Mon, 24 Feb 2020 16:37:12 +0000</pubDate>
      <link>https://dev.to/michaelvisualai/testing-iframes-with-visual-ai-jkj</link>
      <guid>https://dev.to/michaelvisualai/testing-iframes-with-visual-ai-jkj</guid>
      <description>&lt;p&gt;As you know, I’m taking &lt;a href="https://www.linkedin.com/in/rajaraodv/"&gt;Raja Rao’s&lt;/a&gt; Test Automation University Course, &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto"&gt;Modern Functional Test Automation through Visual AI&lt;/a&gt;. Today we’ll discuss testing iFrames.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/chapter5.html?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto"&gt;Chapter 5&lt;/a&gt;, Raja refers to iFrames as a necessary evil. Actually, they’re really useful.  You likely know that you can use iFrames to embed lots of functionality into your page. If you don’t know anything about iFrames, learn a little. There’s &lt;a href="https://developer.mozilla.org/en-US/docs/Learn/HTML/Multimedia_and_embedding/Other_embedding_technologies"&gt;this page from the Mozilla Developer’s Network&lt;/a&gt;. Or, you can look into a company like Elfsight that provides &lt;a href="https://elfsight.com/iframe-widgets/"&gt;iFrame widgets&lt;/a&gt; you can install to add interactivity to a web page.  In some cases, you might use iFrames for &lt;a href="https://www.bounteous.com/insights/2015/10/21/google-analytics-iframes-form-submissions/"&gt;tracking and analytics&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In fact, tracking and analytics account for many of the reasons you might install iFrames. While they can look seamless in your app, they can call external functions – including third-party tracking tools. But you can embed almost any functionality using iFrames.&lt;/p&gt;

&lt;p&gt;Like, say, a Google Map.&lt;/p&gt;

&lt;h1&gt;
  
  
  A Quick Google Map Example
&lt;/h1&gt;

&lt;p&gt;The code to embed a Google Map is an iFrame:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ukHsswDe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nlk3v7r8g7u8op8eek1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ukHsswDe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nlk3v7r8g7u8op8eek1x.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yup, this code will embed a Google Map of Alice Springs, Northern Territory, Australia. Someplace I’ve always wanted to visit. I haven’t made it, yet.&lt;/p&gt;

&lt;p&gt;You can use the tracking pieces to know where your customers go on your web app. In fact, some of these tracking iFrames are zero-pixel iFrames, meaning they are invisible on your page. Your end-user and integration testing can’t detect them – they exist in your DOM, but end users never see them.&lt;/p&gt;

&lt;p&gt;What matters, then, are the pages that include iFrames for extended functionality.  They are embedded in your web app – sometimes nested. How do you test an iFrame?&lt;/p&gt;

&lt;h1&gt;
  
  
  Possible iFrame Errors
&lt;/h1&gt;

&lt;p&gt;iFrames can be built to the size of the user’s screen, or any subset of your app page. As I mentioned, some are one or zero pixels high. The challenge comes with interactive iFrames.&lt;/p&gt;

&lt;p&gt;When you test iFrames designed for user interaction, each iFrame can behave as if it is part of your app. These iFrames have their own contents, buttons, and scrolling regions – their own behaviors that need to be tested.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hGVssUf2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/iinangorgq1q91pg4o7n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hGVssUf2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/iinangorgq1q91pg4o7n.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will want to test the data and functionality in your iFrames. You will want to ensure that buttons behave as expected, scrolling regions scroll, and expected information gets presented to the user. These behaviors get complicated by how you define your iFrame and its boundaries. For some designs, the behavior works fine on a mobile device but gets lost on a desktop or laptop screen. In other designs, the mobile device behavior might not be acceptable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zLpKf-IB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/c76d5jy1l3t6ky1sg8ud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zLpKf-IB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/c76d5jy1l3t6ky1sg8ud.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, Raja points out design considerations that your developers may not have considered when designing your app. The big problems occur when testing iFrames sized smaller than their contents – which results in scroll bars on the iFrame to access all the content. Or not – some iFrame implementations are fixed, with no scroll bars. How do you test the iFrame?&lt;/p&gt;

&lt;h1&gt;
  
  
  Testing iFrames with Selenium
&lt;/h1&gt;

&lt;p&gt;Raja imagines a test of a table in a nested iFrame. As the page is resized, the top-level iFrame resizes, and the content in the nested iFrame gets hidden. Depending on the design approach, this hidden data may not become visible unless the user scrolls the data manually. And, in some cases, there may not be scroll bars – meaning a user must manually resize the page to see the data.&lt;/p&gt;

&lt;p&gt;Raja then walks through the legacy approach to testing iFrames – using Selenium to apply a test behavior and then measure the response of the page. He exposes several complications.&lt;/p&gt;

&lt;h1&gt;
  
  
  Nested iFrame Navigation
&lt;/h1&gt;

&lt;p&gt;The first is the challenge of navigating iFrames – especially if you have nested them. Each time you navigate in an iFrame, you must change the context of the iFrame you’re inspecting. You have to remember your context. And, you have to extract your context to go back to the page outside the iFrame.&lt;/p&gt;

&lt;p&gt;In the code that follows, Raja shows the redundant code you need to add to navigate the different nested iFrames. First, you go to the main page context. Next, you go to the top iFrame. Finally, you select one of the four embedded iFrames. Now, you can begin your tests.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Here is a highlight of the code used to change context:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bWG8hDDq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6z7az7t2bn7u48yuagr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bWG8hDDq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6z7az7t2bn7u48yuagr4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a lot of redundant code needed to navigate around to each iFrame.&lt;/p&gt;
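&lt;p&gt;The bookkeeping looks roughly like this in Selenium-flavored Python. The driver below is a stub that just records context switches so the sketch stays runnable, and the frame names are made up:&lt;/p&gt;

```python
# Stub standing in for Selenium's driver.switch_to, to show the
# bookkeeping: every action in a nested iFrame repeats the same dance.
class StubSwitchTo:
    def __init__(self, log):
        self._log = log
    def default_content(self):
        self._log.append("default_content")
    def frame(self, name):
        self._log.append(f"frame:{name}")

class StubDriver:
    def __init__(self):
        self.log = []
        self.switch_to = StubSwitchTo(self.log)

def act_in_nested_frame(driver, inner):
    # 1. Back out to the main page context.
    driver.switch_to.default_content()
    # 2. Enter the top-level iFrame (name is hypothetical).
    driver.switch_to.frame("top-frame")
    # 3. Enter the nested iFrame we actually want to test.
    driver.switch_to.frame(inner)
    # ... perform actions and assertions here ...

driver = StubDriver()
for inner in ["frame-1", "frame-2", "frame-3", "frame-4"]:
    act_in_nested_frame(driver, inner)

print(len(driver.log))  # three context switches per frame -> 12
```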

&lt;h1&gt;
  
  
  Hidden Content
&lt;/h1&gt;

&lt;p&gt;Another complication involves the hidden parts of frames. When pages resize and frame content gets hidden, the user may not know the content exists – or what its values are. Hidden content can create problems in testing, because the test may not uncover the error.&lt;/p&gt;

&lt;p&gt;And, thus, the issue of scroll bars. If a scroll bar does not exist, can the user see data that does not show up on the page? How does a tester use an automated tool to expose this behavior?&lt;/p&gt;

&lt;h1&gt;
  
  
  Selenium Testing iFrame Application Versions
&lt;/h1&gt;

&lt;p&gt;Selenium can accomplish iFrame resizing and scrolling, but the effort depends very much on the test engineer. There are also dependencies on the version of the application itself. And traditional functional test approaches are limited by their validation and assertion technologies.&lt;/p&gt;

&lt;p&gt;For example, a tester may get a version of the app with scroll bars in a nested iFrame. The engineer might choose not to exercise the scroll bars, and leave that task to the user later. Instead of testing the existence of scroll bars, the tester might simply use text locators to validate that the content exists in the DOM.&lt;/p&gt;

&lt;p&gt;If a later version of the app leaves out the scroll bars, then the locators still find the appropriate content, but the user has a much more difficult time accessing the content.  Depending on how you structure your Selenium tests, you might miss this change and pass the test. In fact, all functional testing that uses DOM checking will generally miss the user problem this type of issue poses.&lt;/p&gt;

&lt;p&gt;Also, things like nested iFrames cause context problems for both applying an action and asserting an outcome.  Each time a sub iFrame requires an action, Selenium must first switch context to the main iFrame, then the sub iFrame in question.&lt;/p&gt;

&lt;p&gt;Raja describes these types of issues with iFrames as generally problematic for legacy functional tests.&lt;/p&gt;

&lt;h1&gt;
  
  
  Testing iFrames with Visual AI
&lt;/h1&gt;

&lt;p&gt;Because Visual AI uses image comparison to determine the rendered output, the existence of scroll bars does not require any kind of action or assertion.  Instead, if the scroll bars are otherwise invisible, selecting the iFrame should highlight the scroll bars.&lt;/p&gt;

&lt;p&gt;In fact, much of iFrame testing with visual AI simply requires taking an action, and then a snapshot of the page. Scroll bars, iFrame contents, and iFrame formatting all can be checked by taking an action along with a snapshot.&lt;/p&gt;

&lt;p&gt;Here is a picture of the code from the course.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CLRSf-eg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/iz4guodnxuqz2dq8zh3h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CLRSf-eg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/iz4guodnxuqz2dq8zh3h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Complicated assertions become simple implementations of:&lt;/p&gt;

&lt;p&gt;eyes.checkWindow("iFrame");&lt;/p&gt;

&lt;p&gt;Subsequent tests of iFrames can compare visual behavior of the page with its predecessors. Visual AI captures missing scroll bars and other changes in iFrame contents, format, and behavior. You can approve intended behavior changes with a “Thumbs-up”, and you can call out unintended behavior changes (like missing scroll bars and hidden data) with a “Thumbs-down” and a snapshot of both the behavior and the related code.&lt;/p&gt;

&lt;p&gt;Because visual assertions replace calls to locators, test engineers have an easier time writing and maintaining test code.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;This chapter makes the point that, once again, tasks that can make the life of a test engineer exceedingly difficult with legacy functional tests turn into simple problems for Visual AI.  In this case, you don’t have to instrument every iFrame behavior. Simply use Visual AI and let the visual differences guide your testing.&lt;/p&gt;

&lt;p&gt;In all honesty, your mileage may vary with this chapter. I know that some applications make heavy use of iFrames for user interaction, while others only depend on iFrames for user tracking. So, depending on the technology you deploy in your applications, you may or may not benefit from this particular value of Visual AI. But, the point Raja makes is that this is yet another task made easier with Visual AI.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How Do You Test Dynamic Content?</title>
      <dc:creator>Michael Battat</dc:creator>
      <pubDate>Wed, 12 Feb 2020 17:52:41 +0000</pubDate>
      <link>https://dev.to/michaelvisualai/how-do-you-test-dynamic-content-2ea</link>
      <guid>https://dev.to/michaelvisualai/how-do-you-test-dynamic-content-2ea</guid>
      <description>&lt;p&gt;Imagine this. You built a page with CanvasJS, and you want to test the graphs. How do you create an automated test for the graphical representations? It’s testing dynamic content, after all.&lt;/p&gt;

&lt;p&gt;This question haunts most test developers. In lots of cases, companies do lots of manual tests on the first release to make sure everything works. After that, it’s a lot of manual spot testing without automation. Because, it’s testing dynamic content, after all.&lt;/p&gt;

&lt;p&gt;In reality, there are three approaches.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always do manual testing – that’s the only way to validate behavior.&lt;/li&gt;
&lt;li&gt;Do spot testing – trading off coverage for the cost.&lt;/li&gt;
&lt;li&gt;Shy away from testing and hope things work.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the radical end of handling dynamic graphical content, some organizations decide that the job doesn’t belong to the internal web development team. These organizations farm out the entire visualization process to a third-party graphing package. For example, the Federal Trade Commission concluded that their data is best visualized using an external solution focused on data graphics and analytics. As a result, they use the services of Tableau Software to create a visual representation of the FTC data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx8nbc7t4fmmuz3bmcfaj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx8nbc7t4fmmuz3bmcfaj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But, if you’re building an app for which farming out data representation might expose customer or client data, you cannot give the data to a third party and hope for the best. You have to do the visualization and own the testing of the app.&lt;/p&gt;

&lt;h1&gt;
  
  
  Dynamic Content Tests With Legacy Tools
&lt;/h1&gt;

&lt;p&gt;In &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/chapter4.html?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=blog&amp;amp;utm_campaign=&amp;amp;utm_subgroup=devto" rel="noopener noreferrer"&gt;Chapter 4 of Raja Rao’s&lt;/a&gt; course, &lt;a href="https://testautomationu.applitools.com/modern-functional-testing/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto" rel="noopener noreferrer"&gt;Modern Functional Test Automation through Visual AI&lt;/a&gt; on &lt;a href="https://testautomationu.applitools.com/?utm_term=cat&amp;amp;utm_source=syndication&amp;amp;utm_medium=&amp;amp;utm_content=tau&amp;amp;utm_campaign=test-automation-university&amp;amp;utm_subgroup=devto" rel="noopener noreferrer"&gt;Test Automation University&lt;/a&gt;, Raja walks through an example graphing app built with Canvas and asks:&lt;/p&gt;

&lt;p&gt;“How would you test this page with dynamic content?”&lt;/p&gt;

&lt;p&gt;He takes a bar chart example in an app using CanvasJS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzb62ejwp006yexrlk2am.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzb62ejwp006yexrlk2am.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, Raja shows what happens when he adds a dataset to the bar chart:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe204hvm3542jxoa6n3xd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe204hvm3542jxoa6n3xd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What makes this problem notoriously difficult to test is the visual nature of the behavior and the lack of handles in the DOM that correlate to it. In fact, there are no links.&lt;/p&gt;

&lt;p&gt;Opening up the Inspector for this page shows a canvas link:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl69mpciu5tixsf563seo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl69mpciu5tixsf563seo.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, all it shows is the size of the Canvas render – not the internal content. How the heck do you test this dynamic content?&lt;/p&gt;

&lt;p&gt;With no DOM hooks, it’s impossible to know that the code above behaves as expected.&lt;/p&gt;

&lt;p&gt;How would you handle this kind of test? When we ask, we find that most people either test on occasion or not at all. After all, if you’re using a third-party package, like CanvasJS, why not just trust it and go?&lt;/p&gt;

&lt;h1&gt;
  
  
  Testing Charts with Visual AI
&lt;/h1&gt;

&lt;p&gt;As Raja points out, with Visual AI, you don’t need hooks in the DOM to capture app behavior. All you need to do is trigger the behavior, then capture the results visually.&lt;/p&gt;

&lt;p&gt;Here is the test code he uses to manipulate the test chart:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Hopefully, each step in the code reads clearly for you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the app&lt;/li&gt;
&lt;li&gt;Capture the screen&lt;/li&gt;
&lt;li&gt;Click the add dataset button&lt;/li&gt;
&lt;li&gt;Wait to make sure the chart renders&lt;/li&gt;
&lt;li&gt;Capture the screen&lt;/li&gt;
&lt;/ul&gt;
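&lt;p&gt;The five steps above can be sketched with stubs (the names here are invented; the real code drives Selenium and the Applitools Eyes SDK):&lt;/p&gt;

```python
# Stub runner showing the flow above: open, capture, act, capture.
class StubEyes:
    def __init__(self):
        self.snapshots = []
    def check_window(self, tag):
        self.snapshots.append(tag)

class StubApp:
    def __init__(self):
        self.datasets = 1
    def click_add_dataset(self):
        self.datasets += 1

eyes, app = StubEyes(), StubApp()
eyes.check_window("before add")   # capture the initial chart
app.click_add_dataset()           # trigger the dynamic behavior
eyes.check_window("after add")    # capture the result for comparison

print(eyes.snapshots)  # -> ['before add', 'after add']
```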

&lt;p&gt;It seemed pretty straightforward to me when I went through it.&lt;/p&gt;

&lt;p&gt;When you run the code, Applitools captures the tests separately as part of the same batch test run:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3davfzx4efwr1z9mhaht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3davfzx4efwr1z9mhaht.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you start running Applitools on these tests, the first runs get stored as the baseline expected images. You can continue to execute these tests on subsequent builds and have Applitools compare the new checkpoint against the baseline. Applitools will highlight any visual differences.&lt;/p&gt;
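&lt;p&gt;The baseline/checkpoint lifecycle can be sketched in a few lines – hand-rolled here only to show the idea, since Applitools manages all of this for you:&lt;/p&gt;

```python
# Minimal sketch of baseline-vs-checkpoint bookkeeping: the first snapshot
# for a test name becomes the baseline; later snapshots are compared to it.
baselines = {}

def check(test_name, snapshot):
    if test_name not in baselines:
        baselines[test_name] = snapshot   # first run: save as baseline
        return "new baseline"
    if snapshot == baselines[test_name]:
        return "pass"
    return "diff"                          # surface for human review

print(check("bar-chart", "render-A"))  # -> new baseline
print(check("bar-chart", "render-A"))  # -> pass
print(check("bar-chart", "render-B"))  # -> diff
```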

&lt;h1&gt;
  
  
  Why Does Testing Dynamic Content Matter?
&lt;/h1&gt;

&lt;p&gt;In the past, I have been responsible for apps that display lots of data – like the central controller for a bunch of networking equipment. Lots of data and visualization. Each time we thought about improving the visualization, it was a huge headache. Testing alone would swallow up the QA team in apoplectic fits.&lt;/p&gt;

&lt;p&gt;In looking at the world of visualization, there are network operations centers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjrdra1lb4yqcujwo3nkc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjrdra1lb4yqcujwo3nkc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are financial applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4u0sf6g9ehue8jc8g3gy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4u0sf6g9ehue8jc8g3gy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There’s even weather.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvvqx8qv03we6fkgmv50o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvvqx8qv03we6fkgmv50o.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whether you’re handling inventory, forecasting the future, scheduling appointments or doing any number of things with your applications, your customers likely will benefit from data visualizations. Why let the question of test automation limit your decision of whether or not to deploy a great visualization?&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Everyone who deals with data needs to represent that data as more than a bunch of numbers. If you find yourself doing visual representations, you have a choice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code, test, and pray&lt;/li&gt;
&lt;li&gt;Code, test, and spot check&lt;/li&gt;
&lt;li&gt;Test visually and automate tests of dynamic content.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that you have a way to test this dynamic content, what’s stopping you?&lt;/p&gt;

</description>
      <category>testing</category>
      <category>tutorial</category>
      <category>beginners</category>
      <category>codenewbie</category>
    </item>
  </channel>
</rss>
