Jonas Menesklou for AskUI

AskUI Best Practices

Let’s be honest: Testing is a lot of fun. Not only is it extremely important for the success of your team, but it is also highly satisfying when you’ve finally found and tracked a pesky bug all the way to its original source.

However, setting up your first test case in a new environment can be tiresome, especially when every testable element has to be individually defined first. Before you know it, you are knee-deep in someone else's messy code, trying to put IDs in the right places so that you can run a simple workflow that would take a manual tester five seconds to complete. This approach seems a little impractical, doesn’t it?

The New Way: Selector-Free Test Automation

If you feel like current testing tools are holding back your productivity, don’t worry: you are not alone. At askui, we firmly challenge the idea of selector-based testing. We believe that humans are still the best testers, for two simple reasons:

  1. They are quick
  2. They are resistant to changes in the codebase (lol, obviously)

In order to automate this boring, repetitive work, future testing methods will need to be even better than humans. That’s why our mission is to provide human-level test quality at scale and at a fraction of the cost.

askui uses artificial intelligence to detect on-screen elements such as buttons and text fields automatically, allowing you to skip the usual lengthy setup process so that you can start writing test cases right away.

How Does This Work?

The cool thing about askui is that you can approach writing test automation scripts as if you were manually performing the actions yourself. As a result, you get a more intuitive workflow that can save you a lot of time.
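
For instance, a single instruction reads almost like the sentence you would use to describe the action to a colleague. The snippet below is only a rough sketch (the "Search" button is a made-up example), but it shows the general shape of an askui instruction:

    // Rough sketch of a single askui instruction (the "Search" button is hypothetical):
    // describe the element the way you see it on screen instead of using a selector.
    await aui
      .click()
      .button()
      .withText("Search")
      .exec();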

Before you continue: If you haven’t yet downloaded askui, follow this guide on installing the app on your machine.

Setting Up a Test Suite

💭 Let’s say we’re on google.com and we want to search for an image of a cat and then download it to our computer.

First, we break this task down into steps that a user would take. Then we can recreate those steps in code.

  1. go to google images
  2. type “cat” in the search bar
  3. select image
  4. right-click + save the image

Then we begin building our test suite by creating two separate test blocks:

  1. The first one is used to get an annotated screenshot, where all of the on-screen elements are enclosed within annotated bounding boxes. This will help us select the correct elements in our test case.

    it('annotate', async () => {
        await aui.annotate()
      });
    
  2. The second test block contains our actual test case.

    xit('should click download cat image', async () => {
        // test steps will be added in the next section
        await aui;
      });
    

You’ll notice that the it function is “x’ed out”, which means that the test block will be ignored when we run the script. This is fine for now, because it does not contain anything yet.

Next, you’ll run the script to create an annotated screenshot. If you’re using VS Code, it will appear in your file navigation bar on the left.

📋 The annotations are basically the substitute for IDs in selector-based testing.
You can click on them to copy them to your clipboard.
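
For example, if you copy the annotation around Google's "Images" link, it can be dropped straight into a withText() filter. The snippet below is a rough sketch; the exact text depends on what the AI detected on your screen:

    // Sketch: use the text copied from an annotation as a withText() filter
    await aui
      .click()
      .text()
      .withText("Images")
      .exec();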

Writing and Debugging a Test Case

Now we can start writing our test case by locating the elements and then executing an action on them. Remember the steps that we wanted to recreate?

  1. go to google images
  2. type “cat” in the search bar
  3. select image
  4. right-click + save the image

In the end, your test case could look something like this:

import { aui } from './helper/jest.setup';

describe('jest with askui', () => {
  it('should click download cat image', async () => {
    // 1. Go to Google Images
    await aui
      .click()
      .text().withText("Images")
      .rightOf()
      .text().withText("Gmail")
      .exec();
    // 2. Type "cat" into the search bar and submit
    await aui
      .typeIn("cat")
      .textfield()
      .below()
      .logo().withText("G00g.e")
      .exec();
    await aui
      .pressKey('enter')
      .exec();
    // 3. Select the image by moving the mouse onto it
    await aui
      .moveMouseTo()
      .image()
      .above()
      .text()
      .withText("pet guru Yuki Hattori explaiinICats")
      .exec();
    // 4. Right-click and save the image
    await aui
      .mouseRightClick()
      .exec();
    await aui
      .click()
      .text()
      .withText("save image as")
      .exec();
    await aui
      .click()
      .button()
      .withText("Save")
      .exec();
  });
  xit('annotate', async () => {
    await aui.annotate();
  });
});
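
The aui client used above comes from the helper file that the askui installer scaffolds for you. For orientation, that setup file looks roughly like the sketch below; your generated version may differ in details such as timeouts or credentials:

    // helper/jest.setup.ts: rough sketch of the scaffolded setup (details may vary)
    import { UiControlClient } from 'askui';

    let aui: UiControlClient;

    jest.setTimeout(60 * 1000 * 5);

    beforeAll(async () => {
      // build the client and connect it to the UI controller before the tests run
      aui = await UiControlClient.build();
      await aui.connect();
    });

    afterAll(async () => {
      await aui.close();
    });

    export { aui };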

Debugging

It’s possible that you’ll run into problems with the locator functions. For example, when creating this tutorial, we first tried to locate the image nearest to the image title, like this:

await aui
  .moveMouseTo()
  .image()
  .nearestTo()
  .text()
  .withText("pet guru Yuki Hattori explaiinICats")
  .exec();

But it turns out that the AI uses a different metric for measuring the distance between elements, which is why our script failed on the first run. We then replaced nearestTo() with above(), which fixed the problem for us.

If you have a similar issue, try playing around with the locator functions and see if you can tackle the problem from a different angle.
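
As a rough illustration, the relational selectors can often be swapped for one another. The sketch below (with a placeholder caption text) shows a few alternatives you could try if one relation does not find the element you expect:

    // A few interchangeable relations to experiment with (placeholder caption text):
    await aui.moveMouseTo().image().nearestTo().text().withText("cat pictures").exec();
    await aui.moveMouseTo().image().above().text().withText("cat pictures").exec();
    await aui.moveMouseTo().image().below().text().withText("cat pictures").exec();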

If you have a recurring or persistent issue, don’t hesitate to ask the community for help. You can be sure that your questions will be answered there. We’re excited to hear about how you apply askui in your projects.

If you have any feature requests, please feel free to post them on our Featurebase board.

Best regards and happy testing!
