<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mayankbhola1</title>
    <description>The latest articles on DEV Community by Mayankbhola1 (@mayankbhola1).</description>
    <link>https://dev.to/mayankbhola1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F938374%2F23eaf2c5-533e-4749-8445-67174fdf56f5.jpeg</url>
      <title>DEV Community: Mayankbhola1</title>
      <link>https://dev.to/mayankbhola1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mayankbhola1"/>
    <language>en</language>
    <item>
      <title>Using ChatGPT for Test Automation</title>
      <dc:creator>Mayankbhola1</dc:creator>
      <pubDate>Wed, 29 Mar 2023 08:05:16 +0000</pubDate>
      <link>https://dev.to/mayankbhola1/using-chatgpt-for-test-automation-5h5k</link>
      <guid>https://dev.to/mayankbhola1/using-chatgpt-for-test-automation-5h5k</guid>
      <description>&lt;p&gt;ChatGPT broke all Internet records by going viral in the first week of its launch. A million users in 5 days are unprecedented. A conversational AI that can answer natural language-based questions and create poems, write movie scripts, write social media posts, write descriptive essays, and do tons of amazing things. Our first thought when we got access to the platform was how to use this amazing platform to make the lives of web and mobile app testers easier. And most importantly, how we can use ChatGPT for &lt;a href="https://www.lambdatest.com/automation-testing?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=webpage"&gt;automated testing&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The first prominent use case was test case generation: stuff like writing test cases for testing Google, or a list of test cases for a login form. But one of the most important features of ChatGPT is that it can also write code, in multiple programming languages, leveraging the most popular &lt;a href="https://www.lambdatest.com/blog/best-test-automation-frameworks-2021/?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=blog"&gt;test automation frameworks&lt;/a&gt; across multiple technologies.&lt;/p&gt;

&lt;p&gt;We immediately explored how we could leverage ChatGPT for automation testing and to what extent. We started by generating simple test cases and then used ChatGPT to create a complex test automation pipeline with CI/CD steps and bash code to test an application with multiple microservices.&lt;/p&gt;

&lt;p&gt;And for all that, ChatGPT gave us well-documented code with simple-to-understand instructions on how to use the code.&lt;/p&gt;

&lt;p&gt;This post will showcase our step-by-step journey of creating an automation testing pipeline using ChatGPT. But first, for the uninitiated, let’s look at ChatGPT, a chatbot built on top of GPT-3.5.&lt;/p&gt;

&lt;p&gt;Also, before we move further, it’s important to highlight that the code generated below is not perfect. It is pretty close to runnable in most cases; however, there are still issues, ranging from syntax errors to missing crucial steps caused by context gaps. But experienced developers get enough boilerplate that it becomes easy to tweak, debug, and run the code independently.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is ChatGPT?
&lt;/h3&gt;

&lt;p&gt;On 30th November 2022, OpenAI released a conversational language-processing chatbot built on its latest Generative Pre-trained Transformer (GPT) language model. The chatbot, named ChatGPT and built on top of GPT-3.5, can create coherent and well-articulated responses to queries across a wide range of topics.&lt;/p&gt;

&lt;p&gt;GPT-3.5, the language model working behind the scenes in ChatGPT, is an auto-regressive language model that predicts the next words in a sequence of words based on the context of the words that come before it. GPT uses cutting-edge natural language processing and deep learning technologies to create such high-quality language outputs that it blurs the line between machine and human responses.&lt;/p&gt;
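&lt;p&gt;To make the auto-regressive idea concrete, here is a toy Java sketch of ours (not ChatGPT output) that predicts the next word purely from bigram counts over a tiny corpus. Real models like GPT-3.5 learn a neural network over tokens rather than counting words, so treat this only as an illustration of “predict the next word from what came before.”&lt;/p&gt;

```java
import java.util.*;

public class NextWordToy {
    // Toy auto-regressive predictor: remember how often each word follows
    // another, then predict the most frequent follower.
    static final Map<String, Map<String, Integer>> counts = new HashMap<>();

    static void train(String corpus) {
        String[] words = corpus.toLowerCase().split("\\s+");
        for (int i = 0; i + 1 < words.length; i++) {
            counts.computeIfAbsent(words[i], k -> new HashMap<>())
                  .merge(words[i + 1], 1, Integer::sum);
        }
    }

    static String predict(String word) {
        Map<String, Integer> next = counts.get(word.toLowerCase());
        if (next == null) return "?";
        return Collections.max(next.entrySet(), Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        train("run the test run the build run the test");
        System.out.println(predict("run")); // prints "the"
        System.out.println(predict("the")); // prints "test"
    }
}
```

GPT repeats this step token by token, feeding each prediction back in as fresh context, which is what makes its answers coherent over long passages.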

&lt;p&gt;Learn how ChatGPT, which is built on GPT-3.5, can be used for Test Automation by watching this video.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/YQbbHAdFBAo"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;This tutorial dives deep into&lt;/em&gt;&lt;/strong&gt; &lt;a href="https://www.lambdatest.com/learning-hub/web-testing/?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=learning_hub"&gt;&lt;strong&gt;&lt;em&gt;web testing&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;&lt;em&gt;to help you understand its life cycle, elements, angles, the role of automation, and more.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Starting with a simple Automation Test Script using ChatGPT
&lt;/h3&gt;

&lt;p&gt;We started with a very simple prompt. Let’s create a simple script using &lt;a href="https://www.lambdatest.com/blog/selenium-with-java/?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=blog"&gt;Selenium with Java&lt;/a&gt; that can open &lt;a href="http://Google.com"&gt;Google.com&lt;/a&gt; and print its title. And most importantly, the code should be able to run on the LambdaTest &lt;a href="https://www.lambdatest.com/selenium-automation?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=webpage"&gt;cloud Selenium Grid&lt;/a&gt;. So we opened up the platform and typed in our first question.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Write a Selenium and Java code to open&lt;/strong&gt; &lt;a href="http://google.com"&gt;&lt;strong&gt;google.com&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;and print its title. The code should run on LambdaTest&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT:&lt;/strong&gt; To write a Selenium test in Java to open &lt;a href="http://Google.com"&gt;Google.com&lt;/a&gt; and print its title, which will run on LambdaTest, you can use the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

public class Test {
  public static void main(String[] args) throws Exception {
    // Set your LambdaTest username and access key
    String username = "YOUR_USERNAME";
    String accessKey = "YOUR_ACCESS_KEY";

    // Set the desired browser and operating system
    DesiredCapabilities caps = new DesiredCapabilities();
    caps.setCapability("browserName", "chrome");
    caps.setCapability("version", "80.0");
    caps.setCapability("platform", "WIN10");

    // Create a webdriver instance using the LambdaTest grid
    WebDriver driver = new RemoteWebDriver(
      new java.net.URL("https://" + username + ":" + accessKey + "@hub.lambdatest.com/wd/hub"),
      caps
    );

    // Open Google.com
    driver.get("https://www.google.com");

    // Get the title of the page
    String title = driver.getTitle();

    // Print the title
    System.out.println(title);

    // Close the webdriver instance
    driver.quit();
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This test will open &lt;a href="http://Google.com"&gt;Google.com&lt;/a&gt; in a Chrome browser on a Windows 10 operating system, get the title of the page, and print it to the console. To run the test on LambdaTest, ensure you have the Selenium Java library installed, and then compile and run the code using a Java compiler such as javac and java. The test will be executed on the LambdaTest grid, and the results will be displayed in your &lt;a href="https://accounts.lambdatest.com/dashboard?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=webpage"&gt;LambdaTest Dashboard&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The above response shows that the platform automatically added complex LambdaTest specifics like the hub URL, the desired capabilities, and placeholders for the access key and username, all of which are required to run the test on LambdaTest.&lt;/p&gt;
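&lt;p&gt;One tweak worth making before running the generated script: read the credentials from environment variables instead of hardcoding them. The helper below is our own sketch, not part of ChatGPT’s output, and the LT_USERNAME/LT_ACCESS_KEY variable names are our convention rather than anything LambdaTest mandates.&lt;/p&gt;

```java
public class HubUrl {
    // Build the LambdaTest hub URL without hardcoding credentials into
    // source control.
    static String hubUrl(String username, String accessKey) {
        return "https://" + username + ":" + accessKey + "@hub.lambdatest.com/wd/hub";
    }

    public static void main(String[] args) {
        // LT_USERNAME / LT_ACCESS_KEY are illustrative names; fall back to
        // the same placeholders the generated code used when unset.
        String user = System.getenv().getOrDefault("LT_USERNAME", "YOUR_USERNAME");
        String key = System.getenv().getOrDefault("LT_ACCESS_KEY", "YOUR_ACCESS_KEY");
        System.out.println(hubUrl(user, key));
    }
}
```

The resulting string can then be handed to the URL constructor in the generated test unchanged.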

&lt;h3&gt;
  
  
  Let’s up the stakes: ChatGPT for a complex Automation Test Case
&lt;/h3&gt;

&lt;p&gt;Now we know that writing simple automation scripts is a cakewalk for ChatGPT. However, simplicity doesn’t always cut it. When we’re performing &lt;a href="https://www.lambdatest.com/learning-hub/web-automation?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=learning_hub"&gt;web automation&lt;/a&gt;, we often face complex automation testing scenarios, be it around testing APIs or &lt;a href="https://www.lambdatest.com/blog/how-to-test-a-microservice-architecture-application/?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=blog"&gt;testing microservices&lt;/a&gt;. Can ChatGPT, a fine-tuned version of the GPT-3.5 series, make life easier if we hand over a complex automation test case? Let’s find out!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Assume that I have a&lt;/strong&gt; &lt;a href="https://www.lambdatest.com/selenium-automation?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=webpage"&gt;&lt;strong&gt;Selenium testing&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;platform that allows users to point their Selenium tests on&lt;/strong&gt; &lt;a href="http://hub.lambdatest.com"&gt;&lt;strong&gt;hub.lambdatest.com&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt;. This platform consists of 4 microservices responsible for the following details.&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The first microservice accepts Selenium tests and calls the second microservice to create a test entity in the database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The first service then connects with the third service, which provides it with a Linux docker container where Selenium jars and web browsers are present.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Upon receiving the details for the docker container, the first service proxies the HTTP requests to the docker container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The fourth service collects &lt;a href="https://www.lambdatest.com/learning-hub/test-execution?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=learning_hub"&gt;test execution&lt;/a&gt; logs from the docker container and exposes APIs to return different logs for every Selenium test.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let the first service be called hub_service, the second test_management_service, the third linux_container_service, and the fourth log_management_service. Write a Cucumber definition to test this platform.&lt;/p&gt;
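&lt;p&gt;To keep the four services straight while reading the scenarios below, here is a small plain-Java sketch of the request flow we just described. The service names come from the scenario above; everything else (the trace strings, the container id) is invented purely for illustration.&lt;/p&gt;

```java
import java.util.*;

public class PlatformFlowSketch {
    // Walk a Selenium test through the four services in order and record
    // what each one does.
    static List<String> runTest(String test) {
        List<String> trace = new ArrayList<>();
        // 1. hub_service accepts the test and asks test_management_service
        //    to create a test entity in the database.
        trace.add("test_management_service: created entity for " + test);
        // 2. hub_service asks linux_container_service for a docker container
        //    with Selenium jars and browsers.
        String container = "container-42";
        trace.add("linux_container_service: allocated " + container);
        // 3. hub_service proxies the HTTP requests to that container.
        trace.add("hub_service: proxying " + test + " to " + container);
        // 4. log_management_service collects the execution logs.
        trace.add("log_management_service: logs ready for " + test);
        return trace;
    }

    public static void main(String[] args) {
        runTest("login-test").forEach(System.out::println);
    }
}
```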

&lt;p&gt;So we asked this test scenario as a question to ChatGPT and this is what we got:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Feature: Selenium testing platform
  As a user of the platform
  I want to be able to point my Selenium tests to the platform
  So that I can run my tests on a variety of different environments and configurations

  Scenario: Successfully run a Selenium test
    Given I have a Selenium test ready to run
    When I send the test to the hub_service
    And the hub_service creates a test entity in the database using the test_management_service
    And the hub_service connects to the linux_container_service to get a docker container for running the test
    And the hub_service proxies the HTTP requests to the docker container
    And the log_management_service collects test execution logs from the docker container
    Then the test should run successfully
    And I should be able to retrieve the test logs from the log_management_service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ChatGPT:&lt;/strong&gt; This Cucumber definition covers the main functionality of the platform, including sending a Selenium test to the hub_service, creating a test entity in the database, getting a docker container for running the test, and collecting test execution logs. It also includes assertions to ensure that the test runs successfully and that the test logs can be retrieved.&lt;/p&gt;

&lt;p&gt;As needed, you can add additional steps and scenarios to cover more specific functionality or edge cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HfeQktfm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2ASOEUHx9q4Sx8B5LB.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HfeQktfm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2ASOEUHx9q4Sx8B5LB.png" alt="image" width="800" height="1159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Upload your app for testing within seconds using the LambdaTest cloud and perform a&lt;/em&gt;&lt;/strong&gt; &lt;a href="https://www.lambdatest.com/mobile-app-testing?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=webpage"&gt;&lt;strong&gt;&lt;em&gt;mobile app test&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;&lt;em&gt;right away. Find bugs early on, improve performance, quality, and user experience and make the most of mobile application testing on LambdaTest.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding Negative Test Cases with ChatGPT
&lt;/h3&gt;

&lt;p&gt;Speaking of edge cases, it’s important to note that the Cucumber definition we received from ChatGPT for our complex test scenario doesn’t consider negative test cases. Let’s ask ChatGPT, which is built on top of GPT-3.5, to include negative test cases as well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q. Generate negative tests when test_service and linux_container_service fail&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT:&lt;/strong&gt; Here are some additional negative test scenarios that can be added to the Cucumber definition to test for failures in the test_management_service and the linux_container_service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Scenario: Test management service failure
Given I have a Selenium test ready to run
When I send the test to the hub_service
And the test_management_service fails to create a test entity in the database
Then the test should not run
And an error message should be displayed indicating the test_management_service failure

Scenario: Linux container service failure
Given I have a Selenium test ready to run
When I send the test to the hub_service
And the test_management_service successfully creates a test entity in the database
And the linux_container_service fails to provide a docker container for running the test
Then the test should not run
And an error message should be displayed indicating the linux_container_service failure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These scenarios test for failures in the test_management_service and the linux_container_service, respectively, and include assertions to ensure that the test does not run and that an error message is displayed indicating the failure. You can add similar negative test scenarios for other microservices as needed.&lt;/p&gt;
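&lt;p&gt;The contract these negative scenarios encode, that a dependency failure must stop the test and surface an error message, can be sketched in plain Java too. This is our illustration rather than ChatGPT output, and the message format is invented.&lt;/p&gt;

```java
public class FailureScenarioSketch {
    // Minimal stand-in for the dependency the hub calls first.
    interface TestManagementService {
        void createTestEntity(String test);
    }

    // If creating the test entity fails, the test must not run and the
    // error must be reported, mirroring the Gherkin steps above.
    static String runTest(String test, TestManagementService tms) {
        try {
            tms.createTestEntity(test);
            return "test ran";
        } catch (RuntimeException e) {
            return "test not run: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        TestManagementService failing = t -> {
            throw new RuntimeException("test_management_service failure");
        };
        System.out.println(runTest("login-test", failing)); // prints "test not run: test_management_service failure"
    }
}
```

The same shape extends to linux_container_service: stub each dependency, make it throw, and assert on what the caller reports.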

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VXdblYuA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AulzG_vGZKAzcKxWn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VXdblYuA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AulzG_vGZKAzcKxWn.png" alt="image" width="794" height="700"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Let’s Mix Positive and Negative Test Cases in Cucumber
&lt;/h3&gt;

&lt;p&gt;Courtesy of ChatGPT, we now have both positive and negative test cases for a complex automation test scenario. We can go one step further now and create revised Cucumber test definitions for positive and negative test cases. So we asked ChatGPT (built on top of GPT-3.5) to do the same.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q. Write Cucumber tests for both positive and negative cases for all four services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT:&lt;/strong&gt; Here is a revised Cucumber definition that includes test scenarios for both positive and negative cases for all four services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Feature: Selenium testing platform
  As a user of the platform
  I want to be able to point my Selenium tests to the platform
  So that I can run my tests on a variety of different environments and configurations

  Scenario: Successfully run a Selenium test
    Given I have a Selenium test ready to run
    When I send the test to the hub_service
    And the hub_service creates a test entity in the database using the test_management_service
    And the hub_service connects to the linux_container_service to get a docker container for running the test
    And the hub_service proxies the HTTP requests to the docker container
    And the log_management_service collects test execution logs from the docker container
    Then the test should run successfully
    And I should be able to retrieve the test logs from the log_management_service

  Scenario: Test management service failure
    Given I have a Selenium test ready to run
    When I send the test to the hub_service
    And the test_management_service fails to create a test entity in the database
    Then the test should not run
    And an error message should be displayed indicating the test_management_service failure

  Scenario: Linux container service failure
    Given I have a Selenium test ready to run
    When I send the test to the hub_service
    And the test_management_service successfully creates a test entity in the database
    And the linux_container_service fails to provide a docker container for running the test
    Then the test should not run
    And an error message should be displayed indicating the linux_container_service failure

  Scenario: Hub service failure
    Given I have a Selenium test ready to run
    When the hub_service fails to process the test
    Then the test should not run
    And an error message should be displayed indicating the hub_service failure

  Scenario: Log management service failure
    Given I have a Selenium test ready to run
    When I send the test to the hub_service
    And the hub_service successfully creates a test entity in the database and gets a docker container for running the test
    And the log_management_service fails to collect test execution logs from the docker container
    Then the test should run
    But I should not be able to retrieve the test logs from the log_management_service
    And an error message should be displayed indicating the log_management_service failure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Cucumber definition includes test scenarios for both positive and negative cases for all four services. The positive scenario covers the main functionality of the platform, while the negative scenarios test for failures in each of the microservices. These scenarios include assertions to ensure that the platform behaves as expected in positive and negative cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q_pnHVs2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2ApwwAwwTRNauetbVB.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q_pnHVs2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2ApwwAwwTRNauetbVB.png" alt="image" width="693" height="1600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Run your&lt;/em&gt; &lt;a href="https://www.lambdatest.com/selenium-automation?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=webpage"&gt;&lt;em&gt;Selenium Automation&lt;/em&gt;&lt;/a&gt; &lt;em&gt;Testing scripts on the LambdaTest cloud grid. Test on 3000+ desktop &amp;amp; mobile environments. Try it for free.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  It’s time to Playwright!
&lt;/h3&gt;

&lt;p&gt;Just like that, we got our Cucumber test files generated instantly for a highly complex test scenario. Let’s get an assist from ChatGPT on automating the Cucumber cases with the &lt;a href="https://www.lambdatest.com/playwright-testing?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=webpage"&gt;Playwright&lt;/a&gt; framework. And so we asked:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q. Generate Playwright test implementation for the above Cucumber cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT&lt;/strong&gt;: Here is a sample Playwright implementation of the Cucumber definition provided in the previous response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { playwright } = require('playwright');

// define test variables
const TEST_URL = 'https://www.example.com';
const TEST_LOGS_API = 'https://www.logs.com/api';

describe('Selenium testing platform', () =&amp;gt; {
  let browser;
  let context;
  let page;

  beforeEach(async () =&amp;gt; {
    // launch browser and create new page
    browser = await playwright.chromium.launch();
    context = await browser.newContext();
    page = await context.newPage();
  });

  afterEach(async () =&amp;gt; {
    // close browser
    await browser.close();
  });

  it('should successfully run a Selenium test', async () =&amp;gt; {
    // send test to hub_service
    await page.goto(TEST_URL);
    // create test entity in database using test_management_service
    // connect to linux_container_service to get docker container
    // proxy HTTP requests to docker container
    // collect test execution logs using log_management_service
    const logs = await page.evaluate(() =&amp;gt; fetch(TEST_LOGS_API).then(res =&amp;gt; res.json()));

    // assert that test ran successfully and logs are retrieved
    expect(logs).toBeDefined();
  });

  it('should not run test if test_management_service fails', async () =&amp;gt; {
    // send test to hub_service
    await page.goto(TEST_URL);
    // simulate failure of test_management_service
    const errorMessage = await page.evaluate(() =&amp;gt; {
      try {
        // create test entity in database using test_management_service
        throw new Error('test_management_service failure');
      } catch (error) {
        return error.message;
      }
    });

    // assert that test did not run and error message is displayed
    expect(errorMessage).toEqual('test_management_service failure');
  });

  it('should not run test if linux_container_service fails', async () =&amp;gt; {
    // send test to hub_service
    await page.goto(TEST_URL);
    // create test entity in database using test_management_service
    // simulate failure of linux_container_service
    const errorMessage = await page.evaluate(() =&amp;gt; {
      try {
        // connect to linux_container_service to get docker container
        throw new Error('linux_container_service failure');
      } catch (error) {
        return error.message;
      }
    });

    // assert that test did not run and error message is displayed
    expect(errorMessage).toEqual('linux_container_service failure');
  });

  it('should not run test if hub_service fails', async () =&amp;gt; {
    // simulate failure of hub_service
    const errorMessage = await page.evaluate(() =&amp;gt; {
      try {
        // send test to hub_service
        throw new Error('hub_service failure');
      } catch (error) {
        return error.message;
      }
    });

    // assert that test did not run and error message is displayed
    expect(errorMessage).toEqual('hub_service failure');
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What if we want to Run The Test on LambdaTest Cloud?
&lt;/h3&gt;

&lt;p&gt;Kudos to ChatGPT for handing out a ready-to-use &lt;a href="https://www.lambdatest.com/playwright-testing?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=webpage"&gt;Playwright test&lt;/a&gt; script based on Cucumber test cases. However, this Playwright script might be less useful to someone using a cloud-based test execution platform such as LambdaTest. Why? The script might need changes in declaring capabilities and authenticating to the LambdaTest cloud.&lt;/p&gt;

&lt;p&gt;So it would be interesting to find out if ChatGPT is capable of helping us out with a ready-to-use Playwright script for the LambdaTest cloud. And my goodness, we were stunned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q. Point the above Selenium tests to&lt;/strong&gt; &lt;a href="http://hub.lambdatest.com"&gt;&lt;strong&gt;hub.lambdatest.com&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT&lt;/strong&gt;: To point the above Selenium tests to &lt;a href="http://hub.lambdatest.com"&gt;hub.lambdatest.com&lt;/a&gt;, you will need to modify the TEST_URL variable to point to the appropriate URL on LambdaTest. For example:&lt;/p&gt;

&lt;p&gt;const TEST_URL = '&lt;a href="https://hub.lambdatest.com/wd/hub"&gt;https://hub.lambdatest.com/wd/hub&lt;/a&gt;';&lt;/p&gt;

&lt;p&gt;You may also need to provide additional configuration options, such as the desired browser and platform, as well as your LambdaTest credentials. This can typically be done using the playwright.chromium.launch function or by passing additional parameters to the page.goto function.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// launch browser and create new page
browser = await playwright.chromium.launch({
  headless: false,
  executablePath: '/path/to/chrome',
  args: ['--window-size=1366,768'],
});
context = await browser.newContext();
page = await context.newPage();

// navigate to LambdaTest URL with desired configuration options
await page.goto(TEST_URL, {
  waitUntil: 'networkidle2',
  timeout: 30000,
});

// set LambdaTest credentials
await page.setExtraHTTPHeaders({
  'user': 'your_username',
  'access_key': 'your_access_key',
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You may also need to modify the test scenarios to reflect the specific functionality provided by LambdaTest. For example, you may need to update the steps for creating a test entity in the database or collecting test execution logs.&lt;/p&gt;

&lt;p&gt;This is a sample implementation and may only cover some of the necessary steps for LambdaTest for &lt;a href="https://www.lambdatest.com/blog/playwright-framework/?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=blog"&gt;Playwright automation testing&lt;/a&gt;. Refer to the LambdaTest documentation for more information on configuring and running Playwright tests on the platform.&lt;/p&gt;
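&lt;p&gt;A caveat on the snippet above: the 'user' and 'access_key' headers ChatGPT invented are not a standard authentication mechanism. Services that sit behind HTTP Basic auth expect a single Authorization header carrying Base64-encoded "username:accessKey", as the generic sketch below shows; whether LambdaTest’s Playwright grid wants such a header, a capabilities object, or credentials embedded in the URL is something to confirm in their documentation.&lt;/p&gt;

```java
import java.util.Base64;

public class BasicAuthSketch {
    // Compose an HTTP Basic Authorization header from a username and
    // access key. This is generic HTTP, not a LambdaTest-specific API.
    static String basicAuthHeader(String username, String accessKey) {
        String token = Base64.getEncoder()
                .encodeToString((username + ":" + accessKey).getBytes());
        return "Basic " + token;
    }

    public static void main(String[] args) {
        System.out.println(basicAuthHeader("your_username", "your_access_key"));
    }
}
```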

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0g2jymgU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2Atvh6HrZyagGlFc6W.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0g2jymgU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2Atvh6HrZyagGlFc6W.png" alt="image" width="800" height="1206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Inspect web elements to help developers and testers debug UI flaws or make modifications in HTML or CSS files. Learn&lt;/em&gt;&lt;/strong&gt; &lt;a href="https://www.lambdatest.com/software-testing-questions/how-to-inspect-on-macbook?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=webpage"&gt;&lt;strong&gt;&lt;em&gt;how to inspect on MacBook&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt;&lt;em&gt;.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Can we do this for every Microservice?
&lt;/h3&gt;

&lt;p&gt;So far, so good. However, wouldn’t it be great if we could get the above test script modified for all four microservices we discussed in our complex test scenario, i.e., &lt;strong&gt;hub_service, test_management_service, linux_container_service, and log_management_service&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Modify the above test cases to query directly all four microservices&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT&lt;/strong&gt;: To modify the above test cases to query the four microservices directly, you will need to update the TEST_URL and TEST_LOGS_API variables to point to the appropriate URLs for each microservice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const HUB_SERVICE_URL = 'https://www.hub.com';
const TEST_MANAGEMENT_SERVICE_URL = 'https://www.test-management.com';
const LINUX_CONTAINER_SERVICE_URL = 'https://www.linux-container.com';
const LOG_MANAGEMENT_SERVICE_URL = 'https://www.log-management.com';

describe('Selenium testing platform', () =&amp;gt; {
  let browser;
  let context;
  let page;

  beforeEach(async () =&amp;gt; {
    // launch browser and create new page
    browser = await playwright.chromium.launch();
    context = await browser.newContext();
    page = await context.newPage();
  });

  afterEach(async () =&amp;gt; {
    // close browser
    await browser.close();
  });

  it('should successfully run a Selenium test', async () =&amp;gt; {
    // send test to hub_service
    await page.goto(HUB_SERVICE_URL);
    // create test entity in database using test_management_service
    await page.goto(TEST_MANAGEMENT_SERVICE_URL);
    // connect to linux_container_service to get docker container
    await page.goto(LINUX_CONTAINER_SERVICE_URL);
    // proxy HTTP requests to docker container
    // collect test execution logs using log_management_service
    const logs = await page.evaluate(() =&amp;gt; fetch(LOG_MANAGEMENT_SERVICE_URL).then(res =&amp;gt; res.json()));

    // assert that test ran successfully and logs are retrieved
    expect(logs).toBeDefined();
  });

  it('should not run test if test_management_service fails', async () =&amp;gt; {
    // send test to hub_service
    await page.goto(HUB_SERVICE_URL);
    // simulate failure of test_management_service
    const errorMessage = await page.evaluate(() =&amp;gt; {
      try {
        // create test entity in database using test_management_service
        throw new Error('test_management_service failure');
      } catch (error) {
        return error.message;
      }
    });

    // assert that test did not run and error message is displayed
    expect(errorMessage).toEqual('test_management_service failure');
  });

  // ... additional test scenarios ...
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This revised implementation queries the four microservices directly using the URLs specified in the “HUB_SERVICE_URL”, “TEST_MANAGEMENT_SERVICE_URL”, “LINUX_CONTAINER_SERVICE_URL”, and “LOG_MANAGEMENT_SERVICE_URL” variables. You can update these variables to the appropriate URLs for your specific implementation.&lt;/p&gt;

&lt;p&gt;You may also need to update the test scenarios to reflect the specific functionality provided by each microservice. For example, you may need to add additional steps to create a test entity in the database or collect test execution logs.&lt;/p&gt;

&lt;p&gt;Remember that this is just a sample implementation and may not cover everything necessary.&lt;/p&gt;
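&lt;p&gt;One way to make those URLs easy to swap per environment is to resolve each one from an environment variable, falling back to a default. The helper below is a sketch of our own, not part of ChatGPT’s output, and the variable names are assumptions:&lt;/p&gt;

```javascript
// Hypothetical helper: resolve each microservice URL from an environment
// override, falling back to the hard-coded default used in the tests above.
function resolveServiceUrls(defaults, env = process.env) {
  const urls = {};
  for (const [name, fallback] of Object.entries(defaults)) {
    urls[name] = env[name] || fallback;
  }
  return urls;
}

// Usage with the defaults from the snippet above:
const serviceUrls = resolveServiceUrls({
  HUB_SERVICE_URL: 'https://www.hub.com',
  TEST_MANAGEMENT_SERVICE_URL: 'https://www.test-management.com',
  LINUX_CONTAINER_SERVICE_URL: 'https://www.linux-container.com',
  LOG_MANAGEMENT_SERVICE_URL: 'https://www.log-management.com',
});
```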

&lt;h3&gt;
  
  
  Verify Test Entity Creation in the Database
&lt;/h3&gt;

&lt;p&gt;As pointed out by ChatGPT, we need to specify a URL for the test_management_service. Let’s point that to &lt;a href="http://testservice.lambdatestinternal.com"&gt;testservice.lambdatestinternal.com&lt;/a&gt;. Also, it would be interesting to see if we can lean on ChatGPT, built on top of GPT-3.5, to verify that we’re getting a hit in the database by adding a touch of MySQL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Assume that test_management_service is pointed at&lt;/strong&gt; &lt;a href="http://testservice.lambdatestinternal.com"&gt;&lt;strong&gt;testservice.lambdatestinternal.com&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt;. Add additional test cases to verify if the test entity is successfully created inside the database. Assume that test_management_service uses MySQL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT&lt;/strong&gt;: Here are some additional test cases that can be added to verify if the test entity is successfully created inside the database by the test_management_service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; it('should create test entity in database', async () =&amp;gt; {
    // send test to hub_service
    await page.goto(HUB_SERVICE_URL);
    // create test entity in database using test_management_service
    const response = await page.goto(TEST_MANAGEMENT_SERVICE_URL);

    // assert that test entity was created successfully
    expect(response.status()).toEqual(200);
  });

  it('should verify test entity in database', async () =&amp;gt; {
    // send test to hub_service
    await page.goto(HUB_SERVICE_URL);
    // create test entity in database using test_management_service
    await page.goto(TEST_MANAGEMENT_SERVICE_URL);
    // query database to verify test entity was created
    const result = await page.evaluate(() =&amp;gt; {
      // connect to MySQL database and execute query to retrieve test entity
      // return result of query
    });

    // assert that test entity was found in database
    expect(result).toBeDefined();
  });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These test cases cover the creation and verification of the test entity in the database using the test_management_service. The first test case sends a request to the test_management_service and asserts that the response is successful, indicating that the test entity was created.&lt;/p&gt;

&lt;p&gt;The second test case queries the database to verify that the test entity was created. You will need to modify the page.evaluate block to include the code for connecting to the MySQL database and executing the appropriate query.&lt;/p&gt;

&lt;p&gt;Remember that this is just a sample implementation and may only cover some of the necessary steps for interacting with the test_management_service and the MySQL database. Refer to the test_management_service and MySQL documentation for more information on how to access and query the database.&lt;/p&gt;

&lt;h3&gt;
  
  
  Refining the MySQL Touch with ChatGPT
&lt;/h3&gt;

&lt;p&gt;We even got a hit in the database for the right microservice. It’s now time to connect the dots with the database using MySQL. And so, we asked ChatGPT:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Add logic to connect and query Mysql in the above test cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT&lt;/strong&gt;: To add logic for connecting and querying MySQL in the above test cases, you can use the mysql npm package to establish a connection to the database and execute queries.&lt;/p&gt;

&lt;p&gt;Here is an example of how this can be done:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const mysql = require('mysql');

describe('Selenium testing platform', () =&amp;gt; {
  // ... existing test cases ...

  it('should verify test entity in database', async () =&amp;gt; {
    // send test to hub_service
    await page.goto(HUB_SERVICE_URL);
    // create test entity in database using test_management_service
    await page.goto(TEST_MANAGEMENT_SERVICE_URL);
    // query database to verify test entity was created
    const result = await page.evaluate(() =&amp;gt; {
      // create MySQL connection
      const connection = mysql.createConnection({
        host: 'localhost',
        user: 'your_username',
        password: 'your_password',
        database: 'your_database',
      });

      // execute query to retrieve test entity
      const query = 'SELECT * FROM test_entities WHERE id = ?';
      connection.query(query, [1], (error, results) =&amp;gt; {
        if (error) {
          throw error;
        }
        return results;
      });
    });

    // close MySQL connection
    connection.end();

    // assert that test entity was found in database
    expect(result).toBeDefined();
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This implementation uses the mysql.create&lt;/p&gt;
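&lt;p&gt;Two things are worth flagging in the snippet above: page.evaluate runs inside the browser, where require('mysql') is unavailable (and connection.end() is called on a variable that is out of scope), and connection.query is callback-based, so returning results from inside the callback never reaches the caller. A hedged sketch of a working shape, with the query run from Node and the callback wrapped in a Promise (the table and column names are as assumed by ChatGPT’s query):&lt;/p&gt;

```javascript
// Promise wrapper around mysql's callback-style connection.query. Run this in
// Node, outside page.evaluate; the connection is closed whether or not the
// query succeeds.
function queryTestEntity(connection, id) {
  return new Promise((resolve, reject) => {
    connection.query('SELECT * FROM test_entities WHERE id = ?', [id], (error, results) => {
      connection.end();
      if (error) reject(error);
      else resolve(results);
    });
  });
}

// Usage with the real driver (assumes the mysql package is installed):
// const mysql = require('mysql');
// const connection = mysql.createConnection({ host: 'localhost', /* ... */ });
// const result = await queryTestEntity(connection, 1);
// expect(result).toBeDefined();
```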

&lt;h3&gt;
  
  
  Let’s add Resiliency to the Code
&lt;/h3&gt;

&lt;p&gt;Awesome! Now that we have the MySQL query, our immediate concern is ensuring the tests are resilient. In case a microservice becomes temporarily unavailable, the test should automatically retry. So let’s add resilience to it. ChatGPT, would you be so kind?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Add resilience in the above test cases where test_management_service gets unavailable temporarily by adding retries.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT&lt;/strong&gt;: To add resilience to the above test cases where the test_management_service may become unavailable temporarily, you can add retries using a loop and a delay between each retry.&lt;/p&gt;

&lt;p&gt;Here is an example of how this can be done:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const mysql = require('mysql');

describe('Selenium testing platform', () =&amp;gt; {
  // ... existing test cases ...

  it('should verify test entity in database with retries', async () =&amp;gt; {
    let result;
    let retries = 0;
    const maxRetries = 3;
    const retryDelay = 1000; // 1 second

    while (!result &amp;amp;&amp;amp; retries &amp;lt; maxRetries) {
      // send test to hub_service
      await page.goto(HUB_SERVICE_URL);
      // create test entity in database using test_management_service
      await page.goto(TEST_MANAGEMENT_SERVICE_URL);
      // query database to verify test entity was created
      result = await page.evaluate(() =&amp;gt; {
        // create MySQL connection
        const connection = mysql.createConnection({
          host: 'localhost',
          user: 'your_username',
          password: 'your_password',
          database: 'your_database',
        });

        // execute query to retrieve test entity
        const query = 'SELECT * FROM test_entities WHERE id = ?';
        connection.query(query, [1], (error, results) =&amp;gt; {
          if (error) {
            throw error;
          }
          return results;
        });
      });

      // close MySQL connection
      connection.end();

      if (!result) {
        retries++;
        await new Promise(resolve =&amp;gt; setTimeout(resolve, retryDelay));
      }
    }

    // assert that test entity was found in database
    expect(result).toBeDefined();
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This implementation includes a loop that retries the test up to maxRetries times, with a retryDelay of 1 second between each retry. The test fails if the test entity is not found in the database after the maximum number of retries. This helps the test pass even if the test_management_service becomes temporarily unavailable.&lt;/p&gt;

&lt;p&gt;Remember that this is just a sample implementation and may only cover some necessary steps for adding retries to the test. You may need to adjust the retry logic and delay based on your specific needs.&lt;/p&gt;
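&lt;p&gt;The same retry pattern can be factored into a small reusable helper, so that any flaky step, not just the database check, can be wrapped. A minimal sketch (the helper name and options are our own):&lt;/p&gt;

```javascript
// Retry an async operation until it returns a truthy value, up to maxRetries
// attempts, sleeping delayMs between attempts. Returns undefined if every
// attempt comes back empty.
async function withRetries(operation, { maxRetries = 3, delayMs = 1000 } = {}) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const result = await operation();
    if (result) return result;
    if (attempt < maxRetries - 1) {
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
  return undefined;
}
```

&lt;p&gt;The body of the test then reduces to something like const result = await withRetries(() =&amp;gt; checkTestEntity());, where checkTestEntity is a hypothetical function wrapping the page navigation and database steps.&lt;/p&gt;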

&lt;h3&gt;
  
  
  Make it more Secure through Environment Variables
&lt;/h3&gt;

&lt;p&gt;Can ChatGPT (a fine-tuned version of GPT-3.5) help us pass the MySQL connection values as environment variables? Let’s see.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: I want to pass the mysql connection values via environment variables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT&lt;/strong&gt;: To pass the MySQL connection values as environment variables in the above test cases, you can use the process.env object to access the values at runtime.&lt;/p&gt;

&lt;p&gt;Here is an example of how this can be done:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const mysql = require('mysql');

describe('Selenium testing platform', () =&amp;gt; {
  // ... existing test cases ...

  it('should verify test entity in database with retries', async () =&amp;gt; {
    let result;
    let retries = 0;
    const maxRetries = 3;
    const retryDelay = 1000; // 1 second

    while (!result &amp;amp;&amp;amp; retries &amp;lt; maxRetries) {
      // send test to hub_service
      await page.goto(HUB_SERVICE_URL);
      // create test entity in database using test_management_service
      await page.goto(TEST_MANAGEMENT_SERVICE_URL);
      // query database to verify test entity was created
      result = await page.evaluate(() =&amp;gt; {
        // create MySQL connection using environment variables
        const connection = mysql.createConnection({
          host: process.env.MYSQL_HOST,
          user: process.env.MYSQL_USER,
          password:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
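&lt;p&gt;The response was cut off above, but the idea extends to the remaining connection options. A minimal sketch of our own for collecting them from process.env with an explicit check for missing values (the variable names beyond those shown by ChatGPT are assumptions):&lt;/p&gt;

```javascript
// Hypothetical helper: build the MySQL connection options from environment
// variables, throwing early when a required value is missing.
function buildDbConfig(env = process.env) {
  const required = ['MYSQL_HOST', 'MYSQL_USER', 'MYSQL_PASSWORD', 'MYSQL_DATABASE'];
  const missing = required.filter(name => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
  return {
    host: env.MYSQL_HOST,
    user: env.MYSQL_USER,
    password: env.MYSQL_PASSWORD,
    database: env.MYSQL_DATABASE,
  };
}
```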



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sUmuICBJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AAq0RNdOtUu50c6o3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sUmuICBJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AAq0RNdOtUu50c6o3.png" alt="image" width="793" height="786"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Looking for an exact solution for testing the&lt;/em&gt;&lt;/strong&gt; &lt;a href="https://www.lambdatest.com/safari-browser-for-windows?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=webpage"&gt;&lt;strong&gt;&lt;em&gt;Safari browser for Windows&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt;&lt;em&gt;? Forget about emulators or simulators and try real online Safari browsers with LambdaTest for free.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Schedule a GitHub Action and fetch Test Results with LambdaTest API
&lt;/h3&gt;

&lt;p&gt;Interesting how ChatGPT is making life easier across functions, from writing an automation script for &lt;a href="https://www.lambdatest.com/learning-hub/front-end-testing?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=learning_hub"&gt;front-end testing&lt;/a&gt; to generating a database query and connecting the dots. Wouldn’t it be a fairy tale if we could get ChatGPT to help us with &lt;a href="https://www.lambdatest.com/learning-hub/continuous-testing?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=learning_hub"&gt;continuous testing&lt;/a&gt; by configuring a GitHub action for our test scenario, scheduling testing, and fetching test results from the LambdaTest platform using the LambdaTest APIs? So we asked ChatGPT to:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Write a GitHub action to run all the above tests every day at 9 AM and point them to LambdaTest. Additionally, write a step to fetch the test result from LambdaTest using their APIs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT:&lt;/strong&gt; Here is an example of a GitHub Action that runs the above tests every day at 9 AM and points them to LambdaTest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Selenium tests

on:
  schedule:
    - cron: '0 9 * * *'

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Set up Node.js
      uses: actions/setup-node@v2
      with:
        node-version: 14.x
    - name: Install dependencies
      run: npm install
    - name: Run tests
      env:
        TEST_URL: https://hub.lambdatest.com/wd/hub
        USERNAME: ${{ secrets.LAMBDATEST_USERNAME }}
        ACCESS_KEY: ${{ secrets.LAMBDATEST_ACCESS_KEY }}
      run: npm test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This action is triggered by a scheduled event every day at 9 AM, as specified in the on: schedule block. It performs the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Check out the repository using the actions/checkout action.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up Node.js using the actions/setup-node action.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install dependencies using npm install.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the tests using npm test.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The TEST_URL, USERNAME, and ACCESS_KEY environment variables are passed to the action to configure the tests to run on LambdaTest. The USERNAME and ACCESS_KEY values are retrieved from the repository secrets, which can be configured in the repository settings.&lt;/p&gt;

&lt;p&gt;To fetch the test results from LambdaTest using their APIs, you can use the http-client npm package to make HTTP requests to the LambdaTest API.&lt;/p&gt;

&lt;p&gt;Here is an example of how to fetch the test results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { HttpClient } = require('http-client');

async function getTestResults(sessionId) {
  const client = new HttpClient();
  const username = process.env.LAMBDATEST_USERNAME;
  const accessKey = process.env.LAMBDATEST_ACCESS_KEY;
  const response = await client.get(
    `https://api.lambdatest.com/automation/api/v1/sessions/${sessionId}`,
    {
      auth: {
        username,
        password: accessKey,
      },
    },
  );
  return response.body;
}

// example usage
const testResults = await getTestResults('12345');
console.log(testResults);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function makes a GET request to the LambdaTest API to retrieve the test results for the specified session ID. The username and accessKey values are passed as HTTP basic authentication credentials. The response body is returned by the function and can be accessed to retrieve the test results.&lt;/p&gt;

&lt;p&gt;You can include this function in your test script and call it after running the tests to retrieve the test results. You can also add error handling and retries to the function in case&lt;/p&gt;
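&lt;p&gt;We could not verify an http-client package with exactly this API, so treat that snippet as illustrative. The same request can be made with no dependencies at all using the fetch built into Node 18 and later, with the Basic credentials encoded by hand; a hedged sketch against the same endpoint:&lt;/p&gt;

```javascript
// Build an HTTP Basic Authorization header from the LambdaTest username and
// access key.
function basicAuthHeader(username, accessKey) {
  return 'Basic ' + Buffer.from(`${username}:${accessKey}`).toString('base64');
}

// Fetch a session's results with Node 18+'s built-in fetch (no packages needed).
async function getTestResults(sessionId) {
  const response = await fetch(
    `https://api.lambdatest.com/automation/api/v1/sessions/${sessionId}`,
    {
      headers: {
        Authorization: basicAuthHeader(
          process.env.LAMBDATEST_USERNAME,
          process.env.LAMBDATEST_ACCESS_KEY,
        ),
      },
    },
  );
  return response.json();
}
```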

&lt;h3&gt;
  
  
  A Package.json file with the above steps
&lt;/h3&gt;

&lt;p&gt;The next step was to make this whole process portable: a package.json file that will install all the dependencies and create the environment to run the above-mentioned test scripts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Write a package.json file with different scripts to run the above tests and fetch the results from LambdaTest APIs.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT:&lt;/strong&gt; Here is an example of a package.json file with different scripts to run the above tests and fetch the results from LambdaTest APIs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "selenium-tests",
  "version": "1.0.0",
  "scripts": {
    "test": "playwright run --chromium --spec=*.spec.js",
    "fetch-results": "node fetch-results.js",
    "run-tests": "npm run test &amp;amp;&amp;amp; npm run fetch-results"
  },
  "dependencies": {
    "playwright": "^2.0.0",
    "http-client": "^1.0.0"
  },
  "devDependencies": {
    "jest": "^26.6.3"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This package.json file includes the following scripts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;test: Runs the tests using Playwright.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;fetch-results: Fetches the test results from LambdaTest using the fetch-results.js script.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;run-tests: Runs the tests and fetches the results in sequence.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To run the tests and fetch the results, you can use the run-tests script by running npm run run-tests. This will execute the test and fetch-results scripts in sequence.&lt;/p&gt;

&lt;p&gt;You will need to create the fetch-results.js script to contain the logic for fetching the test results from LambdaTest, as shown in the previous example.&lt;/p&gt;

&lt;p&gt;Remember that this is just a sample implementation and may not include.&lt;/p&gt;

&lt;h3&gt;
  
  
  Let’s put all of this on the GitHub Repo
&lt;/h3&gt;

&lt;p&gt;We now have the beginning of a complete automation testing workflow. As the next steps, we want to take all of this to GitHub with a complete GitHub Actions YAML file to automate the workflow over GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Create a GitHub repository with all the Cucumber files, test files, nodejs, and GitHub action code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT:&lt;/strong&gt; To create a GitHub repository with all the necessary files for running the Selenium tests, you can follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a new repository on GitHub.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Clone the repository to your local machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a directory for the Selenium tests, such as tests/.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the Cucumber feature files and test files in the tests/ directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Initialize the repository as a Node.js project by running npm init in the repository’s root directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the necessary dependencies, such as Playwright and the http-client npm package, by running npm install --save-dev playwright and npm install --save http-client.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the package.json file with the scripts for running the tests and fetching the results, as shown in the previous example.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the GitHub Action workflow file in the .github/workflows/ directory, as shown in the previous example.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Commit and push the changes to the repository.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once the repository is set up, you can run the tests and fetch the results by pushing a commit or manually triggering the GitHub Action workflow. The tests will be run on LambdaTest, and the results will be fetched using the LambdaTest API.&lt;/p&gt;

&lt;p&gt;Remember that this is just a sample implementation and may not include all the necessary steps for setting up a complete Selenium testing platform. You may need to adjust the steps based on your specific requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Bash Script to run everything locally on Mac
&lt;/h3&gt;

&lt;p&gt;Now that we have a complete workflow setup, we want to run all of that locally on our Mac machine. We needed a Bash script to clone the repo and automate all the above steps. Another challenge we observed with ChatGPT is that it sometimes does not create the whole script and misses crucial steps. So we had to prompt again to get a complete Bash script.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Write a Bash script to automate the above steps on my Mac&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT:&lt;/strong&gt; To automate the steps for creating a GitHub repository with Selenium tests on a Mac, you can write a bash script that performs the following actions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a new repository on GitHub.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Clone the repository to the local machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a directory for the Selenium tests, such as tests/.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the Cucumber feature files and test files in the tests/ directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Initialize the repository as a Node.js project by running npm init in the repository’s root directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the necessary dependencies, such as Playwright and the http-client npm package, by running npm install --save-dev playwright and npm install --save http-client.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the package.json file with the scripts for running the tests and fetching the results, as shown in the previous example.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the GitHub Action workflow file in the .github/workflows/ directory, as shown in the previous example.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Commit and push the changes to the repository.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is an example of a Bash script that performs these actions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create new repository on GitHub
curl -u "USERNAME" https://api.github.com/user/repos -d '{"name":"selenium-tests"}'

# clone repository
git clone https://github.com/USERNAME/selenium-tests.git

# navigate to repository directory
cd selenium-tests

# create tests directory
mkdir tests

# create cucumber feature files and test files in tests directory
touch tests/feature1.feature tests/feature1.spec.js
touch tests/feature2.feature tests/feature2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Q: Complete the above Bash file and do not generate the explanation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT:&lt;/strong&gt; Here is a completed bash script that automates the steps for creating a GitHub repository with Selenium tests on a Mac:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create new repository on GitHub
curl -u "USERNAME" https://api.github.com/user/repos -d '{"name":"selenium-tests"}'

# clone repository
git clone https://github.com/USERNAME/selenium-tests.git

# navigate to repository directory
cd selenium-tests

# create tests directory
mkdir tests

# create cucumber feature files and test files in tests directory
touch tests/feature1.feature tests/feature1.spec.js
touch tests/feature2.feature tests/feature2.spec.js

# initialize repository as Node.js project
npm init

# install dependencies
npm install --save-dev playwright
npm install --save http-client

# create package.json file with scripts for running tests and fetching results
cat &amp;gt; package.json &amp;lt;&amp;lt;EOF
{
  "name": "selenium-tests",
  "version": "1.0.0",
  "scripts": {
    "test": "playwright run --chromium --spec=*.spec.js",
    "fetch-results": "node fetch-results.js",
    "run-tests": "npm run test &amp;amp;&amp;amp; npm run fetch-results"
  },
  "dependencies": {
    "playwright": "^2.0.0",
    "http-client": "^1.0.0"
  },
  "devDependencies": {
    "jest": "^26.6.3"
  }
}
EOF

# create GitHub Action workflow file
mkdir -p .github/workflows
touch .github/workflows/selenium-tests.yml

cat &amp;gt; .github/workflows
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Bash Script to GitHub Workflow
&lt;/h3&gt;

&lt;p&gt;Now that we have a complete Bash script to set up our entire pipeline locally, we want the same to run on GitHub as well. So we needed a GitHub workflow file to add to the repo to run the setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Modify the above Bash script to generate the GitHub workflow file as well&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT:&lt;/strong&gt; To modify the Bash script to generate the GitHub Action workflow file as well, you can include the contents of the workflow file in the script and use the echo command to write it to the file.&lt;/p&gt;

&lt;p&gt;Here is an example of how this can be done:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create new repository on GitHub
curl -u "USERNAME" https://api.github.com/user/repos -d '{"name":"selenium-tests"}'

# clone repository
git clone https://github.com/USERNAME/selenium-tests.git

# navigate to repository directory
cd selenium-tests

# create tests directory
mkdir tests

# create cucumber feature files and test files in tests directory
touch tests/feature1.feature tests/feature1.spec.js
touch tests/feature2.feature tests/feature2.spec.js

# initialize repository as Node.js project
npm init

# install dependencies
npm install --save-dev playwright
npm install --save http-client

# create package.json file with scripts for running tests and fetching results
cat &amp;gt; package.json &amp;lt;&amp;lt;EOF
{
  "name": "selenium-tests",
  "version": "1.0.0",
  "scripts": {
    "test": "playwright run --chromium --spec=*.spec.js",
    "fetch-results": "node fetch-results.js",
    "run-tests": "npm run test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Now finally let’s modify the Cucumber Definitions
&lt;/h3&gt;

&lt;p&gt;So, as the final step, we want to change the Cucumber definitions so that the Log Management microservice can output responses in both XML and JSON.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Modify the Cucumber definitions to include cases where log_management_service can output a response in XML as well as JSON&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT:&lt;/strong&gt; To modify the Cucumber definitions to include cases where the log_management_service can output the response in XML as well as JSON, you can use the Content-Type HTTP header to specify the desired format.&lt;/p&gt;

&lt;p&gt;Here is an example of how this can be done:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Feature: Selenium testing platform
  As a user of the platform
  I want to be able to retrieve test execution logs in different formats
  So that I can use the logs in different applications

  Scenario: Retrieve test execution logs in JSON format
    Given I have sent a selenium test to the hub_service
    And the test has been executed on a Linux docker container
    When I request test execution logs from the log_management_service in JSON format
    Then I should receive a valid JSON response with the test execution logs

  Scenario: Retrieve test execution logs in XML format
    Given I have sent a selenium test to the hub_service
    And the test has been executed on a Linux docker container
    When I request test execution logs from the log_management_service in XML format
    Then I should receive a valid XML response with the test execution logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To implement the request to the log_management_service, you can use a tool such as the request npm package to send an HTTP request with the Content-Type header set to application/json or application/xml, depending on the desired format.&lt;/p&gt;

&lt;p&gt;Here is an example of how this can be done:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const request = require('request');

describe('Selenium testing platform', () =&amp;gt; {
  // ... existing test cases ...

  it('should retrieve test execution logs in JSON format', async () =&amp;gt; {
    const response = await new Promise((resolve, reject) =&amp;gt; {
      request(
        {
          method: 'GET',
          uri: LOG_MANAGEMENT_SERVICE_URL,
          headers: {
            'Content-Type': 'application/json',
          },
        },
        (error, response, body) =&amp;gt; {
          if (error) {
            reject(error);
          } else {
            resolve(response);
          }
        },
      );
    });
    expect(response.headers['content-type']).toEqual('application/json');
    expect(response.status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
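&lt;p&gt;One detail worth flagging in the generated code: Content-Type describes the request body, which a GET request does not have; the conventional header for asking the server to return JSON or XML is Accept. A small sketch of our own for building the request options accordingly:&lt;/p&gt;

```javascript
// Build request options that ask the log_management_service for a given
// response format ('json' or 'xml') via the Accept header.
function logRequestOptions(format) {
  const mimeTypes = { json: 'application/json', xml: 'application/xml' };
  const accept = mimeTypes[format];
  if (!accept) {
    throw new Error(`Unsupported format: ${format}`);
  }
  return { method: 'GET', headers: { Accept: accept } };
}
```

&lt;p&gt;These options can then be spread into the request call, e.g. request({ uri: LOG_MANAGEMENT_SERVICE_URL, ...logRequestOptions('xml') }, callback).&lt;/p&gt;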



&lt;p&gt;&lt;strong&gt;&lt;em&gt;In this article, we take a look at some aspects of simulation and discuss some ways through which we can use iPhone Simulator, an&lt;/em&gt;&lt;/strong&gt; &lt;a href="https://www.lambdatest.com/blog/iphone-simulators-on-windows/?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=mar29_ap&amp;amp;utm_term=ap&amp;amp;utm_content=blog"&gt;&lt;strong&gt;&lt;em&gt;iOS emulator for PC&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt;&lt;em&gt;, on Windows.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What we learned when we created Automation Tests with ChatGPT
&lt;/h3&gt;

&lt;p&gt;The whole experience was fun. We had goosebumps every time the platform exceeded our expectations. But here are some realistic learnings from all of the experimentation above, both good and bad.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros of ChatGPT for Test Automation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Adding Resiliency&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of the things that testers miss out on, or consider an overburden, is adding resiliency to the automation code. It is usually deprioritized as a Priority 2 or Priority 3 item. However, as we saw in the example above, ChatGPT can easily generate code that adds resiliency to an automation script.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Ensuring Security&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Adding security-oriented best practices should be prioritized from day 1. However, if you have unsecured legacy code or a shortage of time to find security vulnerabilities in the code, ChatGPT, a chatbot built on GPT-3.5, can help out. In addition, we saw in the above examples that some of the code pieces that ChatGPT created were secure by default. For example, it leveraged the GitHub Secrets feature. But sometimes, you may have to prompt the platform to create secure code. For example, we had to deliberately ask ChatGPT to use environment variables.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Enabling Testers to overcome the learning curve&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ChatGPT is an excellent tool for overcoming limitations around knowledge of a particular technology. For example, if you are a Selenium expert but are not well-versed in GitHub pipelines, you can use ChatGPT to at least get started, with starter code to help you create GitHub workflows. We demonstrated the same in the above examples as well.&lt;/p&gt;

&lt;p&gt;However, a word of caution is that ChatGPT is not perfect or foolproof. It’s just another tool to make your life easier, but if you want to truly succeed, you cannot be completely dependent on ChatGPT. You would have to do a deep dive into the technology.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Accelerating debugging of code&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When it comes to debugging, ChatGPT is a useful addition to any software developer’s toolkit. There are examples on the Internet where people have pasted their code into ChatGPT and got back the exact reason for the failure as a response. Again, this is not 100% foolproof, and ChatGPT may miss obvious issues, but it can still help you get started or give you a new perspective while debugging the code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cons of ChatGPT for Test Automation
&lt;/h3&gt;

&lt;p&gt;While ChatGPT, built on GPT-3.5, does have many advantages and uses, there are a few disadvantages to be aware of.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;It is based on statistical patterns and does not understand the underlying meaning&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ChatGPT is built on top of GPT-3.5, which is an autoregressive language model. However, one of the biggest challenges in this approach is that it is very heavily dependent on statistical patterns.&lt;/p&gt;

&lt;p&gt;The model uses statistical patterns to predict the next words based on the words that came before. However, it has no underlying understanding of what those words mean.&lt;/p&gt;

&lt;p&gt;This means that it cannot be used as effectively in situations where the user’s questions or statements require an understanding of a context that has not been explained before.&lt;/p&gt;

&lt;p&gt;While these may seem like minor limitations, they matter a great deal if you depend on ChatGPT for testing. For example, the accuracy of ChatGPT drops drastically when you have to create test cases that require a deep prior understanding of the system under test.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Learning gaps&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The underlying technology in ChatGPT, the GPT-3.5 language model, is a deep learning model trained on large data sets of human-generated content. We assume it has also learned code as text, which is why it is able to produce such accurate code.&lt;/p&gt;

&lt;p&gt;That means it cannot accurately respond to things it has not learned before or may give wrong information if its learning has not been updated.&lt;/p&gt;

&lt;p&gt;For example, if its last training run predates a framework deprecating half of its methods, the code it creates will still use those deprecated methods. So the user has to ensure that the final code they use is up to date.&lt;/p&gt;
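&lt;p&gt;One way to guard against outdated generated code is to treat deprecation warnings as errors in CI, sketched here with Python’s standard &lt;code&gt;warnings&lt;/code&gt; module. The &lt;code&gt;find_element_by_id&lt;/code&gt; stand-in is purely illustrative, modeled loosely on the &lt;code&gt;find_element_by_*&lt;/code&gt; helpers that Selenium 4 deprecated; it is not Selenium’s actual implementation.&lt;/p&gt;

```python
import warnings

def find_element_by_id(element_id):
    """Stand-in for an API a framework has since deprecated."""
    warnings.warn("use find_element(By.ID, ...) instead",
                  DeprecationWarning, stacklevel=2)
    return f"<element {element_id}>"

# Turn deprecation warnings into errors so generated code that relies
# on outdated APIs fails fast instead of silently rotting.
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        find_element_by_id("login")
    except DeprecationWarning as w:
        print("outdated call caught:", w)
```

&lt;p&gt;Running the suite once with this filter enabled surfaces every deprecated call the generated code leaned on.&lt;/p&gt;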

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Incomplete code&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another challenge of creating code through ChatGPT is that you have to deal with partially written code. This is also evident in our examples above, where we had to deliberately ask for complete code for bash automation.&lt;/p&gt;

&lt;p&gt;So if you are dependent on ChatGPT-based code, you would first have to understand the incomplete code, finish it, or modify it to suit your needs. And as you can imagine, this is often a challenging thing to do as there are so many things that could go wrong. Even if you manage to get what you want, the final product will likely not be as good as if you were to write the code from scratch. But on the flip side, sometimes extending the code or debugging the code may be easier than creating repetitive code from scratch.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Assumptions&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ChatGPT is dependent on assumptions. Software testers are trained to identify hidden factors that could cause an app to fail, and when they do, they build checks for these issues into their test cases. But what happens when your testers aren’t given enough time to test all their assumptions? What if the information needed to validate an assumption isn’t available? When other teams, like QA or Development, are involved in building the product, this can be difficult to control.&lt;/p&gt;

&lt;p&gt;The same problem exists with ChatGPT. The platform starts with many assumptions about the use case you input. Most of the time these assumptions are evident and easy to work around, but sometimes they lead to very inaccurate code that will not make your life easier.&lt;/p&gt;

&lt;p&gt;ChatGPT is still in its early stages, and constant updates are being made to add features and fix bugs. It is also a constantly learning model: as more people use it and surface issues, its accuracy will continue to increase and its learning gaps will continue to close.&lt;/p&gt;

&lt;p&gt;This means users must stay on top of these changes to continue using ChatGPT efficiently.&lt;/p&gt;

&lt;h3&gt;
  
  
  Future of testing with AI: Will it replace testing teams?
&lt;/h3&gt;

&lt;p&gt;ChatGPT, a chatbot built on top of GPT-3.5, and similar AI technologies have tremendous potential in the field of testing and test automation. They can make a tester’s life much easier and can significantly accelerate their efforts. However, it is still not possible to say that they will replace testing teams.&lt;/p&gt;

&lt;p&gt;Also, as stated at the start of the post, the generated code could be better; when you try to run it, it will often throw errors. But even with that said, it is a good starting point that can be polished and taken further for a better implementation.&lt;/p&gt;

&lt;p&gt;This tool, if used correctly, will enable teams to get started with testing tasks much earlier and faster. Proper tooling built with this technology in the background will let testers worry less about automation plumbing and focus on the test cases themselves.&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>automationtesting</category>
      <category>cloudtesting</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Only The Best For Our Beloved Open Source Community</title>
      <dc:creator>Mayankbhola1</dc:creator>
      <pubDate>Fri, 07 Oct 2022 16:30:56 +0000</pubDate>
      <link>https://dev.to/mayankbhola1/only-the-best-for-our-beloved-open-source-community-3o9b</link>
      <guid>https://dev.to/mayankbhola1/only-the-best-for-our-beloved-open-source-community-3o9b</guid>
      <description>&lt;p&gt;We, at LambdaTest, are big-time believers of open source. From our inception to date, we’ve relied a lot on open-source tools. We’ve also given back to the community in our own little ways, but now, we think it is time we really show how much the open source community means to us.&lt;/p&gt;

&lt;p&gt;We want to give away 10 parallel sessions of &lt;a href="https://www.lambdatest.com/hyperexecute?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=oct07_sd&amp;amp;utm_term=sd&amp;amp;utm_content=webpage"&gt;HyperExecute&lt;/a&gt; and complete no-restrictions access to &lt;a href="https://www.lambdatest.com/test-at-scale?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=oct07_sd&amp;amp;utm_term=sd&amp;amp;utm_content=webpage"&gt;Test-at-Scale (TAS)&lt;/a&gt; to any open source project that would find it useful.&lt;/p&gt;

&lt;p&gt;HyperExecute is our IP-led next-gen smart test orchestration platform that helps run end-to-end Selenium tests at the fastest speed possible. It is up to 70% faster than traditional Selenium Grid clouds and can help projects achieve quicker time-to-market.&lt;/p&gt;

&lt;p&gt;If HyperExecute is about speed, TAS is all about test intelligence and will help teams accelerate testing.&lt;/p&gt;

&lt;p&gt;We explain how open-source projects can benefit from these offerings below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Also read&lt;/strong&gt; — &lt;a href="https://www.lambdatest.com/blog/top-selenium-open-source-projects-on-github/?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=oct07_sd&amp;amp;utm_term=sd&amp;amp;utm_content=blog"&gt;The Top 52 Selenium Open Source Projects On GitHub&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is HyperExecute?
&lt;/h2&gt;

&lt;p&gt;LambdaTest started as a &lt;a href="http://lambdatest.com?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=oct07_sd&amp;amp;utm_term=sd&amp;amp;utm_content=webpage"&gt;cross browser testing&lt;/a&gt; tool, and over a period of time, we became a test execution platform. All this while, we’ve constantly spoken to customers to understand their various pain points. One of the biggest challenges developers/testers face is test execution speed and the lack of smart test orchestration.&lt;/p&gt;

&lt;p&gt;Legacy cloud infrastructure isn’t as fast as a local setup. But, on the other hand, local setups don’t have the features that can be offered on the cloud. It has always been a trade-off.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ue1EiFYY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AokJ80z_1WIYX1JmV.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ue1EiFYY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AokJ80z_1WIYX1JmV.png" alt="" width="880" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While this was the core of our problem statement, we also had a few other strands that together formed our hypothesis:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The need to achieve parity between the sophisticated tooling available to developers and to testers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The lack of an integrated testing environment&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our ultimate aim was that we wanted to let the users focus on writing test cases and just let us handle the rest. We wanted to make test execution as easy and fast as possible.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ekNCo_MnC9k"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  HyperExecute vs Traditional Cloud Grids
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;TRADITIONAL GRIDS&lt;/th&gt;
&lt;th&gt;HYPEREXECUTE&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Triggered test scenarios are first sent to the Hub, which in turn routes it to the best-suited Node, resulting in unnecessary lag.&lt;/td&gt;
&lt;td&gt;Merges all the components into a single execution environment, thereby ensuring all the components ‘talk’ to each other just like they do in a local network.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multiple network hops increase test flakiness, a factor that can be a huge hindrance to time-to-market.&lt;/td&gt;
&lt;td&gt;Achieves native-like performance on the cloud! Brands also have the flexibility of enabling all the available features in a cloud grid.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Traditional grids are slow, brittle, expensive, and flaky.&lt;/td&gt;
&lt;td&gt;The fastest cloud-based test execution and orchestration platform, with zero flakiness.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Debugging is a nightmare. Users have to do all the heavy lifting to decide what and how to run tests.&lt;/td&gt;
&lt;td&gt;HyperExecute orchestrates tests with intelligent test auto-splitting, smart ordering, and retries.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Legacy test clouds are an ETL nightmare.&lt;/td&gt;
&lt;td&gt;HyperExecute has a single automation dashboard view with both terminal logs of test commands and complete test execution logs. All test execution logs like terminal logs, test run video, network logs, commands, exceptions, and more are all available in one place.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1Abq7hhD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ac52ves4g5ovxX77a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1Abq7hhD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ac52ves4g5ovxX77a.png" alt="" width="880" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Also read&lt;/strong&gt; — &lt;a href="https://www.lambdatest.com/blog/why-did-we-build-hyperexecute/?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=oct07_sd&amp;amp;utm_term=sd&amp;amp;utm_content=blog"&gt;Why did we build HyperExecute?&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Test your web and mobile apps on &lt;a href="https://www.lambdatest.com/android-emulator-online?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=oct07_sd&amp;amp;utm_term=sd&amp;amp;utm_content=webpage"&gt;online Emulator Android&lt;/a&gt;. Ensure your apps are compatible across latest and legacy Android operating systems, devices, and browsers.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  HyperExecute’s Feature list
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Fastest End-to-End Test Execution Cloud
&lt;/h3&gt;

&lt;p&gt;HyperExecute is up to 70% faster than any cloud-based test execution grid. It is highly scalable, and its intelligent features ensure that you focus on writing your test cases rather than worrying about the execution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Real-Time Console and Execution Logs
&lt;/h3&gt;

&lt;p&gt;HyperExecute gives a single automation dashboard view with both terminal logs of test commands and complete test execution logs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Runs on Linux Containers, macOS, and Windows
&lt;/h3&gt;

&lt;p&gt;HyperExecute platform is designed with hosted runners for every major OS, including Windows, macOS, and Linux Containers. These runners come with pre-installed support for major programming languages and frameworks, including Node.js, Java, PHP, Python, C#, GO, Ruby, and Rust.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LEpar4_E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A5X6xMd39556BXP_J.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LEpar4_E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A5X6xMd39556BXP_J.png" alt="" width="880" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Intelligent Test Orchestration
&lt;/h3&gt;

&lt;p&gt;HyperExecute comes with intelligent test auto-splitting, smart ordering, and retries. This will help cut down developer feedback time drastically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automatic Report Generation
&lt;/h3&gt;

&lt;p&gt;With all test execution data available in a single place, HyperExecute can help teams analyze the quality of their builds on a single platform through automatic reports.&lt;/p&gt;

&lt;h3&gt;
  
  
  Massive Parallel Testing
&lt;/h3&gt;

&lt;p&gt;With HyperExecute, you can run your tests in parallel at a massive scale. Teams can run the same test across multiple browser environments at scale via matrix-based multiplexing. This results in reduced test creation times.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a0BCBd9y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2APpNwveK8aGf8vHag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a0BCBd9y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2APpNwveK8aGf8vHag.png" alt="" width="880" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  TAS: What’s that?
&lt;/h2&gt;

&lt;p&gt;While HyperExecute is all about brute speed, Test at Scale (TAS) brings in the much-needed intelligence layer. Even with all the automation, engineering teams still find it hard to speed up their development because a lot of pointless tests are run, thereby resulting in clogged pipelines, low productivity, high resource costs, and obviously slow release cycles. And not to forget, all the losses incurred due to flaky tests getting in the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r6fr7B3C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3446/0%2A-yTiHk5qdpYB_p_I.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r6fr7B3C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3446/0%2A-yTiHk5qdpYB_p_I.jpg" alt="" width="880" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TAS will help accelerate the whole pipeline from development to release, get quick feedback on code changes, manage flaky tests and help in keeping the master green for teams.&lt;/p&gt;

&lt;p&gt;TAS’ smart features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Smart Test Selection to intelligently interpret and run only the relevant subset of tests that are impacted, resulting in up to 95% reduction in testing durations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flag, quarantine, and manage flaky tests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data intelligence layer to give complete visibility into KPIs like Mean Time to Repair (MTTR), Mean Time Between Failures (MTBF), flake rate, failed tests, and more.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
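&lt;p&gt;The Smart Test Selection idea above can be illustrated with a toy sketch. The real platform derives the impact map from code analysis; here the dependency map and file names are hand-written purely for illustration.&lt;/p&gt;

```python
# Toy impact-based test selection: run only the tests whose
# dependencies intersect the set of changed files.
DEPENDS_ON = {
    "test_login.py": {"auth.py", "session.py"},
    "test_checkout.py": {"cart.py", "payment.py"},
    "test_search.py": {"search.py"},
}

def select_tests(changed_files):
    """Return only the tests impacted by the change set."""
    changed = set(changed_files)
    return sorted(t for t, deps in DEPENDS_ON.items() if deps & changed)

print(select_tests(["payment.py"]))  # only the checkout tests run
```

&lt;p&gt;Skipping the unimpacted tests is where the large reductions in testing duration come from.&lt;/p&gt;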

&lt;p&gt;TAS was recently beta-launched for the open-source community.&lt;/p&gt;

&lt;p&gt;With the potent combo of HyperExecute and TAS, testing is no longer a hard task. Teams no longer just run tests; they orchestrate them and make data-backed decisions to ensure quality standards.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/pfzA5bsxf_E"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Test on Latest Safari Desktop and Mobile Browsers for Cross Browser Compatibility. Test on real Mac machines running real &lt;a href="https://www.lambdatest.com/test-on-safari-browsers?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=oct07_sd&amp;amp;utm_term=sd&amp;amp;utm_content=webpage"&gt;Safari browser online&lt;/a&gt;.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s in it for the Open-Source Community?
&lt;/h2&gt;

&lt;p&gt;As stated at the beginning of this blog, we are firm believers in the &lt;a href="https://www.lambdatest.com/open-source?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=oct07_sd&amp;amp;utm_term=sd&amp;amp;utm_content=webpage"&gt;open source community&lt;/a&gt;. We owe a big chunk of our success to it. We genuinely want to give back to the community by opening up our biggest product launches for free.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--62NhAzRb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3482/0%2ANEOqZt0ALXApksFt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--62NhAzRb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3482/0%2ANEOqZt0ALXApksFt.jpg" alt="" width="880" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Any open source project will get 10 parallel sessions of HyperExecute and unlimited access to TAS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s more?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You also get priority support as a part of this offering. You are just a form fill away from experiencing the future of testing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aPOzCUrt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AkJsWWa9dmIKSqxVc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aPOzCUrt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AkJsWWa9dmIKSqxVc.png" alt="" width="880" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Check this out: &lt;a href="https://www.lambdatest.com/?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=oct07_sd&amp;amp;utm_term=sd&amp;amp;utm_content=webpage"&gt;Browser test&lt;/a&gt; &amp;amp; app testing cloud to perform both exploratory and automated testing across 3000+ different browsers, real devices and operating systems.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We want to use this opportunity to learn from the community as well. The open-source community is vibrant and experimental, and who better to give us valuable feedback? With your comments, feedback, and ideas, we want to make both HyperExecute and TAS better.&lt;/p&gt;

&lt;p&gt;Looking forward to a fruitful collaboration.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/F8BAEIgG6Kw"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>webdev</category>
      <category>community</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Why did we build HyperExecute?</title>
      <dc:creator>Mayankbhola1</dc:creator>
      <pubDate>Thu, 06 Oct 2022 16:44:14 +0000</pubDate>
      <link>https://dev.to/mayankbhola1/why-did-we-build-hyperexecute-30kj</link>
      <guid>https://dev.to/mayankbhola1/why-did-we-build-hyperexecute-30kj</guid>
      <description>&lt;p&gt;Testing isn’t an easy job.&lt;/p&gt;

&lt;p&gt;Trust us, we know.&lt;/p&gt;

&lt;p&gt;Given today’s rapid pace of innovation, it is what distinguishes the best companies from the rest of the pack. Any brand today must offer the best customer experience across all possible types of devices, versions and browsers, and more importantly, at record speeds.&lt;/p&gt;

&lt;p&gt;Sadly, testing infrastructure is still lagging behind. There hasn’t really been any noteworthy innovation that has been optimized for speed. Our company has built a cloud-based test execution infrastructure, and one of the constant asks from our customers has been that cloud infrastructure isn’t as fast as a local setup. But, on the other hand, local setups don’t have the features that can be offered on cloud. We knew there had to be an answer to this rather glaring problem.&lt;/p&gt;

&lt;p&gt;We thought long and hard about it. We began working on a germ of an idea almost six months ago and after a lot of building, re-building and of course, a lot of back and forth discussions, we finally launched HyperExecute.&lt;/p&gt;

&lt;p&gt;HyperExecute is a next-gen smart testing platform to help businesses run end-to-end Selenium tests at the fastest speed possible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iYWkQuvp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A8f9-Alm7yJ1EbE45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iYWkQuvp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A8f9-Alm7yJ1EbE45.png" alt="" width="880" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Check this out: &lt;a href="https://www.lambdatest.com/safari-browser-for-windows?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=oct06_sd&amp;amp;utm_term=sd&amp;amp;utm_content=webpage"&gt;Safari browser for windows&lt;/a&gt;- Say No To Safari VM! Perform Cross Browser Compatibility Testing On All Safari Browser Versions Across Real Browsers And Operating Systems.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why HyperExecute?
&lt;/h2&gt;

&lt;p&gt;We had a few thoughts that formed the base on which HyperExecute was conceptualized. They were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The need to achieve parity between the sophisticated tooling available to developers and to testers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The lack of an integrated testing environment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The fact that customers had to compromise on speed when they moved to the cloud, which was not fair.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal was to let the users focus on writing test cases and leave the rest to us. Our motto for HyperExecute was — you write tests and let us do the rest. It sounds a bit too simple, but that’s the ultimate aim of the product — make testing really that easy and fast.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xIbOy00A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A_owUJQBSj_wm1dts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xIbOy00A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A_owUJQBSj_wm1dts.png" alt="" width="880" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In short, HyperExecute brings together the speed of a local setup and the best features of the cloud — speed + best-in-class intelligence + zero maintenance + Autoscaling.&lt;/p&gt;

&lt;p&gt;HyperExecute drastically cuts the feedback time for developers by executing tests at hyper speed. The tooling does the heavy lifting instead of testers modifying their code every time.&lt;/p&gt;

&lt;p&gt;Wait, so how do we really achieve this?&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ekNCo_MnC9k"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Taking a step back, existing automation testing platforms are inherently slow because of the many network hops that happen during each test. In the traditional approach, triggered test scenarios are first sent to the Selenium Hub, which in turn schedules them to run on the best-suited Selenium node. This results in unnecessary latency, as many network components are involved in the process. Multiple network hops between separated components also increase test flakiness, a factor that is a big hindrance to time to market.&lt;/p&gt;

&lt;p&gt;HyperExecute unifies all the components into a single execution environment that eliminates these network hops and cuts down on test execution times. This enables businesses to test code and fix issues at a much faster pace and hence achieve an accelerated time to market.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9fm-P_q---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2ABtLET0Io52upxSZ7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9fm-P_q---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2ABtLET0Io52upxSZ7.png" alt="" width="880" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Introducing Test at Scale: a test intelligence and observability platform that shortens testing durations and provides faster feedback. It gives developers early insights into flaky tests and enhances their productivity so they can ship code frequently with confidence.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/wSyrKgy9FTg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Check this out: &lt;a href="https://www.lambdatest.com/blog/emulator-vs-simulator-vs-real-device/?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=oct06_sd&amp;amp;utm_term=sd&amp;amp;utm_content=blog"&gt;Emulator vs Simulator&lt;/a&gt; vs Real Device Testing: Key Differences&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why are we bullish on HyperExecute?
&lt;/h2&gt;

&lt;p&gt;ITE. That’s why!&lt;/p&gt;

&lt;p&gt;Integrated Testing Environment.&lt;/p&gt;

&lt;p&gt;HyperExecute provides CI/CD-type native features for testing as well. When there can be an Integrated Development Environment, there certainly must be an Integrated Testing Environment.&lt;/p&gt;

&lt;p&gt;It basically means that all the operations that are required to conduct a test should be available at a single place and most of them must be automated.&lt;/p&gt;

&lt;p&gt;Let me give you an example of what HyperExecute can do. Say a tester ran 10 tests yesterday and the eighth one failed. When the tester runs these tests again the next day, HyperExecute will intelligently reorder them: it runs the eighth test first, because that is the one the tester wants feedback on quickly. They aren’t waiting to see the passed tests pass again. HyperExecute moves it to the top so that you get the fastest feedback possible. Similarly, if you have 1,000 tests and want to run them on 10 virtual machines, we will intelligently group them according to their latest execution times so that the large test cases are distributed evenly.&lt;/p&gt;
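&lt;p&gt;The two behaviors described above, failed-first ordering and even distribution by execution time, can be sketched roughly like this. This is a simplification for illustration, not HyperExecute’s actual scheduler; the test names and durations are made up.&lt;/p&gt;

```python
import heapq

def order_failed_first(tests, failed_last_run):
    """Run yesterday's failures first for the fastest feedback."""
    return sorted(tests, key=lambda t: t not in failed_last_run)

def split_across_machines(durations, machines):
    """Greedy longest-processing-time split: assign each test (longest
    first) to the currently least-loaded machine, so large tests are
    spread evenly across machines."""
    heap = [(0.0, i, []) for i in range(machines)]  # (load, machine, tests)
    heapq.heapify(heap)
    for name, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        load, i, bucket = heapq.heappop(heap)
        bucket.append(name)
        heapq.heappush(heap, (load + secs, i, bucket))
    return [bucket for _, _, bucket in sorted(heap, key=lambda x: x[1])]

print(order_failed_first(["t1", "t2", "t8"], {"t8"}))  # t8 runs first
print(split_across_machines({"a": 90, "b": 60, "c": 50, "d": 40}, 2))
```

&lt;p&gt;The greedy split keeps the per-machine totals close together, which is what keeps the slowest machine, and therefore the whole build, from dragging.&lt;/p&gt;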

&lt;p&gt;These are just a few examples of what’s possible with HyperExecute. When we were building it, our core product was ready quite early, but we knew that we had to give our customers something more than just the raw power. We packaged it with smart features like real-time console logs for test execution, intelligent grouping of tests to reduce total build time, artifacts management, automatic reporting, and automatic retries on failures.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---qs_Ad5A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ab5qURbLVcV02sAEa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---qs_Ad5A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ab5qURbLVcV02sAEa.png" alt="" width="880" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We wanted HyperExecute to be so clearly the best choice that users would never need hand-holding or explanations; they should just use it and see value instantly.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/DzeJjKv8n3c"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;&lt;a href="https://www.lambdatest.com/blog/playwright-framework/?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=oct06_sd&amp;amp;utm_term=sd&amp;amp;utm_content=blog"&gt;Playwright Automated Testing&lt;/a&gt;: Getting Started With Playwright Framework&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s the path ahead?
&lt;/h2&gt;

&lt;p&gt;We launched HyperExecute a few weeks ago, and we are already seeing a tremendous response to it. While it supports Selenium today, we want to bring in more frameworks in the days to come. We hope HyperExecute redefines test execution and gives digital brands a smart test orchestration platform that can truly match their speed of innovation!&lt;/p&gt;

&lt;p&gt;Reach out to us &lt;a href="https://www.lambdatest.com/contact-us?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=oct06_sd&amp;amp;utm_term=sd&amp;amp;utm_content=webpage"&gt;here&lt;/a&gt; to know more about HyperExecute.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XE8suJGR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A8Y4FQMlhB5F677WV.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XE8suJGR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A8Y4FQMlhB5F677WV.png" alt="" width="880" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
