Harshit Paul for LambdaTest

Posted on • Originally published at lambdatest.com

Skyrocket Your Cross Browser Testing with Minimal Effort

One thing that is evident with developers is their preference for IDEs, operating systems, browsers, etc. Take the case of web developers: a majority of them have an affinity towards certain browsers, and due to that preference, they tend to perform cross browser testing of their source code only on ‘browsers of their choice’. After such testing, the functionalities programmed by the web developer may work fine on those specific browsers, but the situation in the real world is completely different. The users of your web app or website might come from different parts of the world and may have different preferences regarding browsers (or browser versions). Some customers may even prefer completely outdated browsers that hold a minuscule share of the browser market. How can you & your team deal with such a situation? It is not feasible to test the functionalities on all ‘existing browsers’ running on various operating systems, nor is it advisable to verify the code on only a narrow subset of browsers.

Based on the target market, your development & marketing teams will have a breakdown of the browsers being used in that market, along with details about device types & operating systems. Hence, the ideal approach is to test on the popular browsers available in the market, along with the browsers that are widely used by the audience in your target market. The first scenario, where you verify the functionalities on all available browsers (irrespective of their usage), would require ‘infinite resources’, whereas the second approach is more calculated since the cross browser compatibility check is performed on the browsers used by the majority of your user base. Cross browser testing should be an iterative process i.e. testing on the ‘shortlisted browsers, operating systems, devices, etc.’ should be performed before the changes are pushed to the production server, and repeated before every new release.

Cross-browser Compatibility Testing — No replacement for manual testing

It is a known fact that before any developer pushes code (either to the development environment or the staging environment before migrating to the production environment), they perform unit testing on the changes they have made. For unit testing, developers have a variety of unit-testing frameworks to choose from; JUnit and Jasmine are among the most popular. Other types of tests performed at a module/package level are functional tests and visual regression tests. Cucumber is a popular choice for Behavior-Driven Development (BDD) or functional testing, and a visual screenshot comparison tool named Wraith is often preferred for visual regression testing.

Manual cross browser testing cannot cover all the scenarios, and even if it could, you should not deploy human resources on tests that require little intelligence, since such tests are mostly repetitive in nature. Many organizations following continuous integration and continuous delivery now implement automation testing to validate cross browser compatibility scenarios that are mostly linear in approach, with test results logged in a report. However, many user scenarios are non-linear/unpredictable as far as web apps or websites are concerned. An automation test might not be able to figure out whether an image is displayed properly on a ‘particular category of Android device’. Hence, automation testing (or testing via AI bots) would be adopted to a larger extent if it could replicate such user scenarios (or ad-hoc scenarios).

Though there are different types of testing — Accessibility Testing, Usability Testing, Automation Testing, Regression Testing, and so on, none of them can substitute for another, and the same applies to cross browser testing, which is ideally used for ‘polishing’ the product so that it is usable on different browsers running on operating systems installed across various devices.


Primary Objectives of Cross Browser Testing

When you think about creating a cross browser testing matrix, the main intent of performing different types of tests is to unearth bugs, get them fixed, and make the product better across as many devices, browsers, and browser versions as possible. Though everyone in your team might vouch for this intent, the purpose of cross browser testing differs when looked at through the lens of a developer versus a tester. Developers test their module to unearth and fix bugs, but their verification is limited to that module, whereas testers verify the product from an end-user point of view. Testers are normally elated when they come across a bug since it makes their efforts count, but it is important to understand whether a test is performed keeping the ‘target audience’ in mind or only to increase the ‘overall bug count’. It therefore becomes important to outline the ‘primary objective of cross browser testing’ and highlight the ‘corner scenarios’ as well as the scenarios that can be ignored.

Based on the target audience of your web app/website, you can devise a cross browser testing strategy that covers the major scenarios. The important question to ask is: should your team work on making the product better by finding bugs on the basis of test cases performed on popular/latest browsers? When a bug is found, you have three options: ignore it; fix it & submit the code changes after re-testing on the most preferred browsers; or fix it & submit the code changes after re-testing on all available browsers (including browsers that are unpopular or rarely used in your target market). There are pros & cons associated with each approach. You cannot simply ignore a bug, but you do need to ‘prioritize fixing of the bug’. If the development team fixes bugs & tests on all possible browsers, there might be a delay in shipping the product; if they fix bugs only on ‘certain browsers’, there is a possibility of shipping a defective product. This is a complex trade-off, and it needs to be resolved before cross browser testing is performed.

How do you plan your cross browser testing activity, and how can you expedite it? Below are some pointers that can be used to plan & execute cross browser testing.

Identification of Browser, OS, and Device combinations

The first & foremost step of development is identifying the requirements of the customer. Along with the requirements, you should also have a deep understanding of the customer and of market segmentation. Market segmentation includes understanding the traits of the consumer, the preferred browsers in that particular market, the category of devices (phones/tablets) being used by consumers, etc.

Once you have identified these requirements, it becomes important to test your web app/website thoroughly against those browser, device, and operating system configurations. Companies that develop browsers (Chrome, Firefox, Opera, etc.) frequently push updates & fixes, hence your development & test teams need to take a pragmatic approach and prioritize the browser versions on which testing should be performed after major fixes are done by the development team. Once the website goes live, you can get more details about user preferences using Google Analytics or other analytics engines integrated into your website. You can also run through a cross browser testing checklist for your website before it goes live.


Preliminary Research

Cross browser testing your product across browser & device variations is akin to planning & executing attacks on a battlefield. Just as on a battlefield, where you have to plan the attack and use your arms/ammunition wisely, it is imperative to keep your priorities in mind with respect to the resources used for cross browser compatible web development while testing the product for cross browser compatibility. During the initial phase of testing, you should spend a minimal amount of time testing the product. The next level of attack is the ‘raid’, where the attack becomes fiercer; this is also the next level in your testing activity. Testing becomes more intense in this phase, and the test team has to spend much more time unearthing bugs (which are relatively hard to find). After these two levels of testing, the test team takes the final march to defeat the enemy i.e. the bugs in the product. These bugs are mostly the result of corner cases not being handled by developers, or of tests being performed on browsers used by a much smaller user base.

This grueling cross browser testing process ensures that the product is relatively free of bugs and more robust in nature. Now that you have tested your product across the maximum number of browsers, the next & most important step is to ‘narrow the context’ of your tests and verify the functionalities of your product on the browsers preferred by your target audience/market. This is a crucial step in the ‘sanity testing’ of your product, after which you can confidently claim that your product works fine for X% of the customer base.

Analytics and Insights into the customer’s preferences

In the section titled ‘Identification of Browser, OS, and Device combinations’, we looked into the importance of understanding customer insights so that the development & test teams can spend their effort on solving issues actually faced by your customers. There is no sense in solving a product issue for a browser that is never/rarely used by your target audience.

Google Analytics, Kissmetrics, Mixpanel, etc. are some of the popular web analytics tools that can be used to get the nitty-gritty about your customers. Your team can gather valuable insights about them e.g. the browser used for visiting your website, the operating system being used, the location of the customer, etc. Though you will get a lot of information from analytics, you should make use of the data that matters most to your team. Your testing team can also take the support of web developers, who can help prioritize the information; in many cases, the operating system on which the browser runs may not matter much, so it is recommended that their knowledge is used to focus on the OS + browser + device combinations that are important for your product.
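As a rough illustration of how such analytics data can be distilled, here is a minimal Python sketch that ranks OS + browser + device combinations by visit count. The field names and visit rows are hypothetical placeholders, not the export format of any particular analytics tool:

```python
from collections import Counter

def prioritize_combinations(visits, top_n=3):
    """Rank (browser, os, device) combinations by visit count."""
    counts = Counter((v["browser"], v["os"], v["device"]) for v in visits)
    return [combo for combo, _ in counts.most_common(top_n)]

# Hypothetical rows, as might be exported from an analytics tool.
visits = [
    {"browser": "Chrome", "os": "Windows", "device": "desktop"},
    {"browser": "Chrome", "os": "Windows", "device": "desktop"},
    {"browser": "Samsung Internet", "os": "Android", "device": "mobile"},
    {"browser": "Safari", "os": "iOS", "device": "mobile"},
    {"browser": "Chrome", "os": "Android", "device": "mobile"},
]

print(prioritize_combinations(visits, top_n=2))
```

The top entries of such a ranking become the configurations your team tests on every release, while the long tail gets spot-checked.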


Test on Browsers that ‘matter the most’

Though popular browsers (Google Chrome, Mozilla Firefox, Safari, Opera, Yandex, Edge, and IE10+) are available for different operating systems and devices, it is rare that you come across ‘browser issues’ that are largely dependent on the operating system. Unless the browser code/design goes through a major overhaul, there are only minor differences between consecutive browser versions e.g. Firefox 45, Firefox 46, etc. While gathering browser statistics to decide on your cross browser testing needs, it makes sense to combine all the desktop versions of a browser into a single entry, except for Internet Explorer (IE), since old versions of IE lack support for the latest web technologies e.g. HTML5.
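The version-merging idea above can be sketched in a few lines of Python; the share figures below are made-up placeholders, not real market data:

```python
def combined_share(stats):
    """Merge per-version shares into per-family totals,
    keeping Internet Explorer broken out by version."""
    totals = {}
    for name, share in stats.items():
        family = name.rsplit(" ", 1)[0]  # "Firefox 46" -> "Firefox"
        key = name if family == "IE" else family
        totals[key] = round(totals.get(key, 0) + share, 2)
    return totals

# Illustrative shares (percent); IE versions stay separate
# because their feature support differs drastically.
stats = {"Firefox 45": 2.1, "Firefox 46": 3.4, "Chrome 89": 40.0,
         "Chrome 90": 22.5, "IE 9": 0.8, "IE 11": 2.2}
print(combined_share(stats))
```

Grouping this way keeps the browser matrix short without hiding the IE versions that genuinely behave differently.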

The argument discussed above also holds for mobiles & tablets. Handheld devices have been on the rise, and mobile is one medium that can no longer be ignored; in fact, many organizations are working on web products with a mobile-first agenda. Along with the popular browsers, many devices ship with pre-installed ‘native’ browsers e.g. Samsung Internet on Samsung devices, Mi Browser on Mi devices, etc. In order to gain market share, device manufacturers make their in-house browsers the default, and many times these browsers become more important from a cross browser testing perspective than other popular browsers, as you are targeting the customer base of the entire mobile company itself.

Hence, based on the market being targeted and the priority of the medium (mobile, tablet, desktop, etc.), you should conduct organized, prioritized testing on the browsers that are popular in your target market. StatCounter GlobalStats can give you detailed information about browser market share from a ‘region’ & ‘device’ perspective. For example, below is a snapshot of the mobile browser market share in Asia.

As seen in the figure above, the market share of the Samsung browser is higher than that of other browsers on the Android platform; hence it becomes critical that the web developers & test team spend the right amount of development & testing effort on the browsers that matter the most (for the product).

Along with the details about browser usage statistics, you should also have a detailed view of the ‘test priorities of the browsers’. This helps in prioritizing the cross browser testing & development effort. For example, if only a small percentage of your website visitors use Opera, testing on Opera should be a lower priority than testing on the browsers used by the majority of your web app/website users. However, it is still worth including those low-priority browsers in your testing checklist, as even a small slice of visitors may include high-value leads.
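One simple way to turn usage share into test priority is a tiered mapping. The thresholds and share numbers below are illustrative assumptions, not a standard; tune them to your own traffic:

```python
def assign_priority(share_percent):
    """Map a browser's usage share to a test priority tier."""
    if share_percent >= 10:
        return "P1"   # test on every release
    if share_percent >= 2:
        return "P2"   # test on major releases
    return "P3"       # spot-check; stays on the checklist, never dropped

# Hypothetical share figures for the target market.
shares = {"Chrome": 62.5, "Safari": 19.0, "Opera": 1.4}
priorities = {browser: assign_priority(s) for browser, s in shares.items()}
print(priorities)
```

Note that the lowest tier is still tested occasionally, which matches the advice above about not dropping low-share browsers entirely.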

Now that we have touched upon some of the basic elements to make your cross-browser testing efficient, let’s look at the points in more detail.

1. Focus on bugs that are ‘not browser dependent’

Before the source code is pushed to the development/QA/production environment, you should perform ‘regression testing’ on the ‘browser of your choice’ or the browser that you use most frequently. The primary advantage of this approach is that testing is performed on the latest browser (since ‘auto-update’ is enabled by default in all major browsers) and the focus stays on unearthing & fixing bugs that are browser agnostic. There might be cases where your website/web app does not render correctly on certain browsers/devices, yet the issue is not related to the viewport or screen size of the device; it could be a browser-agnostic bug, and spending too much time ‘testing the functionality on a certain device/browser category’ could hamper the speed of product development.

Some pointers to figure out browser-agnostic bugs are:

  • Check whether the website renders properly after disabling CSS and JavaScript from the code.

  • Use browser developer debug options to view the website/web-app for different viewports.

  • Try testing ‘interactive use cases’ on the website/web-app.

  • Check whether the web app/website works fine in scenarios where internet speed is throttled. You could throttle the connection via your browser’s developer tools or your router’s admin settings, and online tools (like WebPageTest) can also be used to test this scenario.
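For the throttling check, a back-of-the-envelope estimate helps decide which profiles are worth testing manually. The preset values below approximate common DevTools-style throttling profiles but are illustrative assumptions, as is the simple serial-request model:

```python
# Approximate throttling presets (illustrative, not exact DevTools values).
PROFILES = {
    "Fast 3G": {"kbps": 1600, "latency_ms": 150},
    "Slow 3G": {"kbps": 400, "latency_ms": 400},
}

def estimated_load_seconds(page_kb, requests, profile):
    """Back-of-the-envelope load time: transfer time plus
    per-request round-trip latency (ignores parallelism)."""
    p = PROFILES[profile]
    transfer = (page_kb * 8) / p["kbps"]           # seconds to move the bytes
    latency = requests * p["latency_ms"] / 1000    # seconds lost to round trips
    return round(transfer + latency, 1)

# A 2 MB page with 40 requests on each profile:
print(estimated_load_seconds(2048, 40, "Fast 3G"))
print(estimated_load_seconds(2048, 40, "Slow 3G"))
```

If the slow-network estimate is already unacceptable on paper, the page needs a weight/requests diet before any amount of cross browser testing will save it.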

2. Achieve maximum testing throughput by focusing on the ‘RIGHT Browsers’

The approach mentioned in step (1) is useful for unearthing bugs that are browser agnostic, but that testing is still performed on ‘one of the popular browsers’ installed on the developer’s machine. There is a possibility of a regression defect i.e. a fix made for a particular browser might break functionality on some other browser e.g. fixing an issue that occurred on Chrome might have a side effect on Internet Explorer or another browser. Another possibility is that a fix on one browser fixes the issue on all the other browsers.

When the test team reports a bug, they would also mention the ‘browser on which the issue occurred’, along with the scenario. Based on the bug pattern, the development team can come up with a table that lists the browsers in a ‘Risk v/s Returns’ matrix, where ‘Risk’ indicates the impact a bug-fix for one browser may have on cross browser compatibility with the other browsers on which testing is performed. Browsers can be categorized into levels (risk-1, risk-2, risk-3) where risk severity rises with each level, and ‘Returns’ indicates the benefit of a bug-fix (across different browsers).
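A minimal sketch of such a ‘Risk v/s Returns’ ranking follows; the bug entries are hypothetical, and the returns/risk ratio is a deliberately simple scoring choice, not a prescribed formula:

```python
def prioritize_fixes(bugs):
    """Order bug-fixes by a simple risk-vs-returns score:
    high returns (share of users affected) and low risk
    (chance of regressing other browsers) float to the top."""
    return sorted(bugs, key=lambda b: b["returns"] / b["risk"], reverse=True)

# Hypothetical entries: risk is the tier (1-3, rising severity),
# returns is the % of the user base affected by the bug.
bugs = [
    {"id": "BUG-1", "browser": "Chrome", "risk": 2, "returns": 60},
    {"id": "BUG-2", "browser": "IE 11", "risk": 3, "returns": 3},
    {"id": "BUG-3", "browser": "Safari", "risk": 1, "returns": 19},
]
print([b["id"] for b in prioritize_fixes(bugs)])
```

Even this crude score pushes a low-impact, high-regression-risk IE-only fix to the bottom of the queue, which is usually the intuition such a matrix is meant to capture.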
