
Tips to Improve Selenium on Sauce Labs Results

Every developer is familiar with the if(browsertype) statement. Different browsers render applications differently, so web applications need to detect which browser they are running on and adjust their code accordingly. Successfully testing all browsers and all versions is no small feat, which is exactly why Sauce Labs built their solution on Selenium: it enables QA teams to execute Selenium-based automation suites across multiple operating systems, OS versions, browsers, and browser versions.

At first glance, this seems like the perfect solution for complete application matrix coverage. Unfortunately, nothing is that simple, and upon digging deeper, it becomes apparent that not all environments are available for certification. You will have some critical use case gaps; there's no way around it. So what are they, and how do you get around them?

Mobile: Emulators vs. Real Devices

Appium, which is essentially Selenium for mobile, is the most common library for running Selenium on mobile, including mobile browsers. However, when running Appium mobile browser tests on Sauce Labs, the tests do not run on actual devices, only on device emulators. This means not only that the device might not behave the same way when integrated with the browser (think integration with other device applications, or battery consumption), but also that the browser itself might behave differently inside the emulator than it would on a real mobile device. For example, consider a web app with a contact form that includes a phone number, where clicking the number is supposed to open a dialing app. Emulator testing is problematic here because redirecting from the browser to a different app is not supported: the test runs, nothing happens, and we receive a false negative. Moreover, when we publish the app's support matrix, we won't be able to say for sure which devices are supported.

Now that we understand where the issue is, what's the workaround? There is no external tool or code fix that can be applied to solve this, just good old-fashioned planning. The key is to differentiate tests covering UI responsiveness from those covering functionality and user experience. The first set has no problem running on Sauce Labs; the other two must be identified in advance and allocated real mobile devices for testing. There are several solutions for real device testing, such as AWS Device Farm, and many on-premise solutions are available as well.
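
One way to keep these sets separate is to tag the tests themselves. Below is a minimal sketch using JUnit 4 categories (the marker interface names and the test class are purely illustrative), so the build can send each group to the right environment:

import org.junit.Test;
import org.junit.experimental.categories.Category;

// Hypothetical marker interfaces used as JUnit 4 categories.
interface UiResponsivenessTest {}  // safe to run on Sauce Labs emulators
interface RealDeviceTest {}        // needs a physical device

// Anything that touches device integration gets tagged for real devices.
@Category(RealDeviceTest.class)
public class DialerRedirectTest {

    @Test
    public void clickingPhoneNumberOpensDialer() {
        // Tapping the tel: link should hand off to the dialer app; on an
        // emulator nothing happens, producing the false negative above.
    }
}

Maven Surefire can then include or exclude a category per execution via its groups configuration, so the Sauce Labs run and the real-device run share a single code base.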

Selenium WebDriver Consistency

WebDriver is a constant challenge for Selenium developers. To run Selenium tests on a browser, a dedicated driver for each and every browser you need to test must be present on the machine where the tests are executed. The problem is that browsers release new versions almost monthly, so the driver must also be updated to the newest version, or sometimes replaced by a completely new driver, like GeckoDriver (Marionette) for Firefox 47 and above. Problems begin to arise when you want to run the same test on multiple browser versions.
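
As a minimal illustration (the binary paths are assumptions about your machine, not fixed values), Selenium locates each driver through a system property before the browser can be started:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class DriverSetup {
    public static WebDriver createDriver(String browser) {
        if ("chrome".equals(browser)) {
            // Must point at a chromedriver build matching the installed Chrome.
            System.setProperty("webdriver.chrome.driver", "/opt/drivers/chromedriver");
            return new ChromeDriver();
        }
        if ("firefox".equals(browser)) {
            // Firefox 47+ expects GeckoDriver (Marionette), not the legacy driver.
            System.setProperty("webdriver.gecko.driver", "/opt/drivers/geckodriver");
            return new FirefoxDriver();
        }
        throw new IllegalArgumentException("Unsupported browser: " + browser);
    }
}

Every new browser release can invalidate those hard-coded paths, which is exactly the consistency problem described above.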

To combat this, Sauce Labs supplies a considerable matrix that covers most of the common browser types out there. But if your tests were designed on one browser version with a specific WebDriver version (most often the driver is added to the project as a reference or a dependency), these tests might not run on many of the machines in the matrix Sauce Labs supplies. For example, the below pom file, with its Firefox WebDriver dependency, will run on Firefox versions up to 47, but as we explained, 47 and above require the new Marionette driver.
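
Such a dependency looks roughly like this (the version number is illustrative; the 2.53.x line was the last to rely on the legacy FirefoxDriver rather than Marionette):

<!-- Pins the Firefox driver to a pre-Marionette Selenium release. -->
<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-firefox-driver</artifactId>
    <version>2.53.1</version>
</dependency>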

Selenium Browser Profiles

One way to solve this problem is to enhance the pom file so the WebDriver reference isn't bound to a specific version but is initialized from an external source. This is a bit more complicated than the usual version tag but helps with the multiple-versions issue. Another way of approaching the problem is using Maven profiles. Profiles give you the ability to have the same dependency types with different values for every execution. The question that automatically pops up is, "How do I know which profile to use on which machine?" Here is the fun part: profiles can be triggered according to the OS they run on, and even based on environment variables defined by the OS. You can tell Maven to decide on its own which profile to use, as Maven can detect which OS is running it.

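Here is a sketch of what those OS-activated profiles can look like (profile ids and paths are illustrative):

<!-- Each profile activates on the OS Maven detects and points the build
     at the driver binary packaged for that platform. -->
<profiles>
    <profile>
        <id>windows-driver</id>
        <activation>
            <os><family>windows</family></os>
        </activation>
        <properties>
            <webdriver.path>C:\drivers\geckodriver.exe</webdriver.path>
        </properties>
    </profile>
    <profile>
        <id>unix-driver</id>
        <activation>
            <os><family>unix</family></os>
        </activation>
        <properties>
            <webdriver.path>/opt/drivers/geckodriver</webdriver.path>
        </properties>
    </profile>
</profiles>

Maven evaluates the os activation at build time, so the same pom picks the right driver location on every machine in the matrix.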

You can also use a specific environment variable that carries the relevant information, like Path. If the relevant WebDriver is added to the Path system variable, Maven can retrieve the correct WebDriver and run the test with it. Use this variable in the pom to get the WebDriver's location:

${env.variable_name}
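
For instance, the Surefire plugin can hand that value to the tests as a system property (GECKO_DRIVER_PATH is a hypothetical variable name, standing in for whatever your machines actually define):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <systemPropertyVariables>
            <!-- Selenium reads this property when creating a FirefoxDriver. -->
            <webdriver.gecko.driver>${env.GECKO_DRIVER_PATH}</webdriver.gecko.driver>
        </systemPropertyVariables>
    </configuration>
</plugin>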

What About Performance Testing?

Most teams build their performance testing suite on a few simple scenarios, based on the testers' familiarity with the app, and so will probably never test, nor experience, the performance glitches that real application users encounter during routine use. A relatively new trend in the QA world is to use functional tests as performance testing scenarios and kill two birds with one stone: test the functionality of the feature in the real customer environment and under stress, and validate that the app performs and behaves correctly in real-life use cases.

One of the tools performance testing relies on is the headless browser. Unlike most common browsers, a headless browser is a web browser without a graphical user interface. Headless browsers usually run faster and consume significantly fewer machine resources, such as CPU and memory. This allows multiple instances to run on the same machine, saving time and the cost of complicated performance environments. The problem with this solution is that Sauce Labs doesn't have headless browsers in its matrix. The result is that when you are planning your app's coverage suite and how it will be executed, you need to separate functional and performance tests. This sucks, because it can result in tracking issues and duplicated maintenance effort, which is exactly what Sauce Labs should be helping us with.

Now please note that these are partial solutions, or solutions with downsides, but here is how you can execute the tests as part of your performance suite:

  1. The first, though less recommended, option is to have the performance tests run on Chrome, Firefox, Internet Explorer, and Safari. This will significantly increase resource usage, but at the same time should provide a user experience that is closer to the real thing.
  2. The other option is to use Chrome and Firefox in headless mode (sketched after this list). This mode provides a user experience somewhere in the middle between real browsers (those that display a GUI) and dedicated headless browsers.
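
Enabling headless mode is a one-flag change per browser. A minimal sketch, assuming a Selenium 3.x client and browser versions where headless mode is available (roughly Chrome 59+ and Firefox 55+):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;

public class HeadlessDrivers {

    // Chrome with full Blink rendering but no visible window.
    public static WebDriver headlessChrome() {
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless");
        return new ChromeDriver(options);
    }

    // Firefox driven through GeckoDriver, also windowless.
    public static WebDriver headlessFirefox() {
        FirefoxOptions options = new FirefoxOptions();
        options.addArguments("-headless");
        return new FirefoxDriver(options);
    }
}

Because neither variant paints to a screen, you can pack several instances onto one machine for load generation while keeping the rendering engine of the real browser.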

Consolidated Reporting

You're running tests in multiple environments with multiple permutations. Wouldn't it be great to receive one consolidated report from Sauce Labs with all the relevant data? Think again: you now need to aggregate ALL the information about all the different runs and try to figure out which tests failed in which environment and on which browsers. Sauce Labs does not give a detailed report about the suite's pass/fail status and coverage, and certainly not the type of report managers require to understand application quality. For this, you'll need an external tool that integrates with the automation code and knows how to gather all this information and display it in a way that is clear and convenient to understand and use.

A big part of what SeaLights does is consolidate build and test metrics across environments and tools to generate build quality dashboards, so this was an easy fix for us; you can see below how we monitor our Selenium tests.

Conclusion

Let’s start by stating that we use Selenium and Sauce Labs, so obviously, for us, this was all about optimization. We addressed these issues in the following order:

  1. Consolidated Reporting: as SeaLights is a Continuous Testing quality management platform, this made the most sense for us to solve first. It has a direct relation to our product and the greatest impact on how we make business decisions from a quality perspective. It allowed us to understand where we needed to invest the most effort to improve our customers' user experience.
  2. WebDriver Consistency: this was the next step for us, simply because applying the Maven fix is relatively simple and fast.
  3. Mobile: the reason we didn't prioritize this as highly is that our product is currently web-based and therefore insensitive to specific device and OS versions.
  4. Performance Testing: for us, performance testing has low priority, as our service is not performance dependent. For a taxi-hailing app, however, this would be an entirely different story, and performance testing would have taken priority.

To summarize, when using Selenium to test your application, you need to address factors beyond the framework and the tests themselves, especially when planning how to execute the tests in an environment that is out of your control. There are solutions for almost every problem, but only when you have all the information and are aware of each solution's shortcomings can you prioritize what to fix and where to invest your efforts.

Note: Re-published on request. This article was first published on Sealights!
