
Building a Software Survey using Blazor - Part 9 - End to End tests

Mark Taylor · Originally published at red-folder.com · 5 min read

I've decided to write a small survey site using Blazor. Part of this is an excuse to learn Blazor.

As I learn with Blazor I will blog about it as a series of articles.

This series of articles is not intended to be a training course for Blazor - rather my thought process as I go through learning to use the product.

Earlier articles in this series:


While the last article covered unit testing with bUnit, in this article I want to talk about end to end testing.

I find a lot of value in automating at least the happy path test for user journeys.

Firstly, it saves me running through the journey manually every time I make a code change, which is great for spotting regressions. Secondly, where possible, I like to reuse the same test for synthetic testing of the production system: running it through the production journey at a regular interval as a way of highlighting any problems early.

Thus, for my Software Survey, I wanted to implement a simple happy path test that exercised every page of the survey and then validated that the survey results had been correctly persisted to Azure Cosmos DB.
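The overall shape of such a test looks something like the sketch below. The element IDs, URL and the `VerifySurveyPersisted` helper are illustrative assumptions, not the real site's markup or the project's actual test code:

```csharp
// Sketch of a happy-path end-to-end test in xUnit with Selenium WebDriver.
// Requires the Selenium.WebDriver NuGet package; IDs and URL are hypothetical.
using System.Threading.Tasks;
using OpenQA.Selenium;
using Xunit;

public class HappyPathTests
{
    private readonly IWebDriver _driver;

    // In a real suite the driver would be supplied by a shared xUnit fixture.
    public HappyPathTests(IWebDriver driver) => _driver = driver;

    [Fact]
    public async Task CompleteSurvey_PersistsResponse()
    {
        _driver.Navigate().GoToUrl("https://localhost:5001");

        // Walk every page of the survey, answering as we go.
        _driver.FindElement(By.Id("start-button")).Click();
        _driver.FindElement(By.Id("question-1-option-2")).Click();
        _driver.FindElement(By.Id("next-button")).Click();
        // ... remaining pages ...

        // Finally, confirm the response landed in Cosmos DB.
        Assert.True(await VerifySurveyPersisted());
    }

    // Placeholder: in practice this would query Cosmos DB for the new document.
    private Task<bool> VerifySurveyPersisted() => Task.FromResult(true);
}
```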

To SpecFlow or not to SpecFlow

Normally I'd use SpecFlow to express the test in its Gherkin-style language.

In this instance, as it is only me working on the project, I've just used an xUnit test. I may add SpecFlow over the top at some point, but all it would add is readability. That has a lot of value when taking a client through the test, but not so much when it's just for myself.
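For a sense of what that readability gain would look like, a SpecFlow feature file for this journey might read something like the following (a hypothetical sketch, not a file from the project):

```gherkin
Feature: Software Survey happy path

  Scenario: Respondent completes the survey
    Given I am on the survey landing page
    When I answer every question and submit the survey
    Then my responses are persisted to Cosmos DB
```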

Nothing special

If I'm honest, there is nothing particularly special in creating the end to end test.

The test is written in xUnit and uses Selenium WebDriver to interact with the website.

Which is quite a positive.

Blazor Server produces an HTML, CSS & JavaScript website, so there appears to be no reason Selenium WebDriver can't be used to navigate it. I'd also suspect Cypress would work, but I don't have as much experience with it, so I stuck with the known combination of xUnit and Selenium WebDriver.
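For completeness, spinning the driver up is standard Selenium boilerplate; something along these lines (the headless flag matters when running under Continuous Deployment, where the build agent has no display):

```csharp
// Sketch of creating a headless Chrome driver for CI runs.
// Requires the Selenium.WebDriver and Selenium.WebDriver.ChromeDriver packages.
using OpenQA.Selenium.Chrome;

var options = new ChromeOptions();
options.AddArgument("--headless");                // no visible browser window
options.AddArgument("--window-size=1920,1080");   // consistent viewport for element lookups

using var driver = new ChromeDriver(options);
driver.Navigate().GoToUrl("https://localhost:5001"); // URL is an assumption
```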

So, the test should be easy, right?

Here be dragons ...

I have to admit that the end to end tests possibly caused me more problems than anything else with Blazor.

The initial version of the test seemed to work fine.

I could run the test locally - it worked.

I could run the test as part of Continuous Deployment - it worked.

I could set it up to run on an hourly schedule against production for synthetic testing - it sometimes worked.

And this "sometimes" became a huge frustration to me. It could work for hours, if not days, and then the test would start reporting failures. I would check manually, and the site would look to be working fine.

If you look at the actual test class you'll see I've added various helper methods when trying to access page elements.

Mainly the helpers are set to retry if the application is not in the expected state:

        private async Task RetryActivity(By by, Func<IWebElement, bool> activity, string activityDescription)
        {
            _testOutputHelper.WriteLine($"Attempting '{activityDescription}'");

            // Allow time for Blazor to do its thing (fail after 60 attempts - 30 seconds)
            for (int i = 0; i < 60; i++)
            {
                await Task.Delay(500);

                try
                {
                    var element = _driver.FindElement(by);

                    if (activity(element)) return;

                    _testOutputHelper.WriteLine("Activity failed to return true");
                }
                catch (NoSuchElementException)
                {
                    _testOutputHelper.WriteLine("NoSuchElementException received");
                    continue;
                }
                catch (StaleElementReferenceException)
                {
                    _testOutputHelper.WriteLine("StaleElementReferenceException received");
                    continue;
                }
                catch (Exception ex)
                {
                    _testOutputHelper.WriteLine($"Exception {ex.GetType().Name} encountered - {ex.Message}");
                    throw; // rethrow without resetting the stack trace
                }
            }

            _testOutputHelper.WriteLine("Maximum retries reached");
            throw new Exception($"Maximum retries reached while attempting '{activityDescription}'");
        }

        private async Task WaitForElement(By by)
        {
            // Allow time for Blazor to do its thing (fail after 60 attempts - 30 seconds)
            for (int i = 0; i < 60; i++)
            {
                await Task.Delay(500);

                try
                {
                    _driver.FindElement(by);
                }
                catch (NoSuchElementException)
                {
                    continue;
                }
                catch (StaleElementReferenceException)
                {
                    continue;
                }
                return;
            }

            // Fail loudly rather than returning silently after the timeout
            throw new Exception($"Element '{by}' not found after maximum retries");
        }

As you can see ... lots of waits ... lots of retries.

And this was because Selenium was struggling to find elements reliably.

But, as discussed in part 4, I was looking in the wrong place.

Render Mode

As I talk about in part 4, the ServerPrerendered mode will initially serve a static copy of the page and then hydrate it once Blazor (including the SignalR connection) is fully available.

And this was at the heart of my problem.

For some reason the hourly tests seemed to be more susceptible to it than any other method of running the test - possibly because the survey site was colder to start.

Once I added the PreRenderLoadingMessage to the survey site, the end to end tests consistently passed.
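With a visible loading message in place, the test can wait for it to disappear before interacting with anything. A minimal sketch in the same style as the helpers above, assuming the loading message is rendered with an id of `loading-message` (the real marker in the site may differ):

```csharp
// Sketch: block until the pre-rendered loading message has been replaced
// by the hydrated, interactive page. The "loading-message" id is an assumption.
private async Task WaitForHydration()
{
    for (int i = 0; i < 60; i++)
    {
        await Task.Delay(500);

        try
        {
            _driver.FindElement(By.Id("loading-message"));
            // Still pre-rendered; keep waiting.
        }
        catch (NoSuchElementException)
        {
            return; // loading message gone - Blazor has hydrated
        }
    }

    throw new Exception("Page never finished hydrating");
}
```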

And getting to that gem cost me a considerable amount of time - and sanity.

Tidy up

I should be able to remove a fair amount of my retry logic from the end to end test now that I've found the true source of the problem.

To be honest though, I don't see any benefit in doing that. It is all working, so I'm inclined to leave it as it is.

And that's it for the end to end tests.

For something that was actually so easy to set up and operate, the overhead of understanding the ServerPrerendered mode really caught me out.

But I know it for next time. This is why we try new things.

Thank you for reading. See you in the next article where I introduce Bicep – a DSL for configuring Azure.
