My most recent web team did write front-end (FE) tests: unit tests for JS logic, and user-acceptance, story-style tests using Jest and Puppeteer. We even ended up making a Docker image to run the tests in a headless manner.
Unit tests were reasonably easy to write in Jest. I am historically a backend dev, but I was able to write some Jest unit tests with little frustration.
The process of taking user stories from product/project owners and converting them to Jest logic was fairly straightforward for the type of stories we had. Leveraging the Docker image, we were able to execute the tests via Jenkins as part of our continuous deployment process.
Docker + Jest + Puppeteer (Headless Chrome) : hub.docker.com/r/davidjeddy/docker...
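For readers newer to Jest, a minimal unit test looks something like this. The `formatPrice` helper is invented for illustration, and the `test`/`expect` stand-ins are included only so the file also runs under plain `node`; in a real project Jest provides those as globals.

```javascript
// Stand-ins for Jest's globals so this file also runs under plain `node`;
// in a real project Jest injects `test` and `expect` for you.
const test = (name, fn) => { fn(); console.log(`ok - ${name}`); };
const expect = (actual) => ({
  toBe(expected) {
    if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
  },
});

// Hypothetical helper under test: format integer cents as "$12.34".
function formatPrice(cents) {
  return `$${(cents / 100).toFixed(2)}`;
}

test('formats whole dollars', () => expect(formatPrice(500)).toBe('$5.00'));
test('formats cents', () => expect(formatPrice(1234)).toBe('$12.34'));
```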
Anything that can go wrong will go wrong. - Murphy's law
Everyone in my team writes UI tests, partly because we built UI-licious, a tool to automate UI testing for web apps.
The problem is that, with the way testing tools work today, tests are wired to the actual design and implementation of the UI, and quickly become obsolete.
This means that you need to create multiple tests to cover desktop and mobile designs for the same test scenario. And whenever the design or code changes, the tests need to be updated. It's a colossal waste of time to be going back and forth fixing broken tests.
Plus, the use of CSS selectors, XPaths, and magic waits makes the tests horribly unreadable and difficult to maintain.
Tests should be independent from implementation because it is supposed to test if the implementation follows the specs. It makes no sense at all for tests to be wired to implementation. And why does it even matter what front-end framework - Angular/React/VueJS you use?
What, then, do specs mean for the front-end? The user's journey. That's how your app delivers value to the consumers.
That's why I ended up creating UI-licious, because:
The UI-licious test engine does the heavy lifting of deciding how to correctly interact with any UI given simple commands, even in ambiguous scenarios where multiple elements are similar. Under the hood, there's some dynamic code analysis and good old-school AI involved to do this.
In any case, front-end code is still code, it changes, it has smells, it has bugs. Is there a good reason not to test it besides the lack of good tools to do so?
If the "CORE THING" of an application is well tested, and by that I mean the most important part, the Business Layer (and its inner Data Layer), then focusing on testing the upper layers (UI, front-end, UI components...) is certainly good. But it only becomes relevant once the inner layers (Business and Data) are thoroughly tested FIRST.
And this applies to both front-end and back-end. If you are following a proper architecture, for instance a client-server setup for an SPA consuming an API, where you have created an abstraction of your Business Layer on both the client and the server sides, following a correct Separation of Concerns, and kept it isolated, then that should be the initial target to test.
[Front-end] -> [Logic (business)] -> [API clients] ··· (XHR / HTTP) ···> [Back-end] -> [Logic (business)] -> [Data]
And once those business-logic components are fully tested, then bring your effort to pure UI / front-end / component testing.
This principle also follows the thinking of the Test Pyramid, in which what matters is not only deciding which parts of a piece of software should be tested first, but also the effort and COST of testing them. Testing the inner, lower layers (Business, Data) certainly requires a lot of coding, but in general it is much faster and safer to get those layers covered first, and the upper layers later.
Here's also a good article on this: The Practical Test Pyramid
Exactly how depends on the application. Web apps are tested using Selenium right now, but our testing team is evaluating some other, more comprehensive tools. One of our criteria for a new testing tool is that it performs well with both web and desktop apps.
Our desktop apps don't currently have automated tests, but we have a documented test plan for the testing team to follow. This gets to be a bit tricky, since to do complete tests we have to do them on a manufacturing line that has all the devices operational (barcode readers, lasers, PLCs, etc.).
I've actually heard more people be opinionated that the front-end code is more important to test than back-end code. The idea is that if the back breaks, the front breaks. If the front breaks, the back could still be passing. Front tests breaking will tell you if the front or back is broken, which at least gives a better picture of the application.
My workplace shoots for 70% unit test coverage (yes, coverage is a BS metric... it's a large company) of all code. Front, back, whatever. And every feature gets automated end-to-end tests as well.
My E2E stack is Protractor and Jasmine, since all the front ends at work are Angular. I haven't had any issues with the tooling other than some quirks around how every extra package seems to make it more likely that Protractor is going to have timing or intermittent issues. Vanilla Angular plays fine with it, though, and Jasmine is 👌
And by quirks, I mean running supertest tests from the Protractor run, sending MySQL queries during beforeAlls, and other things Protractor totally wasn't meant to do. But also the Angular app having 3rd party libraries which work manually but make Protractor give up.
Yes, and I'm amazed that this is even something anyone would think to ask in 2018 (No offense Ben).
Some people say that front-end unit testing won't catch bugs... and it won't. Not for new development anyway, just as back-end tests won't catch the edge cases you don't know about either. The beauty of a robust and comprehensive testing platform isn't that it catches the bug in the story you're working on TODAY (although it WILL, and you won't think about it because you expect those tests to fail until you're done building). It will catch the bug in the story 6 months from now, when a completely different developer has no idea what you did and changes something important to this functionality.
This is why there are so many people who argue AGAINST unit testing. They don't understand WHY you do it. It's just one part of the trifecta of tested, modular and separated concerns. It keeps your app stable. You do all three, and you might never see a broken unit test... not because they're useless, but because you built the thing well (and yes you can have modular code with poor separation of concerns... It will make you want to cry).
Our unit test coverage is actually (much) higher in the front-end than back-end right now, and we have both Protractor and Selenium integration and E2E tests. For almost a decade front-end applications have had just as much application logic that can be easily unit tested as any back-end service. We basically just treat it like any other microservice, but with quirks. A team thinking that they don't need to test the front-end tells me that either they have a very simple front-end (so how complex could the back-end be?) or they are stuck in 2005 with a 'just the front-end' bias.
Neither is a great sign for a modern web application; it leaves a TON of logic untested and is going to fail to draw top talent your way. All of that from a simple interview question: "Do you do front-end testing?"
Not at present.
I can think of some things that it would be good to test. For example, user-entered data producing the appropriate results.
But when I think of bugs we fixed so far (Elm app), writing tests would not have caught most of them up front. They were fundamentally because two pages or controls inappropriately shared state -- either squashing the other's changes or leaving a mess the other wasn't expecting. But initial tests would have been for each thing in isolation and so would not have caught it. Testing all permutations is not really feasible due to finite resources such as money and human lifespans.
Our general policy when these bugs crop up is to separate concerns and, where possible, make the illegal state unrepresentable. This can entail some restructuring of the UI model. It takes perhaps longer than writing a test for the failure and patching in defensive code. But it prevents regressions as well as improving the flow and organization of the code. It is not too terrible to do in Elm as the compiler is great at telling you when you structurally break things.
We're adding front-end testing slowly and tentatively. We've added facilities for front-end and back-end testing using Jest, and we're starting to play with Cypress for end-to-end testing. The advantage of Cypress is that we can use the same assertion frameworks for all our testing. The downside is a lack of support for testing various browsers/versions. (Selenium has the advantage there.)
Colleague of mine at Snipcart met a guy doing a presentation on Cypress in a WinnipegJS meetup. Very cool framework (I agree with the mentioned downside though). We ended up collaborating with the guy in question on a post tackling frontend testing w/ Cypress (e2e more specifically). In case it's any help/interest:
Modern Frontend Testing with Cypress.io Framework
With our latest front end app, we've been writing jest tests. There's a learning curve for sure; we're still not very good at doing anything more complicated than pretending to click on a button.
I've been enjoying the snapshots, though. Nice regression warning there
I prefer to write unit tests as well as E2E (integration or acceptance) tests for a front-end. Usually there are a lot of flows which can confuse the user. Unit tests grant stability, so you can be sure that a component will work if the input data is valid. And E2E allows you to check the whole user flow.
The most complex part is testing the interface (CSS) and checking whether it corresponds to the design.
In web-land it definitely seems more common to test back-end code than front-end.
Everything should be automated, whether frontend or backend. If it will save you time, automate it.
It's great to see faults in your code be highlighted while you are typing, instead of having to run all kinds of test processes first.
On mobile but I can write more on this later if folks are interested. TLDR:
We have an in-house package I created that basically puts data tags on React components, and the QA team writes tests that check for & interact with the page by selecting elements with those data tags.
Absolutely yes. With typescript and angular, it’s pretty easy to write tests, mock things etc. writing E2E tests is a bit harder but it’s definitely worth it.
As I stated in my article about testing: I always wonder how many people do not write tests. Front-end is code as well, so why on earth would you not test it? If your code is hard to test, then the code is probably badly designed. That's a smell, and it should be fixed.
We use Ember and write a lot of tests for that to make sure that front-end behavior is working how it's supposed to. Right now we use Ember QUnit which works pretty well for most cases in Ember, especially since they recently unified their testing api to be easier to use and understand.
With great difficulty! There are unit tests for the client FE code - which are relatively painless and standard, but these change a lot depending on the frameworks adopted. The last project I was working on was an Angular based web app, the next is likely to be a React based web app - each comes with its own testing framework.
But we do try to use Protractor to do some browser testing for our front end, as well as running end-to-end testing; however, browser versions were our undoing! We spent a lot of time updating the end-to-end tests/component tests to work with the latest browser versions. Sometimes it was as simple as upgrading Protractor; others involved some refactoring. But it does add overhead!
I was just discussing with a colleague this morning the plethora of front end testing frameworks there are: Jasmine, Jest, Mocha, Chai the list goes on! (And sometimes you might use a combination of them!)
Something that has worked well for us is that we have used Cucumber/Gherkin for our component and End To End testing. This has meant that there was a common language of testing regardless of the language/frameworks adopted for the actual components. Something I would definitely recommend to others to consider.
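For context, a shared Cucumber/Gherkin scenario might read like this (an invented example, not an excerpt from our actual suite): the same plain-language steps can drive an Angular component test today and a React one tomorrow.

```gherkin
Feature: Checkout
  Scenario: Guest completes a purchase
    Given an empty shopping cart
    When the user adds "Blue T-shirt" to the cart
    And the user pays with a valid credit card
    Then the order confirmation page is shown
```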
Yes. For various reasons we actually have more FE unit tests than BE unit tests. The aim is to have reasonable coverage for both. They're supported by end-to-end API tests (they cover a lot of transactional logic and system integration that unit tests can't cover) and a small number of end-to-end UI tests.
Yes! I try to push for very high levels of coverage on state management, state selectors, etc. Unit testing in this manner gives a really good sense of security and lets us refactor without fear. Component testing is a little more lax, but the core functions must still be testable: make sure things mount, spy on dispatchers to see that they're called when you simulate clicks, etc.
It's well worth writing at least functional tests for frontend code so that you can refactor easier should you find in the future that you have some code smell.
For interviewing people, you can often find confusion about simple things in the online interview. I've recently switched to presenting front-end candidates with a CoderPad, pre-populated with an empty function and a test suite against what it should do. This highlights unit tests' ability to serve as documentation: you can easily write tests against the functional requirements of a system. Since doing so, candidates have had an easier time understanding what should happen, even with a language barrier.
Yes, we started writing extensive tests for the front end when our team started using React. It helps if the framework you are using has good libraries to support testing. From what I have used, React has Enzyme and Vue has Vue Test Utils.
It can be tedious in the beginning, and can increase feature delivery time, but over the long run very beneficial. And of course it makes development FUN.
Hi, I'm kind of assuming you also use Redux.
If so, what's the strategy you guys adopted to test connected components (Redux)?
Do you test the component with or without a mock provider?
We don't strive for 100% code coverage, but we do have unit tests for all things redux and any other code that is not actual UI.
For UI, we use Storybook to build/test components in isolation (we're a React/TypeScript shop). I consider Storybook to be part of tests as well, even if only visual.
Aside from that, where relevant, snapshot testing. I haven't had time, but Storybook has Storyshots which allows you to incorporate snapshot testing right in Storybook.
When I worked at McAfee on the True Key password manager, E2E testing was crucial as we had to ensure the browser extension worked correctly.
At the moment, where I'm at, we do not have any end to end testing in place. It's something I'm pushing for. I've been looking into Cypress. That's one of the things I'd like to get in place this year.
Besides that, we have a long, rich tradition of using end-to-end tests; indeed, unit and integration tests are a somewhat recent development! In this case, we have a nice XML-based language (POSHI) that uses Selenium under the covers. It is great (we catch so many things that we would otherwise miss!), but those tests are quite slow. So JS unit tests are still useful.
This situation can be approached from the possible consequences of failure.
If we talk about the "typical" client app talking to backend services, then usually the front-end is more forgiving than the backend when a problem arises.
In the end, whatever technology you use in backend, it can have many types of clients, and barely cares about them, whether they are a web app, a native mobile app, an IOT device, etc...
That said, web front-end, unless a terrible mistake is made, is allowed to fail and continue. Should it be? That's another debate. But the strategy on the web is ship fast, fail fast, fix fast, repeat.
A user can easily survive a misaligned button, an ugly interface, a blue screen of death XD (app crash). Even a temporary "I click but nothing happens".
"Restart the computer, phone, app", "use another browser" are half solutions but quickly done and in the hands of the user. Things can be fixed quickly.
Now, for the backend, a "stupid" mistake can bring the whole business to a halt. Data loss or data corruption, that's one thing you don't want. Security breach isn't better and you don't want a core functionality like "payments" to die on you. If millions of users are affected in an instant, that's a serious reputation risk and a business failure.
These were my thoughts about why backend is more tested, or maybe "should be" more tested.
That said, I am more of a front-end person, and I do want apps to be tested. It's quite easy to write unit-tests nowadays. Especially in case of refactor, having good tests in place can save you a lot of time and from realising too late that the wrong data is displayed. On the other hand, E2E tests for the most crucial user stories will save the day far more than people think.
A last thought that comes to mind: the rise and goal of micro-services in backend is to avoid complete failure. The business can continue by allowing non-critical services to fail. Again, this is designing for failure. Tests should reflect where the risks are. It can happen that the front-end value is far superior to the back-end one. Test accordingly.
Definitely, we do write tests. I like Selenium too; I use the awesome extension they have in Firefox Dev Edition. Testing the FE code is just as important as user-experience testing, IMHO. Also, I really like using PhantomJS with Angular, and Jasmine with Karma as the test runner. The output shows up via Jasmine's HTML reporter, and that's nice too. Here is a link to a sample run:
and here is a link to the tests:
This is a screenshot of the selenium extension:
We do for the web and, surprisingly, it's not too intrusive / unwieldy although there has been some extra work in refactoring the tests as requirements change. We're using Karma / Jasmine unit testing several Angular sites. Time in CI for unit testing is about 30% of the build (~60% is npm updates).
We do not unit test the front end components for desktop apps used in-house (WPF).
At work - no.
At the hobby project - yes, we are getting into it. Cypress and jest + react-testing-library, both of which I find great. They encourage to test how components or the app behave, not the implementation details. I find it very helpful.
Yes. We write E2E tests because it is the easiest way to verify the front end and back end work together. For web applications it is nowadays really easy to do.
If you have PHP in your stack, check out Codeception. There is a Chrome extension where you can literally click together your test cases.
No, but we use Selenium to take screenshots in a production site, so that's something.
Crawling search engine results and returning highlighted visuals.
Yes we do with Jest and enzyme.
No. Goals are to split the monolith, redesign, fix identity management, and then cypress.
Yes, including visual regression tests running in headless chrome, driven by puppeteer. They automatically fail if any components differ from the expected snapshot.
If I could find an agnostic service for reliable front-end integration testing, I would do more of it. Especially one that could be used on static sites.
I know that, with tools like Selenium, that is doable. But I do not.
Yeah, we use Create React App which has Jest configured. Right now we are sitting around 250 tests. We shoot for 85% coverage overall with 100% coverage of core logic like state and reducers.
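For reference, Jest can enforce that kind of split threshold directly in its configuration via `coverageThreshold`; the paths in this fragment are illustrative.

```javascript
// jest.config.js (paths are illustrative)
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: { lines: 85 },          // overall target
    './src/state/': { lines: 100 }, // core logic held to 100%
  },
};
```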
We have unit tests for our Vue app and end-to-end tests (Selenium) that do all the interaction through our app, so I guess the answer is yes.
No, we don't. But we are probably updating our front end soon, I'm not sure if we are going to add tests. The js tooling is a nightmare right now D:
Unfortunately no. Does anyone have any good advice for swaying my superiors?