With the rise of low-code and no-code solutions, I've been thinking about the problem they are trying to solve.
This led me to think more widely about ...
Thought-provoking article, Benjamin. It makes me wonder what the next level for testers would look like. It sounds like one where a hyper-empowered QA Engineer can assemble a realistic scenario. Of course, I value people who can achieve a high-quality testing process cobbled together without requiring my whole budget. This might be a tough question, but where do you see a QA Engineer adding value in an enterprise, given this maximalist view of testing?
We humble QA Engineers often have to manage being a lot of things to a lot of people.
If we are lucky, we are well supported, respected and don't need to wear all of these hats at the same time.
The larger the organisation and the bigger the teams, the more opportunity there is to specialise and add value by doing fewer things well.
But there is great power in small teams. I work in a squad as the sole QA Engineer with 6 Developers, a solutions architect, a DevOps engineer and a product owner. I don't do all the things all the time, and I get the opportunity to pair with others in the team to achieve our goal of delivering high-quality solutions.
Like I said, making software is a team sport.
Benjamin and Alan, could you both clarify "maximalist description describes pretty much an end to end or system integration test" and "next level for testers would look like"?
I did not view this as end-to-end testing but rather integration testing at all layers.
Alan, you've introduced a concept of testing levels. I don't think this article established a leveling system, but I could be wrong.
I read into it. I work on Kubernetes infrastructure things now, and the problems I would have highlighted do not exist anymore. There are different emergent problems, and I wonder if people are aware that the skills in this kind of role might change dramatically in the next few years.
I tend to find problems I thought the industry had solved recurring. What I mean is, there are lots of solved problems that remain unsolved for teams and companies that are not at the same level of maturity.
This isn't a bad thing as such; sometimes, by solving the same problems again, we come up with better and better solutions.
Well, now I'm curious: what do you feel is solved, and what are the new challenges? I have some thoughts on the subject of Kubernetes, but I would like to know where you are going with it.
It starts to get a bit complicated to talk in general terms at this point without discussing teams I've worked with and their problems. And I want to avoid naming names.
As a general trend, Docker is making my life steadily easier in terms of setting up isolated test environments that have controlled test data.
But, if I hit something that cannot easily live in Docker for licensing or technical reasons, then I'm back to hosted environments I'm not in control of.
I test a lot of APIs so mocks fill in some of these gaps.
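To make that pattern concrete, here is a minimal sketch (not my actual setup, and the table, image and URL names are invented for illustration), assuming Python with pytest, testcontainers-python for a disposable database with controlled test data, and the responses library to stub the external API that can't live in Docker:

```python
# Sketch: each test run gets its own containerised database with known seed
# data, and the third-party API is stubbed, so the test is isolated and repeatable.
import pytest
import requests
import responses
import sqlalchemy
from testcontainers.postgres import PostgresContainer


@pytest.fixture
def seeded_db():
    # Disposable Postgres container, seeded with controlled test data.
    with PostgresContainer("postgres:15") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.begin() as conn:
            conn.execute(sqlalchemy.text(
                "CREATE TABLE orders (id INT PRIMARY KEY, status TEXT)"))
            conn.execute(sqlalchemy.text(
                "INSERT INTO orders VALUES (1, 'NEW')"))
        yield engine


@responses.activate
def test_order_lookup(seeded_db):
    # Stub the hosted service we cannot run in Docker (hypothetical URL).
    responses.add(responses.GET, "https://payments.example.com/charges/1",
                  json={"status": "PAID"}, status=200)

    payment = requests.get("https://payments.example.com/charges/1").json()
    assert payment["status"] == "PAID"

    with seeded_db.connect() as conn:
        row = conn.execute(
            sqlalchemy.text("SELECT status FROM orders WHERE id = 1")).one()
    assert row.status == "NEW"
```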
But now I've got a problem: my Integration Tests are not always triggered by builds or before merges, and reporting and monitoring aren't trivial. And if something goes wrong, debugging is harder.
So you make gains in some areas, but end up with gaps that were solved problems with static environments, like the SUT staying around to debug.
All the new problems are solvable. It just all comes down to time, and sometimes licensing or infrastructure costs.
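On the "not always triggered by builds or before merges" gap, one low-tech mitigation is to mark the Docker-dependent tests so the pipeline can select them explicitly (for example as a required step before merge) rather than relying on someone remembering to run them. A sketch, assuming pytest; the test name is made up:

```python
# conftest.py -- register an "integration" marker so the pipeline can run
# `pytest -m integration` as an explicit, required step.
import pytest


def pytest_configure(config):
    config.addinivalue_line(
        "markers", "integration: needs the containerised test environment")


# In a test module: tag the tests that need the Docker-based environment.
@pytest.mark.integration
def test_orders_api_roundtrip():
    ...
```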
Most software problems stem from human communication issues: Conway's law. It is a belabored topic, but ever-present and essential.
Anything related to server setup, repeatability, or scalability: I wouldn't spend a lot of time on it unless I'm aware of non-functional requirements, depending on your environment. I think that in a large enough business with shifting business models, it might be nice to have someone thinking it through and telling me: "This concept doesn't make sense anymore". That's fanciful; I'm also bored to death with UI automation ;)
As our microservices architecture matures, and working with skilled developers, I find fewer straight code bugs.
Problems with configuration and deployment remain plentiful as do bigger picture issues.
Being able to explore, learn and exercise the full stack end to end, from concept to production, is a privilege I enjoy as a quality expert.
I test and automate very few UIs. So there is that at least.
I love the repeatability of containers: being able to stand something up for testing, with each tester having their own. So many manual install fails.
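For the "each tester having their own" part, here is a rough sketch of what that can look like with the Docker SDK for Python (the image name and port are hypothetical):

```python
# Sketch: give each tester their own disposable copy of the system under test,
# keyed by their username, so nobody fights over a shared environment.
import getpass
import docker

client = docker.from_env()
tester = getpass.getuser()

sut = client.containers.run(
    "myteam/shop-api:latest",     # hypothetical image name
    name=f"shop-api-{tester}",    # one container per tester
    ports={"8080/tcp": None},     # let Docker pick a free host port
    detach=True,
)
sut.reload()  # refresh attrs so the assigned host port is visible
host_port = sut.attrs["NetworkSettings"]["Ports"]["8080/tcp"][0]["HostPort"]
print(f"{tester}'s environment is at http://localhost:{host_port}")

# Tear down when done:
# sut.stop(); sut.remove()
```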
I am definitely still very busy though.
I was contrasting system integration testing, which I consider End to End testing to be part of, with what I would call partial automation, or using tools and code to assist Exploratory Testing.
I admit I casually threw the term End to End in there without any discussion. Really it's a whole topic of its own.
I discussed End to End Testing over at The Club:
club.ministryoftesting.com/t/where...
And this is a summary of what I found out: