DEV Community

Arlene Andrews

Out of the Comfort Zone (Or: Why I love the testing community)

*(Image: warning sign - "Reality check ahead")*

As some of you may know, I am fairly active on Twitter - if you count retweets of things I think others will find interesting or useful. I started simply to ensure that some highly useful articles got out to the wider community, and it's gone from there, all the way through two rounds of #100DaysOfCode. Shifting my focus to testing - my current area - has brought me even more wonderful ideas and people: I have more things to think on, and apply. Plus wonderful people who, at unexpected times, challenge me.

I got a message from someone with a potential issue, asking for my opinion on how I would solve it. A typical thing in this community - can I think outside of the formulas that exist? Okay, I have opinions - and you want them?

The challenge was presented as: a new office is opening, and management is afraid that there will be quality problems. They are planning to hire a huge batch of people - over a multi-month time frame - none of whom are anticipated to have much experience with ensuring quality, or even with testing. The desire is to make both part of the office's culture, and management seems committed to having us as a partner to make sure the teaching, training, and coaching happens, and stays on track.


At first, I was iffy about responding: I'm new to the testing world, and my HR experience likely happened before a decent proportion of you were born. People don't change that much over the years, but this situation was one that convinced me I never wanted to manage again. However, I must admit, I was flattered to be asked. And the more I thought about it, the more I found I actually did have some hopefully intelligent things to say on the subject of creating culture in a medium-to-large group.

The first thing I did - and some of you who know me will smile - was to list a batch of current resources, and places to go to find more information. I suspected that the person asking knew of these, but having them all in one spot, to make sure that they are there and part of the teaching, was important to me. Having current information is vital - and sometimes the distance of the Internet helps people ask questions, and may encourage them to look for things that they might not in a more public group.

Then, of course, I suggested ensuring there was someone who would be willing to push back when the guidelines - whatever they turned out to be - weren't followed. Plus an automated check, either on commits to the project or before release. A person can be gotten around - and a machine cannot check that tests actually test anything. Between the two, however, it feels like having both would encourage quality checks before creating anything further. If this is established on day one, and then continues to be a focus, it will eventually become part of the culture.

Or one hopes it would.

Having to hire - then let people go - is a strain on any business, and on the teams that make it up. New hires, and offering guidance to them, change a team's dynamics in ways that are usually stressful. I am a firm believer that changing one person on a team results in a new team, and that the new team may function completely differently than the previous one. Add constantly changing personnel, and you have the potential for intentions getting lost, or being ignored.

So how to teach, and make sure everyone is participating in the training, with the ongoing possibility that someone completely new will join in tomorrow? I love teaching, and making sure people understand and can use the information to improve. Watching someone successfully tackle a challenge - especially one that they swore a week ago was beyond them - is a delight. So how to make this a culture, given that there are as many different styles of learning as there are people? This is a battle every manager, everywhere, throughout time has faced.

All right, let me think about what I, personally, would do, or at least would like to try.

This took a bit - I wasn't a bubbly companion at lunch that day, admittedly. The following is my answer, in a slightly expanded form, to this question. I imposed no budget or distance restrictions, nor many of the myriad other constraints that would have to be considered if doing this in real life.

One of the assumptions I did make was that the company would be using some form of version control. Not only would this provide a single place to run final tests, it would allow some basic guideline enforcement, and prevent potential issues if something didn't work. Or the ever-present potential of a virus making its way onto a computer, and then into the code. Plus, there would be the ability to talk about any issues that came up, and give everyone a chance to see and test new code as it came in. I know there have been many times that I didn't understand why something was happening until I looked at the code, and found out why it worked that way.

As much as I hate them - they can easily become either a crutch or effectively ignored - a checklist is something to start with. Broad strokes, since these are adults - not every detail, but "Your first commit must include a test, and you must contribute to other people's commits data or scenarios that will actually attempt to make their tests fail in one or more ways." That, if enforced, might eliminate a lot of `expect(true).toEqual(true)` type tests.
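To make that difference concrete, here is a small sketch using plain Node assertions and a made-up `clamp` function - the function and the checks are mine, purely for illustration:

```javascript
// A hypothetical function under test - any small piece of logic works here.
function clamp(value, min, max) {
  if (value < min) return min;
  if (value > max) return max;
  return value;
}

// Vacuous: this passes no matter what clamp() does, so it tests nothing.
console.assert(true === true, "tells us nothing about the code");

// Meaningful: each check pins down real behavior, and a bug in clamp()
// would make at least one of them fail.
console.assert(clamp(5, 0, 10) === 5, "in-range value is unchanged");
console.assert(clamp(-3, 0, 10) === 0, "below the range clamps to min");
console.assert(clamp(42, 0, 10) === 10, "above the range clamps to max");
```

Contributing "data or scenarios that attempt to make the tests fail" would mean adding inputs like `clamp(0, 0, 10)` and `clamp(10, 0, 10)` - the boundary values a sloppy implementation tends to get wrong.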

With potentially only a small team setting these standards, and a distant office that may not give those teaching and enforcing these new things a built-in backup system, some of the standard solutions may be less effective once resistance to new ways (or even simply ignoring them, in hopes that these aren't firm stances) starts showing in small ways. Good group and mob programming, or remote pairing, can be a motivation all by itself, but if there is even one person who thinks this is the wrong answer, it can quickly lose its effectiveness.

These would be great for establishing the habit of delivering quality, but might be tougher than expected to implement. Hold short meetings as people are hired, show by example what is expected, and call out the positives that have been done. Small steps will bring long-term benefit, but that may not address current management concerns - nor be motivating for a while. With that many new people, it's going to be tough to build a team culture of 'this is how we do things' when you may be adding new people every month, and shifting experienced folk to the new teams.

I originally wrote about selecting a metric, and making sure that the teams were involved in it. After some discussion, I wish I hadn't - selecting a single item, which may be influenced by anything from a power brownout to the office coming up against an issue they didn't anticipate, is an ineffective way of showing how well things are working. I think now I would suggest a chart, or other visual item, that would show *the whole department*, as individuals, and how they feel things are going. Everyone should be armed with the understanding that it isn't always going to go up, and may go down often.

Some of you might notice that there has not yet been a mention of technologies that might be used, other than charts. This is simply because *the tech isn't the hard part.* Getting people to work together, in a culture where the answers aren't known to anyone, but everyone can learn if they are willing, is the part that is tough to achieve. I have seen it, and have been lucky enough to join some of those groups. It is hard work for all involved, in addition to the computer-based work that is expected. And a delicate balance to keep.

But it is possible, and a joy to work in and watch it grow.

So, from the first day - communicate The Goals. Period. These are the focus - not the tech, not the thing that gets added because you think it's neat, and certainly not the area you are expert in. And do it in a way that makes everyone feel okay with *over* communicating with the team, managers, and you. No one can read your mind, and taking the time to explain an issue in detail - and to read those details before asking questions - will save everyone frustration. It is also respectful of the business.

Then comes learning the way of thinking behind "how can this be misused?" or "what would happen if..?" This is the tough part, since most folks presume that things will only be used in the way they were designed. There are LOADS of apps that prove otherwise - and some folks forget that a misused application may prove more popular because it fills a need. They should be ready to accept that, and decide whether they are going to support that new use.
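That "what would happen if..?" habit translates directly into test cases for inputs no one designed for. A small sketch, again with a made-up function of my own invention:

```javascript
// Hypothetical parser for a quantity field - illustration only.
// Accepts only positive whole numbers; everything else is rejected.
function parseQuantity(input) {
  const n = Number(input);
  if (!Number.isInteger(n) || n <= 0) return null;
  return n;
}

// The happy path everyone remembers to test:
console.assert(parseQuantity("3") === 3, "normal use works");

// The misuse questions: what if someone types words, negatives,
// fractions, or nothing at all?
console.assert(parseQuantity("banana") === null, "words are rejected");
console.assert(parseQuantity("-2") === null, "negatives are rejected");
console.assert(parseQuantity("2.5") === null, "fractions are rejected");
console.assert(parseQuantity("") === null, "empty input is rejected");
```

The second group of checks is where the testing mindset lives: each one starts from a way the field could be misused, rather than from the way it was meant to be used.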

Finding that testing mindset can only be hinted at by the checklist. The teams need to find ways to trust the testing, and to trust that everyone is writing tests and looking at each other's work - a series of tests that truly don't add value will make this a futile effort. So the process of working together to find these issues - maybe code comments, something to show that those tests are usable and valuable (a beast to police, until everyone understands that these are highly visible to everyone on the team, as well as to management) - and getting input on how to address them, while keeping The Goals in mind, needs to be practiced. I'm sure you can find someone to write some code that isn't perfect, and/or focuses on a non-Goal area, to use as an example - that way, no one feels like it is their code being talked about at first, and ideas may flow better.

The big question I saved for last - is management prepared to deal with the ramp-up time and costs? I know they are committed now, but will they accept that this is an ongoing expense, resource, and time drain? Have they made time adjustments for a release-stopping issue? Or is this going to be a disposable item if time gets tight? The answer will help you plan where the testing needs to take place, and how much the teams will actually be involved in helping decide "What is quality?" - as well as in being comfortable giving feedback and - the very most important thing - owning the product, and the changes that may need to be made in the future.
