<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Maaret Pyhäjärvi</title>
    <description>The latest articles on DEV Community by Maaret Pyhäjärvi (@maaretp).</description>
    <link>https://dev.to/maaretp</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F607696%2F46f9d40b-1b13-422c-a49b-f643e531ae57.jpg</url>
      <title>DEV Community: Maaret Pyhäjärvi</title>
      <link>https://dev.to/maaretp</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/maaretp"/>
    <language>en</language>
    <item>
      <title>Making Releases Routine</title>
      <dc:creator>Maaret Pyhäjärvi</dc:creator>
      <pubDate>Wed, 22 May 2024 21:09:35 +0000</pubDate>
      <link>https://dev.to/maaretp/making-releases-routine-b9a</link>
      <guid>https://dev.to/maaretp/making-releases-routine-b9a</guid>
      <description>&lt;p&gt;Moving organizations from infrequent to frequent releases has been my signature move for a decade. While the successes of moving from 30 days to 30 minutes of release timeframe are things I have learned from, nothing teaches you like a good old failure. After a streak of improvements, last year I faced an interruption to the streak with a four month stabilization phase.&lt;/p&gt;

&lt;p&gt;We learned from the failure as a team, and I learned from it as an individual. On the off chance that you could try your own mix of failing instead of repeating ours, let's dissect our experience together. Having just completed the team's seventh release of 2024, I can say with fair certainty that we have made releases routine, again.&lt;/p&gt;

&lt;p&gt;Key takeaways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The necessary rewiring of the tester brain around change, trust, certainty and time&lt;/li&gt;
&lt;li&gt;How, in practice, we make testing continuous and releases routine&lt;/li&gt;
&lt;li&gt;The importance of conceptually separating the pipeline from compliance work&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ll4ddnv694co7dch2l4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ll4ddnv694co7dch2l4.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my three decades of working in software projects with a testing emphasis, I have changed jobs a lot. I believe that while practice does not make you perfect, it makes you better, and one signature move I have had plenty of practice with is shortening release cycles. &lt;/p&gt;

&lt;p&gt;I joined this organization 4 years ago, and I have been through the signature move here multiple times already. This talk, however, focuses on the latest product, my 2.5 years on it, and the deeper lessons I drew from that time. &lt;/p&gt;

&lt;p&gt;In the first of the three years, we went through the motions. We built a continuous testing capability with an increasing amount of test automation, and in particular a pipeline in which we could capture the moves needed. Saying we went from a month of release testing to 30 minutes is a simplification, because we really went from not getting releases out to getting them out regularly, and pretty routinely. The routine relied on me as the team's principal test engineer. &lt;/p&gt;

&lt;p&gt;We had many good things. I prioritized information I acquired by testing on the fly, and fed it to the right developers at the right time. I prioritized for timeliness of information. I created a systematic process, but one that relied on a conversational approach rather than rules for using tools. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feda2f0hp7hgodry0wzr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feda2f0hp7hgodry0wzr5.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since I had been through this numerous times before, I repeated the basic maneuvers. I taught people that releasing frequently is a game-changer, for the whole team and particularly from a testing perspective. My go-to story was "Pool is not a bigger bathtub". It was not about doing the same moves faster and more frequently, but about the new purposes the new practice enables. You can imagine a pool party, or a lifeguard by the pool, but try placing those two at a bathtub and the image is more amusing than useful. Both are containers of water, but they enable entirely different purposes. Just so, infrequent and frequent releases are both releases, but the entire flow of how teams work gets to change. &lt;/p&gt;

&lt;p&gt;Explaining to people that there is a fundamental adjustment in our approach to change, trust, certainty and time turns out to be core to any of these transformations. &lt;/p&gt;

&lt;p&gt;Having had the pool, I did not want to go back to bathtubs. I liked the steady pace, without having to push myself to test at the end, even on weekends, to hit schedules. The entire frame of how testing is set up and managed is different. I like how it's different. Even if we did not deliver the "release" to the customer environment, having it available is magic. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyydx06d01elkxn9ak06e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyydx06d01elkxn9ak06e.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From metaphors, we'd go on to sample some of the practices: moving testing down the decomposition chain, tighter collaboration and shared ownership, paying attention to the purposes of parts in the architecture, and hiding things you deploy but want to keep hidden from customers behind feature flags. &lt;/p&gt;

&lt;p&gt;Practice made us better. Doing releases was not particularly painful. I was ready to change teams again. But this time, instead of leaving, I answered a &lt;em&gt;what would have to be different for you to stay&lt;/em&gt; question, and became the development manager for the team. We agreed to have no tester in the team. I was confident that would work just fine. After all, the developer colleagues I had worked with were great and I had thoroughly enjoyed our collaboration. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6qk8u06pcbbqd58hbx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6qk8u06pcbbqd58hbx1.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It turns out things don't always go as planned. I found myself entangled in taking on a lot of new responsibilities while holding on to a lot of the old, and unforeseen circumstances made us go back on the no-testers policy, since we had a great product expert in need of repositioning. &lt;/p&gt;

&lt;p&gt;In more ways than one, the second year surprised me. In the autumn of 2023, I had to face reality. Over a timeframe of just a few months, we had a version of the product where nothing worked and releases were a pipe dream. We went from frequent releases to struggling with the one release we needed to have that year, and making it took more than three months. It was the most painful release I could remember experiencing.&lt;/p&gt;

&lt;p&gt;The first two releases of the year were routine, and I took care of the routine on the side of picking up lots of new work as the manager. The developers did just as well as they had before at keeping releases on a short leash. Then I needed to introduce a tester into the team. More of my time went to things other than hands-on testing, especially since I needed to make space for the new tester to take up testing work. &lt;/p&gt;

&lt;p&gt;With a track record and an identity built on the signature move, failing with this was not the easiest of experiences. There was no easy and simple explanation. If there had been, I would have been able to pass that information to the team I was working with, and a three-month stabilization phase would not be what we were experiencing. &lt;/p&gt;

&lt;p&gt;A few months to stabilize is not a major thing from an organization's perspective. But from the perspective of someone whose specialty is avoiding this usual and typical thing in projects, it was more relevant. We know how to come back from it. Address the uncertainty buildup. Test and fix. But that uncertainty makes you put in more heroic efforts to hit the schedule than the usual continuous routine would. And the quality at the end is not at the level of certainty you get from a steady stream of a growing product. &lt;/p&gt;

&lt;p&gt;Failing is ok though, and failing being acceptable needs to be modeled. Three months in an organization that wasn't releasing frequently before may be inconvenient to my aspirations, but it led us to learning. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckatfii20u0l7q4b8dpq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckatfii20u0l7q4b8dpq.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mistakes were made, and most definitely by me. Then again, we were not out to find a single culprit, but to understand what had taken us to an entirely different experience. &lt;/p&gt;

&lt;p&gt;We realized we had simultaneously let go of two key ingredients of frequent releases. We tested continuously, but we also accrued a results gap, as we didn't complete things before there was more to test. We got more to test because whenever the product owner asked about capacity, a free developer would pick up new work over fixing bugs. &lt;/p&gt;

&lt;p&gt;In a few short months, we accrued uncertainty at a scale that made testing difficult, and the sharing of fixing work broke down completely. A lot of this breakdown was related to practices I had as a tester that I had not taught the new tester; some intricate ways of testing and communicating are invisible to a seasoned tester, and a more structured approach is needed to support someone learning the finesse. &lt;/p&gt;

&lt;p&gt;I had not realized how different it is to have someone learning testing versus someone with a lot of experience in it. I undervalued the need for structure. I now point back to a great essay from the feminist movement on avoiding structure as a response to the experience of oppression, and realize a lot of my troubles are about what the essay calls the &lt;em&gt;Tyranny of Structurelessness&lt;/em&gt;. Power reduces the need for structure, whereas lack of power makes structure more necessary. Seniority is power. &lt;/p&gt;

&lt;p&gt;I have often quoted Cem Kaner: "A tester who does not report bugs well is like a refrigerator light that is only on when the door is closed". My established conversational style of feeding requests for fixes without prioritizing them through a product owner wasn't possible for others. I had levels of status even before I was a manager - after all, I was the only principal tester in the organization. And I had not realized how much uncertainty we had accrued before I put my hands in the software again. &lt;/p&gt;

&lt;p&gt;I should add that there was also a new kind of testing (performance) that I injected while we were still in the fixing cycles. It turned out to reveal significant performance degradations whose root cause was hardware. Limit WIP, also when adding new testing capabilities. Well, I did not. &lt;/p&gt;

&lt;p&gt;We dug ourselves out of the self-made hole with a very traditional bug reporting and fixing cycle. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzeglag241yhq61ygbs3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzeglag241yhq61ygbs3k.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We were barely out of the hole when we had retrospective conversations with the team. There were many reasons, and the conversation was not an easy one due to the systemic nature of the problem, but the core output of the retro was a common commitment: "Make Releases Routine". We decided the routine required that when a month was over, a release would happen. A month would be the longest we would allow ourselves; we would prefer two weeks. We would not let ourselves go on a longer leash anymore. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnimjuozwl34xikgpn8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnimjuozwl34xikgpn8j.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Energized by the failure and the learning, we modeled the tasks involved in making a release. Our particular flavor had 18 tasks. Team members paired with me on those tasks. And we learned that many - most - of those tasks were things only I knew how to do. &lt;/p&gt;

&lt;p&gt;We grouped the work to see patterns in the tasks, and narrowed the team's tester's focus to continuous system testing rather than everything I had done for the releases.&lt;/p&gt;

&lt;p&gt;Developers set out to automate more into the pipeline. We improved the capability and reliability of the pipelines significantly. We learned that being reminded every two weeks that something was unreliable did wonders for wanting to fix it. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvr6cg6135e71qp6bg5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvr6cg6135e71qp6bg5u.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With more rounds of practice, we realized we needed to solidify and establish the continuous testing capability. We did this through detailed coaching for the tester on making good selections of priorities and of the need for collaboration. Every pass through the routine gave us a point of reflection and a possibility for experimentation. &lt;/p&gt;

&lt;p&gt;We pulled out to a separate cadence all the things that did not need to be bundled with a release. All things compliance tend to be like this. &lt;/p&gt;

&lt;p&gt;We looked critically at the tasks we were doing, and started dropping things intentionally. What we ended up with, looking at the emergent structure, is a pipeline producing a release candidate, and a set of compliance work. &lt;/p&gt;

&lt;p&gt;The latest of the releases took 30 minutes. Admittedly, that covered only the parts in the pipeline, not the compliance work. Exact times matter less than a direction that guides designing changes toward a better, more comfortable routine. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiokj1ahpg8vnie0skesl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiokj1ahpg8vnie0skesl.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There was a lot of work under continuous system testing that was a relic of the old ways of testing, from a time when we had less test automation to lean on. Building that capability in test automation allowed the compliance-related testing needs to move fully to the principle of seeing green and packaging the programmatic tests with the release as evidence. I have been watching a growing trend in test numbers and code coverage, as well as the trend of findings that automation misses. &lt;/p&gt;

&lt;p&gt;Smaller increments keep uncertainty buildup at bay as long as we include testing in all the increments. Not just any testing, but resultful testing. And that tends to mean a balance of attended and unattended testing. &lt;/p&gt;

&lt;p&gt;While I say that green is all that matters, that is not how I behave. I am the person who knows exactly how many tests we have across the various levels: unit tests, component tests, subsystem tests, system tests for basic pipeline acceptance and system tests for reliability assessment. Well, I can't know the exact number for the last, since those are model-based generated tests and we can add more just by walking the model through page objects. I know the numbers because our commitment is forward and better, and not staying true to your agreements shows up in the numbers. We had only about 3000 tests, and that's very little compared to the 14000 system-level tests we'd execute and analyze with a previous team. But that's a whole other story. &lt;/p&gt;
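&lt;p&gt;To illustrate the model-based part: those tests are not written one by one but generated by walking a model. A minimal sketch in Python, with entirely made-up pages and actions - the point is that adding one edge to the model grows the set of generated tests:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch of model-based test generation over page objects.
# The model is a graph: nodes are page objects, edges are actions.
MODEL = {
    "LoginPage": [("log_in", "DashboardPage")],
    "DashboardPage": [("open_settings", "SettingsPage"), ("log_out", "LoginPage")],
    "SettingsPage": [("save", "DashboardPage")],
}

def walks(page, depth):
    """Enumerate action sequences of the given length from a starting page."""
    if depth == 0:
        return [[]]
    sequences = []
    for action, target in MODEL[page]:
        for tail in walks(target, depth - 1):
            sequences.append([(page, action)] + tail)
    return sequences

# Each walk becomes a generated test: perform each action through the
# corresponding page object and assert the expected page is reached.
for walk in walks("LoginPage", 3):
    print(" then ".join(f"{page}.{action}" for page, action in walk))
&lt;/code&gt;&lt;/pre&gt;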

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pkay0fyygist505eh2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pkay0fyygist505eh2z.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Over three calendar years, there were quite big differences. New people in the team. New base knowledge. I would like to think that having failed and learned solidifies the practice. Time, with me away from the team, will show whether we established a way that scales. &lt;/p&gt;

&lt;p&gt;From this experience, I went back to thinking about what I wish I had explained better. What I picked up is more than a change in thinking, and it feels core enough to teach forward to anyone and everyone who will listen. &lt;/p&gt;

&lt;p&gt;I'd start with what I still keep coming back to explain. The work to shorten release cycles is worth it, and we may find ourselves bound by agreements that look outside our scope - but that are really within our scope to change. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filrm93s0r5n27u5mn2bb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filrm93s0r5n27u5mn2bb.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I still keep hearing that releases are overhead. Many teams push against frequent releases. They are still stuck in that bathtub and don't even recognize the party they could have if they had a pool.&lt;/p&gt;

&lt;p&gt;But also, they are not wrong. We do attach a lot of overhead to the concept of releasing. And some of that disentanglement of things that don't have to move together is necessary for the change. &lt;/p&gt;

&lt;p&gt;And it may be that your customers don't want more frequent releases. Not being able to give them one, and not giving them one that you have ready, are two different things though. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fee5xww80quyh8pnti6p0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fee5xww80quyh8pnti6p0.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What is this work then? &lt;/p&gt;

&lt;p&gt;First of all, it's somewhat of an investment to build and maintain a pipeline that does builds and releases. But when that exists, it does a lot of heavy lifting for the team. &lt;/p&gt;

&lt;p&gt;Second, there is process - requirements on how things must be done, and people with the power to block through reviews. Sending an email and waiting for a response can be surprisingly much work, and especially calendar time. Admittedly, the acceptance and waiting work may ease up through unintended impacts of organizational change. Having three of the four people you would ask for permission leave the organization without nominating successors with the same knowledge changes how things move. I've been one to approve master test plans, test designs and plans, traceability matrices and test reports, and I am heading out. &lt;/p&gt;

&lt;p&gt;Third, a lot of teams still pile many kinds of testing activities onto the concept of the release. It may be exploratory testing sessions by the whole team. It may be manual regression testing. It may be daily test automation with a failure analysis activity. What it shows up as, though, is a gap in testing results - and the larger the scope, the larger the uncertainty. &lt;/p&gt;

&lt;p&gt;It's not a static list, but all of these are probably areas where your aspired capability and your current capability are not quite on par. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b1czitppujdrrexz7ad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b1czitppujdrrexz7ad.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The sense of overwhelm with release work, for us at least, did not get easier by looking at the details. We minimized our themes checklist to 8, containing the 22 required high-level checks the process asked for, backed by the 3579 tickets for specific compliance checks. And yes, I have actually read these through. Sometimes I think the only other people who have are the ones who wrote them. It's quite a time investment, even with the structure that may help you skip some of it. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fil5xjrq42ppd4pt4zxm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fil5xjrq42ppd4pt4zxm2.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The results gap of testing is the most devious one though. It's like an empty page of results where magic ink turns the text visible - and the magic ink is not available to us all. It is still common that the next groups in line (acceptance) have information and understanding that your immediate team may lack. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uz1zumvtkavqpv4wnpj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uz1zumvtkavqpv4wnpj.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This boils down to a question: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What must be different to shorten release testing from 30 days to 30 minutes?&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Think about it for a moment. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrqmf06oqjhqmavfzyuf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrqmf06oqjhqmavfzyuf.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's not the automation alone that must be different. Having automation is important. More important than test automation, though, is build automation. On an earlier project, I successfully did daily releases without any test automation. We overemphasize test automation. &lt;/p&gt;

&lt;p&gt;You won't have automation that does a month's worth of work in 30 minutes. You're redesigning the work down to its relevant parts. It's a redesign of tasks. &lt;/p&gt;

&lt;p&gt;Smaller changes stop the uncertainty buildup. That makes targeting testing at a specific change possible. You can code fewer bugs when you have less time to code bugs, and you have more control when every change brings you back to a working baseline. Whether your release hits the production environment or not, the ability to make one stops the uncertainty buildup. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiaoa8n9cpqc1b3kdmial.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiaoa8n9cpqc1b3kdmial.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A core part of the redesign is that you don't test as much as you used to in what you call release testing. You do more before release testing. You ask developers to do more. You trust that developers have done more, and let them fail and learn. A hard thing to do, I know. Developers can test. Really, they can. They often don't because they like you to have work and imagine you wouldn't if they didn't leave you some. Talk about it. &lt;/p&gt;

&lt;p&gt;Learn to prioritize the search for information at the time when that information, at the scale of the team(s), is most beneficial. When you are just about to change something, a conversation on the risks of that change is welcome. When you just broke something, knowing it's the thing you just changed is welcome. Having to figure out problems without knowing what impacted them and when is a lot more work, and while we do that work too, we want less of it. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7c41doe6rnxccr79kt80.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7c41doe6rnxccr79kt80.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What also must change is bugs found at this stage. Those should be a special case, and only certain kinds of bugs - new ones - should matter. You will learn as you test the same product; apply that learning outside release testing and you have continuous work. Let bugs get on the next train. If the next train leaves in an hour, missing one matters less. Only some events are important enough to stop everything. While stopping the line on bugs is an important mentality, the line can - often - deliver the fix right after. We don't need to be gatekeeping quality. &lt;/p&gt;

&lt;p&gt;The number of testers I have offended by saying we release today whether you tested or not is not small.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4h73ir6kq1vfrgyffyc5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4h73ir6kq1vfrgyffyc5.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You make changes and deploy. You call that feature testing.&lt;/p&gt;

&lt;p&gt;You decide it's time to deploy elsewhere. You call that release testing. You don't repeat the testing you did for the former on the latter. And you might use ways of hiding the changes until your organization is ready to show them. Feature flags are a way of limiting the visibility of unfinished work from a customer perspective.&lt;/p&gt;
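&lt;p&gt;A minimal sketch of the feature flag idea, with hypothetical names - the unfinished path ships in every release, but the flag keeps it invisible to customers until you flip it:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal feature flag sketch: the new code path deploys with every
# release, but stays hidden until the flag is turned on per environment.
FLAGS = {"new_reporting_ui": False}  # off in customer environments

def flag_enabled(name):
    return FLAGS.get(name, False)

def navigation_items():
    items = ["Home", "Measurements"]
    if flag_enabled("new_reporting_ui"):
        items.append("Reports (beta)")  # deployed, hidden until ready
    return items
&lt;/code&gt;&lt;/pre&gt;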

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam7gkcfrpsoyllz9zdu5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam7gkcfrpsoyllz9zdu5.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To top this off, I have three guidelines for you.&lt;/p&gt;

&lt;p&gt;Make everything as code. When code changes, you see the change. Follow the change. Learn to understand what the change you see is. Have conversations with developers around that change. Teach your developer colleagues to write down what they just learned about the thing they implemented at the moment they know the most, which is at the time of the pull request / commit. Plans and tasks are already outdated and inaccurate by then.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwwbrnnhovkjtec43ark.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwwbrnnhovkjtec43ark.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Learn to pace testing to continuous. Start off-sync testing activities as capability-building activities to turn them in-sync. Build test automation not because it finds you bugs, but because it tells you about changes you did not understand were happening. When the tests fail, learn about what you did not know. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb34t5ze25sl2i9gs2ell.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb34t5ze25sl2i9gs2ell.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Build that pipeline, and grow it. Make it do more of the work that needs doing for every pull request. Routine really comes with repetition. Capture that knowledge in code. Code supports discipline, and while they say it is not a great boundary object, in pipelines it is a better one than in application code. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6crf79v172isyyvejtqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6crf79v172isyyvejtqu.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To conclude, I am quoting myself. I may come to this from a tester - a manager - a programmer perspective, but "Making releases routine is the heartbeat of a good team, creating a bubble of productive serenity". You may need that bubble in the organization your team works in. &lt;/p&gt;

&lt;p&gt;It was worth the struggle to begin with. It was worth the pain of failing. It is worth the new experiments the failure gave purpose, meaning and energy to. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrqb6u7kd9bc1zmjhlp9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrqb6u7kd9bc1zmjhlp9.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I enjoy connecting with people, and love a good conversation. You may notice I like my work. I also like talking about themes related to my work. I started speaking to get people to talk to me. I talk back, and I invite you all on a journey to figure out how we explore our way into a better place for software creators and consumers.&lt;/p&gt;

&lt;p&gt;I’m happy to connect on LinkedIn, and write my notes publicly on Twitter.&lt;/p&gt;

</description>
      <category>releases</category>
      <category>failure</category>
      <category>testing</category>
      <category>improvement</category>
    </item>
    <item>
      <title>Patterns to Whole Team Test Automation Transformation</title>
      <dc:creator>Maaret Pyhäjärvi</dc:creator>
      <pubDate>Mon, 01 Aug 2022 09:33:00 +0000</pubDate>
      <link>https://dev.to/maaretp/patterns-to-whole-team-test-automation-transformation-nmn</link>
      <guid>https://dev.to/maaretp/patterns-to-whole-team-test-automation-transformation-nmn</guid>
      <description>&lt;p&gt;Looking back at test automation in a product development team for describing patterns of success for research purposes, we identified themes where the experienced success significantly differed from what the literature at large was describing. With those lessons, I moved to a new organization and took upon myself to facilitate a transformation to whole-team test automation over multiple teams, one at a time, one after the other. In this writeup of a talk, we will revisit the research from one organization two years ago with lessons from another in the last two years. &lt;/p&gt;

&lt;p&gt;I will introduce you to my core patterns for practice-based test automation transformation. I can't promise a recipe I would apply, as my recipe changes and adapts as I work through teams, but I can promise experiences of the same patterns working on multiple occasions, as well as examples of how my go-to patterns turned out inapplicable. We'll discuss moving from specialist language to generalist language, visualizing testing debt and coverage, using visualization to showcase progress made of a continuous flow of small changes, choosing to release based on automation no matter how little test automation there is, and growing individual competencies by sharing YOUR screen when working together. &lt;/p&gt;

&lt;p&gt;This writeup is of a talk I prepared for Selenium Conference India/Virtual 2022, and I write it for the same reasons I do talks: to learn by structuring my thoughts and to snapshot my current, imperfect understanding. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OobLsn-p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m6fslrwb29k1iw0xq8y4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OobLsn-p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m6fslrwb29k1iw0xq8y4.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In terms of whole-team test automation, I'm inclined to start by discussing the technology choices. Here are the choices I see around me. &lt;/p&gt;

&lt;p&gt;Core to the choices at Vaisala is that we build embedded devices, as well as hosted and cloud-based systems and services. &lt;/p&gt;

&lt;p&gt;Building embedded devices brings a special requirement for automating: you automate on physical interfaces too. We have a proprietary system we call 'plexus', and you can't really find anything on it by googling. It's a combination of hardware that can be driven by software to control other hardware. What matters is what we can do with it - we can automate withholding power from a device, turning it on and off, and pressing physical buttons for various purposes. For that purpose we've created something written in Python that is, for now, a Robot Framework library. For web UIs (a lot of devices host a web server for an administrative user interface) it's been Selenium within Robot Framework. Basically, plexus is Robot Framework + internal libraries + virtualisation. &lt;/p&gt;
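&lt;p&gt;To give a feel for the shape of such a library - not plexus itself, which is proprietary, but a minimal hypothetical sketch - Robot Framework picks up the public methods of a Python class as keywords, so hardware control reads as plain test steps:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical sketch: a Python class Robot Framework can load as a
# keyword library for driving a device through software-controlled relays.
class PowerControlLibrary:
    """Keywords for driving a device over a relay interface."""

    def __init__(self, relay_driver):
        self._relay = relay_driver  # stand-in for the real hardware interface

    def power_on_device(self):
        self._relay.set(channel=1, closed=True)

    def power_off_device(self):
        self._relay.set(channel=1, closed=False)

    def press_physical_button(self, button_name, hold_seconds=0.2):
        channel = {"power": 2, "reset": 3}[button_name]
        self._relay.pulse(channel, duration=hold_seconds)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In a Robot Framework suite, these show up as keywords like &lt;code&gt;Power On Device&lt;/code&gt; and &lt;code&gt;Press Physical Button&lt;/code&gt;.&lt;/p&gt;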

&lt;p&gt;As we've moved along in the last two years, I've been driving an effort to not rely solely on Robot Framework, but to move in steps towards a general-purpose language, namely Python. While we sometimes struggle with some folks strictly believing pytest is a &lt;em&gt;unit test framework&lt;/em&gt;, which it isn't, we've made significant steps both in embedded software and other software, running some of the new tests in pytest + Selenium - enough to learn we could remove Robot Framework should we want to. There are forces - people having specialty skills in the Robot Framework language - keeping it in place, and we're learning about our choices incrementally. &lt;/p&gt;

&lt;p&gt;In addition to Selenium, we've started using Playwright in various places. We've been running the two side by side enough to come to terms with the idea that specialty skills and interest may drive the choice, and allowing people to follow their energy helps us move to a good place, with test automation doing relevant work for us. Both exist, and it is not a problem. Someone already fluent in Selenium will work better with Selenium. Someone new seems to enjoy looking at the new, and works better with Playwright. There is no real difference in what they can test or how fast, even with the runtime duration differences - it's machine time, and we're not fine-grained enough to optimize minutes in most of our teams. &lt;/p&gt;
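&lt;p&gt;To show how interchangeable the two are for our purposes, here is the same check sketched in both Python APIs. The URL and locators are made up for illustration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Same check, two drivers. Selenium flavor:
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://device.example/admin")
assert driver.find_element(By.ID, "status").text == "OK"
driver.quit()

# Playwright flavor:
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://device.example/admin")
    assert page.locator("#status").inner_text() == "OK"
    browser.close()
&lt;/code&gt;&lt;/pre&gt;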

&lt;p&gt;Teams that don't have an embedded C + Python link, or Python as their main language, tend to favor not introducing Python to their mix. They run with the languages their teams already use, typically testing in JavaScript and TypeScript, with a mix of Cypress and Playwright. JS/TS Playwright is not just the APIs but also the runner, and confusion around 'what is what' is a bit of a challenge. &lt;/p&gt;

&lt;p&gt;For whole-team test automation, we've come to appreciate that the language of choice matters. Choosing the Robot Framework language, we are recruiting Robot Framework programmers. Choosing a general-purpose language, we have been more successful in sharing the work with whole teams and keeping the automation around and operational when a core specialist decides to pursue career steps elsewhere. Choosing a general-purpose language, recruiting testers is a notch harder, as most people skilled in Robot Framework have barebones knowledge of the language, doing a very limited set of programmatic tasks for purposes of testing. &lt;/p&gt;

&lt;p&gt;That's the world I live in now. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--314Ot6JV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8y1v6qwvuctqnro0ouxn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--314Ot6JV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8y1v6qwvuctqnro0ouxn.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before I joined Vaisala, I was working at F-Secure. Just before I pursued career steps elsewhere, &lt;a href="https://www.academia.edu/49495368/Test_Automation_Process_Improvement_in_a_DevOps_Team_Experience_Report"&gt;we published an article&lt;/a&gt; about how we did automation in a team responsible for a corporate Windows endpoint security product. The team and how we automated fascinated me, because it was the first of all my experiences with automation where I believed, honestly believed, that automation was worthwhile and working as it should for purposes of testing. We did particularly well with our test automation there - why? We decided to look at it with the help of a university research group specializing in test automation process improvement and assessment, and the article is the outcome of that collaboration. &lt;/p&gt;

&lt;p&gt;We had automation around in that organization before, too. We had grown to understand what good looks like by working on it - and failing with some of our smaller-scale successes - over more than a decade. Whole-team ownership seemed like a central aspect of success. To live through changes in organization and people, it needs to be for everyone. EVERYONE. "It cannot leave with any single one of us" became a criterion for success. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T2S_cvBZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7slnuxejocl6v3be0n9p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T2S_cvBZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7slnuxejocl6v3be0n9p.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we started writing the article, we thought it was the architecture and the services we had created. We attributed a lot of the success to &lt;em&gt;testability&lt;/em&gt;, meaning there was a higher chance of changing the product to be easier to test than of implementing hard test automation around it - and automation was possible. The same people in the same teams could choose to automate or to change the product. I find we misattributed to technology a success that was about shared ownership - whole-team ownership. &lt;/p&gt;

&lt;p&gt;We were happy with and proud of the automation architecture we had in use, particularly the parts about telemetry (coverage of automation in test vs. coverage of users in production, and analyzing results at scale while tests were running). It ran on Python nosetest, later pytest, and a lot of self-created services. While we had a tiny bit of Selenium in the overall system, a contributing factor many considered important was that a lot of the automation was done without a GUI involved. Also, most GUIs weren't web but Windows, so Selenium would not have been a choice for driving them anyway. &lt;/p&gt;

&lt;p&gt;There were some great things we loved in the services we shared for automating - particularly getting 14,000 fresh Windows operating systems running on virtual machines a day, with a wait time of under 5 seconds to have a new machine running and ready to test. Replacing the internal virtualization system with a cloud-based one could only make the experience slower. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TO8nwHSj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/158gshoht2vr8w9lhsx3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TO8nwHSj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/158gshoht2vr8w9lhsx3.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So it was not the technology choice, even if the conversation we have in conferences and communities is so often around it. What was it then? I hinted at it already with how I titled this talk: whole-team ownership. We did particularly well with automation, and the whole definition of success was automation doing the work without causing pain, and showing up positively in various outcomes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JV-luwPa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r9qy0vfgp630cx5dc8dw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JV-luwPa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r9qy0vfgp630cx5dc8dw.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;With the same kind of success criteria in mind, I look at the teams I have worked with at Vaisala during the last two years. I think all the teams are doing well by the generally available criteria of success in automation: having some, and having more over time. There is a general principle of hiring people who will automate, and automation exists around at scale. Yet I would still split the teams into three categories:&lt;/p&gt;

&lt;p&gt;Ones I consider successful. &lt;/p&gt;

&lt;p&gt;Ones I consider not successful.&lt;/p&gt;

&lt;p&gt;Ones I haven't yet been around enough to know if I would consider them successful, but I am around trying to contribute to that success. &lt;/p&gt;

&lt;p&gt;All the ones I mark with a FAIL are successful if you ask the single tester who has been building automation in that team. I consider them fails since I have already seen some of these teams have the single tester leave, and the automation vanish. Investing 10 years into building something you're proud of, only to have it eradicated the moment you leave the organization, wouldn't pass my criteria for success. &lt;/p&gt;

&lt;p&gt;Success today is a snapshot of how we use our time on sharing today. It is easy to break with a management specialization decision. It is easy to break by asking people to cut corners. For success to persist, whole-team ownership and capability need to become part of the values. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OHaAKlwC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1u8eew17cwsqwjve03vu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OHaAKlwC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1u8eew17cwsqwjve03vu.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's still discuss this &lt;strong&gt;particularly well&lt;/strong&gt; in more detail. What does it look like? Personally, I like looking at it with a visualisation at the level of code. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8mh_K46q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xyyinqmlragcch49lodp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8mh_K46q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xyyinqmlragcch49lodp.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a minute-long video, we can see a team's progress on their test automation repository, day by day, over the year I spent with them. What we see in the demo is a lot of characters moving around making changes - it is not a single person, it is the whole team. They create new tests and services, and maintain the old. It's not just the testers in the team who contribute, though it is clear there is enough work for some people to specialize in testing.&lt;/p&gt;

&lt;p&gt;Success looks like whole team testing. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mOLUzEyL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r7y13kuy6z587oc3uos7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mOLUzEyL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r7y13kuy6z587oc3uos7.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;In addition to having a whole-team idea of success, many of the practices we applied are considered unorthodox - resulting from a group of people already working well together and building on that. In the research done at F-Secure with the external researchers, some things we did and did not do were considered not to match what had been written about test automation improvement and success in the existing literature. &lt;/p&gt;

&lt;p&gt;Particularly the organic nature of groups sharing ideas with voluntary participation - enthusiastic participation, even - seemed to be somewhat of a puzzle. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fgWKpKUl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zp72s2r1qy670jxwv4qe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fgWKpKUl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zp72s2r1qy670jxwv4qe.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;And success shows in other things as well. These we could collect at the time of writing the article. The pink ones are from F-Secure, and as a comparison point I sampled the one team I spent the most time with at Vaisala in the last two years. &lt;/p&gt;

&lt;p&gt;Doing particularly well is still something I attribute more to my past team at F-Secure than to any of the teams I work with now. We are still building a co-ownership culture, and it takes time for it to root fully into sustainable behaviors that remain when I am gone. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zrx95A4P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yhooqy4ag49amrkod1vr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zrx95A4P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yhooqy4ag49amrkod1vr.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have addressed what we considered good success, it's time to discuss the why - or rather, to seek the practices and ideas that seemed to help us be more successful. Both of my cases are product development, where an expectation of long-term commitment to the product is a foundational belief. But what about the practices?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BEjxr7s8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xub8rbwess2cxzei9ojm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BEjxr7s8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xub8rbwess2cxzei9ojm.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the research work at F-Secure, we identified many practices that were considered factors for success. With the stars I overlaid on the F-Secure research results, I show which of these I now recognise as factors for success at Vaisala as well. &lt;/p&gt;

&lt;p&gt;There is one we seem to need more work on in particular - the internal open-source mindset. Personally, I am undecided between test assets within the team only and shared assets across teams; with microservices the first may be the more correct approach, though it keeps learning about test automation within the team rather than across teams. &lt;/p&gt;

&lt;p&gt;The only one I don't yet see at Vaisala is telemetry, and it was among the last steps at F-Secure, so it may just be a step ahead of us. &lt;/p&gt;

&lt;p&gt;Since you can read about these practices in the published article, I want to pick five things I did not discuss back then that I think we should discuss today on how to become successful in automation. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Awl4nEg2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qkj66jikk3noh8d75vm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Awl4nEg2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qkj66jikk3noh8d75vm5.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, the language of choice for test automation matters. While the trends of the world suggest "low-code automation" or "special languages for testing" like Robot Framework so that we can ease &lt;em&gt;manual testers&lt;/em&gt; into automation, the past manual testers are smart people and can learn programming. The idea of giving them tools that leave them operating different tools than the rest of the team feels off to me. &lt;/p&gt;

&lt;p&gt;Co-owning sustainable test automation requires building it on a shared general-purpose programming language. Removing Robot Framework has worked for me. Training developers in Robot Framework has also worked, and some developers emphasize they can write code in any language. At the same time, watching these people perform testing tasks with Robot Framework vs. Python, we get much more done in the latter. As we should know, writing code is not just writing it; it is also debugging it and finding ways to get the right things in. &lt;/p&gt;

&lt;p&gt;We built tests for a tool with pytest + Selenium and page objects in a fraction of the time it took when we tried the same with Robot Framework + Selenium earlier. IDE support is that powerful. &lt;/p&gt;
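
&lt;p&gt;To make the comparison concrete, here is a minimal sketch of the pytest + Selenium page object pattern; the LoginPage class, its locators and the URL are hypothetical examples rather than our product's code. &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A minimal page object: the test reads as intent, the page hides the locators.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://example.com/login")  # hypothetical URL
        return self

    def log_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

def test_login_shows_dashboard():
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open().log_in("demo", "demo")
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
&lt;/code&gt;&lt;/pre&gt;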

&lt;p&gt;Factor to success: choose a general-purpose programming language the team is already on. It means no single shared language for all things testing in your organization, and you can live with that. The other dimension is more important. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iH7gX9LM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yr3yen5qf08zzis2pi3o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iH7gX9LM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yr3yen5qf08zzis2pi3o.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Second, learn to talk numbers. I had not done this at F-Secure, but I think it is essential at this part of the journey we are on at Vaisala. &lt;/p&gt;

&lt;p&gt;While what I call &lt;em&gt;contemporary exploratory testing&lt;/em&gt; was the de facto way of thinking about testing at F-Secure on that one product I was in, Vaisala has more work to do in this space. It helps to frame the work as &lt;em&gt;Everything that does not need to be automated gets done while automating&lt;/em&gt;, so that good testing gets done and good automation is left behind. &lt;/p&gt;

&lt;p&gt;Showing in numbers where this is not true is good. Showing the percentage of epics without automation. Showing the percentage of automation failing and inviting us to explore - but late, with already red results in master. Showing the amount of work the late failures take, and optimising to make it smaller. &lt;/p&gt;
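
&lt;p&gt;As a sketch of what talking numbers can look like in practice - the data shape here is hypothetical; yours would come from your issue tracker or pipeline: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical sketch: turning raw tracking data into the percentages we discuss.
epics = [
    {"name": "Alerts", "automated": True},
    {"name": "Reports", "automated": False},
    {"name": "Exports", "automated": True},
]

without_automation = [e for e in epics if not e["automated"]]
share = 100 * len(without_automation) / len(epics)
print(f"{share:.0f}% of epics have no automation: "
      f"{[e['name'] for e in without_automation]}")
&lt;/code&gt;&lt;/pre&gt;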

&lt;p&gt;It's not about setting targets, it's about understanding where you are. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I6Kk2b5P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gd0gqkvrxx8viw0aiql3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I6Kk2b5P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gd0gqkvrxx8viw0aiql3.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Third, we need to remember the principle of &lt;em&gt;nothing changes if you change nothing&lt;/em&gt;. There are a lot of reasons for people to analyze a lot while moving little, and that often means we are not making the best possible progress in learning. Learning happens by doing, and a continuous flow of small changes helps us with that. &lt;/p&gt;

&lt;p&gt;This sounds like contemporary exploratory testing - we can't separate test design from test implementation, even when we are automating. We learn about the design when we implement it. And learning is essential. &lt;/p&gt;

&lt;p&gt;Showing that with tools like Gource has worked well for me. &lt;/p&gt;
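
&lt;p&gt;If you want to try it, here is a minimal invocation from Python, assuming the gource binary is installed and on your PATH; the repository path is a placeholder. &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Replay a repository's commit history as an animation with Gource.
import subprocess

subprocess.run(
    ["gource", "--seconds-per-day", "0.2", "path/to/your/repo"],  # placeholder path
    check=True,
)
&lt;/code&gt;&lt;/pre&gt;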

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A04Ni_2o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yl635qt55kh2vrg1kapt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A04Ni_2o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yl635qt55kh2vrg1kapt.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The fourth practice is about a surprise I have learned while testing for a release. We can choose not to run a single manual test case for release testing and rely on whatever tests we did while implementing the features. We can accept that the only things we reverify in the release timeframe are the ones in automation. &lt;/p&gt;

&lt;p&gt;We can choose to do continuous releases without test automation, like I did some seven years ago in yet another organization. Daily. &lt;/p&gt;

&lt;p&gt;We can choose to release once a month, running whatever tests we now have in automation. We can choose to extend our automation if this fails. And we were surprised to find we needed to test far less for releases than we had previously designed. &lt;/p&gt;

&lt;p&gt;We do follow the changes made since the last release against the baseline, so no change goes untested. We just don't collate that work into the release testing timeframe. And we intertwine manual work and automation with contemporary exploratory testing. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JPtapRPT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ncm8obqpmo8sodpg21t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JPtapRPT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ncm8obqpmo8sodpg21t.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The fifth and final practice I wanted to mention is pair and ensemble testing. Particularly in times of remote work, share your screen when stuck on an error message - don't just send the error message to someone to decipher. Take it further than problems: co-create code by sharing a screen. A pair is two people; an ensemble is three or more. &lt;/p&gt;

&lt;p&gt;I have had really positive experiences getting started with, for example, Selenium tests by creating the first ones in an ensemble: the least knowledgeable person starts at the keyboard following the words of the most knowledgeable, and becomes knowledgeable one code line at a time as we rotate who is on the keyboard. &lt;/p&gt;
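
&lt;p&gt;The first test an ensemble grows can be as small as this sketch, each line spoken aloud before it is typed; the URL and expected title are hypothetical. &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# The kind of first test an ensemble might grow one line at a time.
from selenium import webdriver

def test_front_page_has_expected_title():
    driver = webdriver.Chrome()            # line one: get a browser
    try:
        driver.get("https://example.com")  # line two: open the page
        assert "Example" in driver.title   # line three: check something observable
    finally:
        driver.quit()                      # line four: clean up after ourselves
&lt;/code&gt;&lt;/pre&gt;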

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g9UF_FV3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gtn1bimicgyppkwg9ofb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g9UF_FV3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gtn1bimicgyppkwg9ofb.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I included here a reference to an ensemble testing session at the conference the day before. Watching some of that unfold reminded me that we choose to address some things and leave others out. In particular, printing out lines is something I have personally learned in ensemble testing that I can avoid with better use of debugging tools. It made me a better tester. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PvbFqTi0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0jwtbx341rtcapblqwf3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PvbFqTi0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0jwtbx341rtcapblqwf3.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It does not matter what tools we use - here's an ensembled example of the same thing from another session with Playwright. Even with this one you can pick on choices like using location-based selectors (a feature also made available in Selenium 4) while knowing you should opt for semantic selectors. Or noticing bugs in your tools of choice (this time VS Code showing a red pass). &lt;/p&gt;
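
&lt;p&gt;As a hedged sketch of that selector choice in Playwright's Python API - the page, field labels and button name are hypothetical: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Contrasting a layout-based selector with a semantic one in Playwright.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/login")  # hypothetical URL

    # Layout-based: finds the field by where it sits on the page - brittle.
    page.locator('input:right-of(:text("Username"))').fill("demo")

    # Semantic: finds controls by accessible role and name - robust.
    page.get_by_role("textbox", name="Password").fill("demo")
    page.get_by_role("button", name="Log in").click()

    browser.close()
&lt;/code&gt;&lt;/pre&gt;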

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PqQINtaP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3t3o3qtjbnfhgxhsxy7a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PqQINtaP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3t3o3qtjbnfhgxhsxy7a.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While I chose five specific patterns for success with whole-team test automation, these all come together in an idea I call contemporary exploratory testing. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Everything that does not need to be automated gets done while automating. &lt;/p&gt;

&lt;p&gt;You can't automate well without exploring. You can't explore well without automating.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That can be true if you know how to test and automate, and learn while you do the two. There is no manual versus automated testing. There's a lot of manual work we do that results in the best automation we can keep around. &lt;/p&gt;

&lt;p&gt;An improved understanding of success is that it does not rely on single people. It is built into whole teams. And not as a service an individual provides for their team, but as a service everyone helps with and contributes to, within what they can do today, growing to what they can do tomorrow. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1qEpVY4w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1i01y4uwa076588h41zo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1qEpVY4w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1i01y4uwa076588h41zo.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I enjoy connecting with people, and love a good conversation. You may notice I like my work. I also like talking about themes related to my work. I started speaking to get people to talk to me. I talk back, and I invite you all on a journey to figure out how we explore our way into a better place for software creators and consumers.&lt;/p&gt;

&lt;p&gt;I’m happy to connect on LinkedIn, and I write my notes publicly on Twitter. &lt;/p&gt;

</description>
      <category>testing</category>
      <category>testautomation</category>
      <category>improvement</category>
    </item>
    <item>
      <title>Better Ideas At Test Design</title>
      <dc:creator>Maaret Pyhäjärvi</dc:creator>
      <pubDate>Sun, 24 Oct 2021 09:42:45 +0000</pubDate>
      <link>https://dev.to/maaretp/better-ideas-at-test-design-2c62</link>
      <guid>https://dev.to/maaretp/better-ideas-at-test-design-2c62</guid>
      <description>&lt;p&gt;Even with many years in this industry, I get inspired by courses I take. A course - BBST Test Design - served as inspiration on sharing on this: Having better ideas at Test Design.&lt;/p&gt;

&lt;p&gt;And by Test Design, I mean the continuous collection, creation and prioritization of ideas that would help us produce the results from testing that the world around us expects. The ideas that lead us into doing what we do with software; so that we recognize what we recognize; so that we have the conversations around quality that we need to have. &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqojmp4ijsbw09ruo01kk.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqojmp4ijsbw09ruo01kk.PNG" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I know from the 25 years I’ve been at this that testing is far from simple. It’s knowledge work, just like application programming, targeting information that helps us address quality concerns. &lt;/p&gt;

&lt;p&gt;With a simple model, we could describe testing as a process of doing testing where the input is someone with brains, and the output is learning to do the work better and the information and artifacts we expect in our organizations. We come as we are, and we learn: the software we are testing and its features; the problems and their relevance; each other and communication and collaboration; and the business that pays our salaries. &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35xj0jrg37ynhdwficba.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35xj0jrg37ynhdwficba.PNG" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the last months, I’ve taken a course on Test Design. On the course, two exercises have thrown me at OpenOffice Impress, the presentation software, choosing a single variable to analyze for test ideas. True to my exploratory tester nature, I could not commit to a variable before completing a whole variable tour to find something I would have fun finding information about. &lt;/p&gt;

&lt;p&gt;I chose transparency of elements. I learned quickly to connect it with a default value it could have; with editing, presenting and printing modes and their options; the different element types it could be applied with; and the many places from where you can edit it. &lt;/p&gt;

&lt;p&gt;On the first exercise, we listed risks, imagining bug reports we might end up writing about the variable. I generated the list allowing the application to be my external imagination, and it increased the creativity I could bring to the task.&lt;/p&gt;

&lt;p&gt;On the second exercise, we were asked to apply risk-based domain testing. Equivalence classes, boundary values and the like, but with the idea that risk - what we expect might fail - guides us to the equivalence classes. For example, entering a single digit can (and does) behave quite differently from something with three digits or decimal numbers. &lt;/p&gt;
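
&lt;p&gt;If you wanted to capture such risk-based classes in code, a pytest sketch could look like this; apply_transparency is a hypothetical stand-in for the behavior under test, not Impress itself. &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import pytest

def apply_transparency(value):
    """Hypothetical stand-in: parse and validate a transparency percentage."""
    number = float(value)
    if number != max(0.0, min(100.0, number)):
        raise ValueError("transparency must be between 0 and 100")
    return number

# One representative per class we suspect might fail differently.
@pytest.mark.parametrize("value, expected", [
    ("5", 5.0),      # single digit
    ("50", 50.0),    # two digits
    ("100", 100.0),  # upper boundary
    ("0.5", 0.5),    # decimal
])
def test_accepts_valid_classes(value, expected):
    assert apply_transparency(value) == expected

def test_rejects_out_of_range():
    with pytest.raises(ValueError):
        apply_transparency("101")
&lt;/code&gt;&lt;/pre&gt;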

&lt;p&gt;I found a bunch of inconsistencies and problems, and the application rewarded my tester efforts with a big visible crash dialog that nicely reproduces at will when combining two-digit numbers with undo. Yes, a single digit is fine, but two digits with undo crash the app. It reminds me that we don’t have to create bugs intentionally for learning; the software industry has us covered. &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjs1obts1nm5fyfewnk5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjs1obts1nm5fyfewnk5.PNG" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s not just on courses that I find I already think in quite many dimensions and details that allow me to discover bugs; that is the experience and reality from the teams, projects and products I work with too. &lt;/p&gt;

&lt;p&gt;With the simple process, I am often called to situations where the output isn’t where it should be. We are missing bugs. We are not documenting with test automation. We are thinking simplistically about coverage, and thus missing even the idea that there are bugs to find in other dimensions. &lt;/p&gt;

&lt;p&gt;As a tester, I start with adding results. But as a principal, being great at testing isn’t sufficient. I need to make people around me better at testing. I need to fix the practice, while adding some of the results. &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06cw0fd78lcjokje15u8.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06cw0fd78lcjokje15u8.PNG" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To fix the practice, I have a recipe of my own. I don’t do instructions and processes, and I don’t choose tools and enforce guidelines. I start my work from within, joining a team as a tester. As such, I experience what the team misses, and I try to figure out how to learn together ways of not missing that anymore, even when I am gone. &lt;/p&gt;

&lt;p&gt;We work towards making testing everyone’s business. Testing is too important to be left to just testers. Developers, product owners and neighboring teams are all welcome to pitch in. &lt;/p&gt;

&lt;p&gt;We make improvements continuously, but each individual improvement can be a small adjustment through feedback. We notice the change looking back six months, but day to day it seems we do the same things. &lt;/p&gt;

&lt;p&gt;I work to remove myself, so that I can repeat the work with another team needing insightful ways of taking small steps to better. &lt;/p&gt;

&lt;p&gt;Fixing the results starts from showing what results we have been missing. &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzy0qi9f4yk3ijaqd5rt.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzy0qi9f4yk3ijaqd5rt.PNG" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve repeated this growth journey across organizations and teams, and the takeaway I still want to leave you with is where you learn the versatility of ideas that lets you see the results we have been missing. Those ideas stem from your ability to connect information from the past with the product change work ongoing right now. &lt;/p&gt;

&lt;p&gt;I recommend you read bug reports. Not just your own, but your colleagues’, your organization’s, and if possible, whatever the customers directly report in unfiltered form. &lt;/p&gt;

&lt;p&gt;I recommend you read lists of generalized bug reports; taxonomies with a lot of relevant information are available in books by Kaner and Beizer.&lt;/p&gt;

&lt;p&gt;Learn Test Design. The BBST course series is brilliant. I grew up as a tester with Cem Kaner creating the teaching materials, and I owe a lot of foundational perspectives to his work, now packaged as online learning courses.&lt;/p&gt;

&lt;p&gt;Finally, work together with others. When you work in a group - an ensemble - you will learn about things you did not know you don’t know, and thus could not ask about. It speeds up our learning significantly. &lt;/p&gt;

&lt;p&gt;We all need to go and learn to experience what we miss. Better ideas produce better results in testing. &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzy0qi9f4yk3ijaqd5rt.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzy0qi9f4yk3ijaqd5rt.PNG" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m happy to connect on LinkedIn, and I write my notes publicly on Twitter. Looking forward to learning to provide better results in testing with you all. &lt;/p&gt;

</description>
      <category>testdesign</category>
      <category>exploratorytesting</category>
      <category>improvement</category>
    </item>
    <item>
      <title>Exploratory Testing Foundations</title>
      <dc:creator>Maaret Pyhäjärvi</dc:creator>
      <pubDate>Mon, 20 Sep 2021 18:43:00 +0000</pubDate>
      <link>https://dev.to/maaretp/exploratory-testing-foundations-4lb3</link>
      <guid>https://dev.to/maaretp/exploratory-testing-foundations-4lb3</guid>
      <description>&lt;p&gt;With so much to say and share on Exploratory Testing, what would you need to know to get started? This question lead us to summarizing basic theory on exploratory testing around one test target, and to creation of Exploratory Testing Foundations course material presented in this chapter. The course, slides and accompanying content description that together make up Exploratory Testing Foundations by Maaret Pyhäjärvi and is licensed under &lt;a href="http://creativecommons.org/licenses/by/4.0/" rel="noopener noreferrer"&gt;CC BY 4.0&lt;/a&gt; and is made available at &lt;a href="https://www.exploratorytestingacademy.com" rel="noopener noreferrer"&gt;Exploratory Testing Academy&lt;/a&gt;. Also consider &lt;a href="https://ko-fi.com/maaretp" rel="noopener noreferrer"&gt;supporting me on Ko-Fi&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To create the course, Maaret Pyhäjärvi paired on testing the application under test with the brilliant Irja Straus (Croatia), Parveen Khan (United Kingdom), Julia Durán Muñoz (Spain) and Mirja Pyhäjärvi (Finland). The application and lessons were tried with many ensemble testing groups before finally being summarized as part of the course. We particularly want to appreciate two open space communities in the creation of this content: &lt;a href="https://socratesuk.org" rel="noopener noreferrer"&gt;Socrates UK&lt;/a&gt; and &lt;a href="https://frogsconf.nl" rel="noopener noreferrer"&gt;Friends of Good Software&lt;/a&gt;. Both served as places to try out hands-on testing of the application to see the dynamics under various constraints. &lt;/p&gt;

&lt;p&gt;Pair testing and ensemble testing are social software testing approaches. In a pair, we have two people testing. In an ensemble, we have a group of at least three people. Both forms of social software testing enable us to test and learn together, and give us a better feel for the results testing of the application can produce. With tens of sessions with this little application, no two sessions have produced exactly the same results, but each session has produced useful results to build on, should we seek good coverage over our target of testing. &lt;/p&gt;

&lt;p&gt;The course and this section set out to teach foundational concepts of contemporary exploratory testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is an approach to testing in which we optimize value of our testing. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is about systematically combining information from available sources to do the best work possible for the context at hand, not merely guessing errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is multidisciplinary and allows us to take perspectives (using 'constraints') one after the other or simultaneously, at the discretion of the person doing testing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It includes use of test automation both for documenting and as a means to do things otherwise out of our reach. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It leads us to thoughtful test coverage, where the most meaningful sense of coverage is missing less of the important information (e.g. bugs).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It reveals that a simple application that appears to 'work' has meaningful layers to isolate information on. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We know there is more to teach on exploratory testing than this one course includes. We will create separate, similar yet different sets on different types of applications, with constraints that make sense for each of those examples. We will also address Exploratory Testing the Noun - the organizational frame of testing - in later courses. This one focuses on Exploratory Testing the Verb - doing really good work, optimizing the value of testing through learning while testing. We give half of this course to the constraint of test automation as documentation, as we believe this is a core aspect of contemporary exploratory testing. You can't automate well without exploring. You can't explore well without automating. &lt;/p&gt;

&lt;h2&gt;Introduction to Exploratory Testing Foundations&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngb29f5szlrg4s2ht2ua.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngb29f5szlrg4s2ht2ua.JPG" alt="Exploratory Testing Foundations - the Course"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to the Exploratory Testing Foundations course. This course intertwines a simple application to test with basic theory of how to do exploratory testing, to give you a foundation to build on. &lt;/p&gt;

&lt;p&gt;Exploratory testing is an approach to testing that centers learning. Test design and test execution form an inseparable pair, where the application and feature we are testing is our external imagination. It takes domain knowledge, requirements and specifications, and testing knowledge as input, and produces information and a better tester as output. It also encourages us to at least consider documentation, and test automation as a form of documentation. &lt;/p&gt;

&lt;p&gt;We think of this course as an antidote to the idea that test cases tell you how to test a feature and that is where a new tester would start. That type of test case is only a small subset. You are expected to find defects, where the system does not work as we specified, but not stop there. Finding change requests - things that would make the application better for users - is included. And instead of using most of your work time on documentation, we invite you to consider lighter and executable formats of documenting. &lt;/p&gt;

&lt;p&gt;This is what we fit into two days with one application. Theory and application go hand in hand. When taught in a classroom, we also reflect course experiences against work experiences and share war stories of testing in projects where applicable. &lt;/p&gt;

&lt;p&gt;In its current format, the course takes two days in a classroom to deliver, with many different passes. We are working to build a video course on the scope of the course to enable people to learn this at scale, as we are unable to make space for classroom guidance for everyone - we prioritize working as testers in projects over being teachers of testing. Thus the course material is openly available to use as is, or to adapt freely to a scope of your choosing. We have delivered it in 99-minute segments combining various constraints, learning that it takes a minimum of three ensemble testing sessions to cover the application without theory slides. &lt;/p&gt;

&lt;p&gt;Exploratory Testing Foundations by Maaret Pyhäjärvi is licensed under CC BY 4.0. To view a copy of this license, visit &lt;a href="http://creativecommons.org/licenses/by/4.0/" rel="noopener noreferrer"&gt;http://creativecommons.org/licenses/by/4.0/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff74jod56qb2e1tkujdnm.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff74jod56qb2e1tkujdnm.JPG" alt="Optimizing Value of Testing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Exploratory testing is optimizing value of testing. Let's think about that for a moment. &lt;/p&gt;

&lt;p&gt;The value of testing comes from the value of information. Value exists in relation to cost. Cost can be direct cost - the bookkeeping cost of doing something - or opportunity cost - the value of something we did not do because we did what we did instead. When you seek to optimize the value of testing, you seek to be aware of the value of things you do and things you could do, and make good choices. &lt;/p&gt;

&lt;p&gt;Thinking in terms of optimizing value helps us dispel some of the common misconceptions around exploratory testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It isn't manual, it's attended. Automation can call you to attend. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It isn't about error guessing. It is systematic optimization of all sources with continuous learning to do best possible testing with the time you have available, considering both short- and long-term value. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It isn't about feeding applications bad data to see weird error messages. In fact, we often don't care for the problems related to these and playing with them isn't optimizing the value of testing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92e95ue0fnyi3fqs6bag.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92e95ue0fnyi3fqs6bag.JPG" alt="Agency and Learning"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Exploratory Testing centers learning. The application we test is our external imagination. While we test, we learn about the application and about ourselves. We optimize the value of our work continuously. Instead of following a plan we created at a time we knew the least, we create plans, learn, adjust, even completely revise as we learn. Our ideas of the plan are best when we know the most, at the end of our testing. &lt;/p&gt;

&lt;p&gt;To emphasize learning, we emphasize agency – the responsibility of the person doing testing to do the work to their best ability, and to grow with the testing they do. &lt;/p&gt;

&lt;p&gt;We remember we learn &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;by researching the domain – the business, the legal, the financial &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;by passing on information from those who worked on the problem before us  - the stakeholders, the requirements and specifications&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;by using the application and thinking while using it&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;by critically evaluating the application as we use it&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;by focusing our attention on both how it could work and how it could fail&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;by reflecting new information against what we know from the past&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;by experimenting with approaches outside our usual repertoire&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;by centering value of information we produce&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The work we do is done in slots of time: a minute at a time, an hour at a time, a day at a time. Every unit of time, as we learn, makes us better at optimizing our value. &lt;/p&gt;

&lt;p&gt;We go as fast as we need, or as slow as we need. Just like driving a car requires you to take yourself and your surroundings into account to choose the right speed for the situation at hand, same applies with exploratory testing. You drive, your speed and your route – with the destination in mind. &lt;/p&gt;

&lt;p&gt;Optimizing the value of our testing and centering learning hint that we most likely want to avoid investing prematurely in documentation. We also want to avoid forgetting documentation, as delivering the right documentation is part of our goals. And we want to avoid separating test ideas from test execution, enabling the two to go hand in hand within the brain of the very same tester - pulling information from outside, but not executing on someone else's orders. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdw1q8tcxmsutp9es4q4n.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdw1q8tcxmsutp9es4q4n.JPG" alt="Process"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To set our minds for the road, let’s talk about the process of exploratory testing. &lt;/p&gt;

&lt;p&gt;We are given something new to test. It may be a new application, like the one on this course. It may be a change in an application you’ve worked on for years already. It may be a new feature in an application you are already familiar with. Your task in testing it is to provide information on when it works and when it doesn’t, from the perspective of stakeholders. &lt;/p&gt;

&lt;p&gt;This is a process of information discovery. We already know something, and we use that. But we are asked to learn more and share our learning with the rest of the application team. The information you provide extends the existing information. &lt;/p&gt;

&lt;p&gt;In the process of exploratory testing, anything can happen. Quality of output is related to quality of input, and to the time used in the process to learn and improve. &lt;/p&gt;

&lt;p&gt;We usually think of the process in terms of time used on activities. Simply put, there are four main activities you will want to pay attention to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test&lt;/strong&gt;: Using time on test, you go through new ideas of what to try and observe. Without time on test, coverage will not grow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bug&lt;/strong&gt;: Using time on bug, you work on understanding information you are discovering and refining it to better serve others as you pass it on. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Setup&lt;/strong&gt;: Using time on setup, you work to make testing possible. You may be setting up test data, operating the application to get to a starting point, researching while connecting information with the target of testing, or solving issues that block you from testing. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Document&lt;/strong&gt;: Using time on document, you leave notes and materials for your future self and anyone coming after you. This becomes increasingly important to track as its own activity when we apply test automation as documentation. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These activities can happen in any size chunks within the process. They can be consecutive or concurrent. Usually through practice an exploratory tester learns to intertwine things for an appearance of concurrency of some perspectives, and each tester combines things within what they are comfortable with. &lt;/p&gt;

&lt;p&gt;Thinking back to the process of driving a car: when you were new to driving stick, you would accidentally stall your car at traffic lights, forget which gear you were in, or let go of the clutch a little too soon. Over time, the basic operation of the car became routine, leaving you time to pay attention to your surroundings rather than to operating the car. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3j8zwzbvi89xfu5jzfok.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3j8zwzbvi89xfu5jzfok.JPG" alt="Input of Exploratory Testing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The most important thing going into the process of exploratory testing is whoever is doing the testing. It could be a career tester. It could be a programmer. It could be an analyst. Job role aside, we call someone who performs testing a tester. &lt;/p&gt;

&lt;p&gt;No matter what capabilities the tester goes in with, going in with the idea of learning while doing, we come out different. Since we optimize our value, we optimize our learning too: we want to always be both &lt;strong&gt;learning&lt;/strong&gt; and &lt;strong&gt;contributing&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;What we can’t take in as we start, we can acquire as we do testing. We can ask around. We can research. We can read existing documentation and apply it to the application under test. We can constrain ourselves with a test technique. We can make notes and create versions of documentation we intend to leave behind. &lt;/p&gt;

&lt;p&gt;Exploratory testing – doing testing from whatever the input - emphasizes learning while contributing. And while it can be something you do solo, on your own, it is also something you can do with a pair or a group (ensemble). &lt;/p&gt;

&lt;p&gt;We want to specifically mention four categories of input:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Domain knowledge&lt;/strong&gt; is about what the tester knows and how well the existing knowledge enables connecting with new knowledge to understand what the application is about, why anyone would want to use it, and what risks pose a relevant, meaningful threat to its value. Both knowing the domain of this application and knowing another domain enable you to compare and contrast the information you have with the information you are acquiring as you test, building patterns. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Requirements and specifications&lt;/strong&gt; is about knowing the agreements around the organization on what the application under test should do. While being aware of claims is good, sticking only to claims made by others limits testing in a way that it can block us from starting conversations on relevant features we are missing. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Testing knowledge&lt;/strong&gt; is about knowing how to think in terms of charters and constraints to provide new relevant information about the application under test. It's about understanding the difference between seeing something work and seeing it fail, both in ways it should (error messages) and should not (bugs). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Miscellaneous knowledge&lt;/strong&gt; is about everything else, including the tester's ability to program. Being fluent in programming enables writing documentation as code that can then stay around for later. Endless curiosity in wanting to understand how the world works helps ask relevant questions about the application instead of settling too low. Catalytic skills enable drawing other people's knowledge into the work you are doing and creating connections for shared success. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vikxtc19xq1kia3t1s7.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vikxtc19xq1kia3t1s7.JPG" alt="Output of Exploratory Testing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What comes out of the process is a better tester but also different kinds of deliverables we consider relevant when optimizing value of testing for &lt;em&gt;the specific testing at hand&lt;/em&gt;. Sometimes we only report new information. We leave coverage as implicit, but our information is only as good as our ability to cover multiple perspectives. While we can come out with only information, we often want to put our learning into a good use for our own and our coworkers’ future and leave behind documentation, even executable documentation. &lt;/p&gt;

&lt;p&gt;Let’s now say this out loud: test automation belongs in exploratory testing. You can’t explore with great coverage without automation. You can’t automate without exploring in a relevant way. When automation fails, it calls you to explore. With that said, we are optimizing the value of our testing with exploratory testing: value today, value in the future. Automation is a constraint that directs our attention. We make choices about when it is the right time for that constraint. &lt;/p&gt;

&lt;p&gt;If you don't know how to code and write automation, it cannot be part of the exploratory testing you do personally. What you cannot do personally, you can compensate for through collaboration. Collecting ideas while exploratory testing to pass to team members to document as automation may be a constraint you live with. You can also learn to code and remove a knowledge-based constraint. The same applies to people who know programming but have a hard time with good testing ideas. You can also learn to test and remove a knowledge-based constraint. &lt;/p&gt;

&lt;p&gt;On the knowledge-based constraints, we would like to remind people that the software industry doubles in size every five years, meaning half of us have less than five years of experience. With less than five years of experience, we have knowledge-based constraints that we learn away from later in our careers. Choosing a focus of skills to learn first is a natural way for us to divide the learning in a team, without letting it box us indefinitely into roles. Contributing to test automation efforts isn’t the most complex of our programming tasks, and we believe everyone can learn it. We're less sure whether everyone can learn to think in ways that ensure multidimensional coverage, and we're hopeful about that too. &lt;/p&gt;

&lt;p&gt;Successful output is effective: we find problems our organization expects us to find. The path to the result of finding relevant issues is through understanding coverage. &lt;/p&gt;

&lt;p&gt;The four main categories of output we want to mention are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Coverage&lt;/strong&gt; is about knowing how effective your testing is. Did you cover code (implemented), requirements (asked), risks (problems) and how can you tell?   &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Information&lt;/strong&gt; is about knowing your results. What conversations is testing starting? What changes are we making based on those conversations? Are we removing bugs that might bug a user? &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Documentation: Strategy&lt;/strong&gt; is about knowing how we approach testing for this particular application under test. At first our ideas are vague, but in the end they should be at their clearest state. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Documentation: Tests&lt;/strong&gt; is about leaving behind a checklist of any sort that enables us to build on current learnings ourselves later or by others. How can we accelerate testing for next time around? &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp25l77wem5v68i9d87n9.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp25l77wem5v68i9d87n9.JPG" alt="Course Outline, the Long Version"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This course is the first in a series of Contemporary Exploratory Testing courses where the teaching is framed around a particular test target. For foundations, our application is very simple, and it was chosen based on the idea that it is &lt;em&gt;not supposed to be full of bugs&lt;/em&gt;. While finding lots of bugs is fun and does wonders for a tester’s self-esteem, it gives a false impression of what it is we do when we do exploratory testing.&lt;/p&gt;

&lt;p&gt;The course is split into 17 chapters, where each chapter will have a supporting video, and the final chapters will describe the best testing we have done on the application. It isn’t a recipe for all applications, but an example to help you understand what coverage for this application may mean, and what parts of the work make sense to do when you are truly optimizing value. &lt;/p&gt;

&lt;p&gt;From the outline, you can see that we will be using a test automation tool (pytest) on this course. The choice of tool is irrelevant, but using one to give you concrete examples helps teach this content. Pytest allows for natural-language-like programming, more so with pytest-bdd, and can be useful for people new to test automation programming. &lt;/p&gt;
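
&lt;p&gt;For a taste of that readability, here is a minimal pytest example with plain functions and assert statements; shopping_total is a made-up function for illustration. &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Plain functions and bare asserts make the tests read almost like sentences.
def shopping_total(prices):
    return sum(prices)

def test_total_of_an_empty_cart_is_zero():
    assert shopping_total([]) == 0

def test_total_adds_all_prices():
    assert shopping_total([1.0, 2.5]) == 3.5
&lt;/code&gt;&lt;/pre&gt;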

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fln6upw4oaeqe7xczrvrw.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fln6upw4oaeqe7xczrvrw.JPG" alt="Course Outline, Condensed Version for Full Day Classroom"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The same course delivered in a one-day classroom format uses this four-part structure. We start with options for exploring (and do an exercise on documenting our ideas without use of the application), continue with addressing personal choices of constraints, allowing people to explore without automating, add automation as a constraint, teaching people how to automate with the Robot Framework Browser library, and conclude with addressing use of time over results, thinking through coverage. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wxn02bs3q78sx8xopuj.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wxn02bs3q78sx8xopuj.JPG" alt="Test Target and Our Options for Exploring"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this first chapter, we don’t yet get to use the application, but we get to see the application. Sometimes a good way to learn about how learning changes things is to try to think what you would do before you had more information. &lt;/p&gt;

&lt;p&gt;With exploratory testing, you need to appreciate learning. Learning shows up as your ideas changing, and changing ideas change your actions. You should notice when your ideas change. It is expected, welcomed, and makes you better. You can’t have the best answers available at the time you know the least. &lt;/p&gt;

&lt;p&gt;With an application you have never seen before, it is clear you &lt;em&gt;now&lt;/em&gt; know the least. It is harder to appreciate how that is true on your day-to-day job, with the same application with new changes coming in. Yet it is the same: at start you may know a lot, but you still know the least about that specific change and its impacts on the application compared to what you can know given proper time to explore it. &lt;/p&gt;

&lt;p&gt;Working in pairs or an ensemble on generating ideas, we find people do better with versatile ideas than on their own. Then again, having a lot of ideas before you know anything that would help you prioritize or target your testing isn't going to be the best use of your time. Recognize what you think is most pressing on your list of ideas. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3fxuxz9udwzgweevgsp.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3fxuxz9udwzgweevgsp.JPG" alt="The Target of Testing: E-Primer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The application we test on this Exploratory Testing Foundations course is called E-Primer. It’s a little application for people who want to check their English writing against e-prime – a way of writing English that avoids the “be” verb in all its forms. &lt;/p&gt;

&lt;p&gt;We chose this application because we were under the impression that it is not a target-rich application for testers. That is, it is not so full of bugs that you should consider it ridiculous. &lt;br&gt;
Having tested it, I know it has its share of issues. And to begin with, the version we styled for this course has one major issue that the original did not have, and we haven't fixed it (yet). &lt;/p&gt;

&lt;p&gt;We can figure out what the application is and what it does by using it. Also, the name of the application gives us a hint and allows us to research e-prime further, should we want to. &lt;/p&gt;
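
&lt;p&gt;As an illustration of the idea only - this is our guess at what an e-prime check involves, not E-Primer's actual implementation: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Guessed sketch of an e-prime check: flag any form of the "be" verb.
BE_FORMS = {"be", "is", "am", "are", "was", "were", "been", "being",
            "isn't", "aren't", "wasn't", "weren't"}

def find_be_verbs(text):
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return [word for word in words if word in BE_FORMS]

print(find_be_verbs("This sentence is not in e-prime."))  # prints ['is']
&lt;/code&gt;&lt;/pre&gt;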

&lt;p&gt;At this point, let's not yet go and use the application. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmplkxdc7n0si3kc1otut.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmplkxdc7n0si3kc1otut.JPG" alt="Stop and Think - Options for Exploring"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By the time we get to chapter 3: The Moment of First Impression, your first impression of just seeing the application is already gone. But the first impression of using it is still ahead of you. &lt;/p&gt;

&lt;p&gt;Before you move forward, stop to think. What would you do first, and what soon after you get started? If you make an inventory of ideas you have, what do you list? Try doing that. &lt;/p&gt;

&lt;p&gt;People testing applications come to the targets of testing with their biases. Our internal dialogue of being awesome, just as much as our internal dialogue of being bad, has an impact on our ability to look objectively at what we can do, which is why I encourage writing things down to support your own learning about yourself. Learning is about changing your mind, replacing something you thought you knew with something more accurate, and adding new knowledge on top of what you already knew. Pay attention to what your first instincts say about testing this one. &lt;/p&gt;

&lt;p&gt;This careful listing of your starting point supports your learning, and we would not ask you to do this with every application you ever test. But it is an option you can start with, an option we enforce here for learning purposes even if it doesn’t help with optimizing the value of testing the application at hand. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjq2kf23w9zos4sp3rcdl.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjq2kf23w9zos4sp3rcdl.JPG" alt="Options for Exploring"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that you have created an outline of what you would do without doing anything but digging into your past lessons, it’s time to discuss your options for getting started with a new application. &lt;/p&gt;

&lt;p&gt;Time you spend away from the application to learn about it is time when you don’t get to know what the application does. &lt;/p&gt;

&lt;p&gt;Time you spend wandering without a purpose in an application could be a way of learning about its purpose, but there may also be more effective ways to learn about it. &lt;/p&gt;

&lt;p&gt;Balancing your options and creating new options as you go is at the heart of exploratory testing. &lt;/p&gt;

&lt;p&gt;If you can research the domain in a way where you continuously test and learn, you are simultaneously learning and contributing. &lt;/p&gt;

&lt;p&gt;If you start using the application, take control over what you do. Think of what is included, and particularly what isn't included.&lt;/p&gt;

&lt;p&gt;Some people will start with automation first. With many sessions of testing this application, we have come to understand that automation both enables and limits us. We see some types of problems while becoming more blind to others. The same happens when we start with use of the application first. No matter what constraint we choose to start with, it is our right as exploratory testers to make that choice for ourselves. &lt;/p&gt;
&lt;h2&gt;Self-Management in Exploratory Testing&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzqtbtd8gi4zy9k03ss9.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzqtbtd8gi4zy9k03ss9.JPG" alt="Self-Management Basics and Setting Yourself Constraints"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have options for exploring, and your task in exploratory testing is to adjust your intent, charter and constraints on a cycle that enables &lt;em&gt;you&lt;/em&gt; to keep up and do the best possible testing you can. Exploratory testing centers the tester, so you don’t have someone from the outside telling you what detail to verify; the control is with you. &lt;/p&gt;

&lt;p&gt;Some people tell us that the freedom frustrates them and makes it hard for them to start. They don’t have to have this freedom; they are free to set themselves into a box that enables them. They are also free to let themselves out of the box they created, when they discover that to be useful. &lt;/p&gt;

&lt;p&gt;In this chapter, we introduce three concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Charters &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Constraints&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multi-dimensional thinking for intent and learning&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysmaawidujv7fqm8rolc.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysmaawidujv7fqm8rolc.JPG" alt="Charters"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Charters are a key concept in how the community talks about framing your exploratory testing. You can think of a charter as a box that helps you focus and generate ideas, but also assess when you think you are done with a particular idea. &lt;/p&gt;

&lt;p&gt;We like to think of charters as free form test cases. &lt;/p&gt;

&lt;p&gt;A charter could be a chapter from your design specification you want to understand empirically with the application. &lt;/p&gt;

&lt;p&gt;A charter could remind you of a non-functional perspective, like accessibility: can people with disabilities – permanent, temporary or situational – use the application? They are an important group of stakeholders. &lt;/p&gt;

&lt;p&gt;A charter could be a very traditional step-by-step test case, with your promise of stretching every single step to both its intended path and the paths it inspires. &lt;/p&gt;

&lt;p&gt;A charter is a structure for &lt;em&gt;thinking like this&lt;/em&gt;, not &lt;em&gt;passing work along like this&lt;/em&gt;. Some people use charters to share work in a team, and our advice is not to do that &lt;em&gt;unless&lt;/em&gt; you co-created the charters in the first place. As soon as you remove the tester designing, executing, and learning intertwined, and replace it with one tester designing and another executing, you lose a core feature of what makes testing exploratory and shorten the leash the tester learns on. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/testobsessed" rel="noopener noreferrer"&gt;Elisabeth Hendrickson&lt;/a&gt; introduced a charter template in her book Explore It, and &lt;a href="https://twitter.com/ezagroba" rel="noopener noreferrer"&gt;Elizabeth Zagroba&lt;/a&gt; introduced an adaptation of it in one of her presentations. We like the concise template a lot but encourage you to think in terms of charters being anything that can box your testing and help you maintain focus, rather than follow a format. &lt;/p&gt;

&lt;p&gt;We advise against using charters for passing work along unless the work distribution is a shared endeavor between people. When we pass them along, the process starts to resemble traditional test cases where someone follows another's lead. This might be a temporary structure you try when you have new testers, but our advice on teaching new testers is to pair with them rather than pass information through documentation they don't yet understand how to stretch. &lt;/p&gt;

&lt;p&gt;Growing a new tester in exploratory testing, we often see a pattern of first looking at an application as something that has little to test. Pairing and sharing dimensions in an actionable way transforms the tester, and with the tester, the results the tester is able to deliver. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9ll1wgsz3mofh2tjsbh.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9ll1wgsz3mofh2tjsbh.JPG" alt="Choose Your Own Constraint"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We prefer framing exploratory testing with constraints, which is one of the options for the resources part of the charter template. The way we look at it, we have limited bandwidth for doing many things at once, and to frame our work effectively we need to deliberately exclude perspectives down to a level where we can cope with the perspectives we have decided to focus on. &lt;/p&gt;

&lt;p&gt;Like, you could say you are deliberately not writing notes at all to get through as many different scenarios of using the application as you can think of. Or, you could say you will go through a document describing correct behavior in detail, ensuring not to miss important claims. You could say you will get through 100 scenarios around the application quickly, or five in a lot of detail in the same timeframe – a day of testing. You most likely cannot do it all at once but need multiple passes with the application, with seemingly the same tests but different ideas of what to do and pay attention to. &lt;/p&gt;

&lt;p&gt;If you try to do everything at once, you can’t get any of it done. So we get to choose our constraint, with our primary heuristic being to never be bored. &lt;/p&gt;

&lt;p&gt;Testing is a lot of fun. Finding out information that others don’t know is investigative work, and servicing many stakeholders is an intellectual challenge. No matter what changed, there is something new we now need to figure out, in a way that optimizes the value of our work. Doing the exact same things isn’t optimizing the value of our work, so you start with the idea of always varying things to not be bored. If you find yourself bored, you need a new constraint that challenges you. &lt;/p&gt;

&lt;p&gt;Some folks take automation as the challenge that keeps them from being bored. Others find motivation in the multidimensional work of all the ways we could use the application to see new problems. Some people say that automating helps them have time for exploring, and they mean that seeing simple problems isn't a good use of anyone's time. Covering ground with the application under test in a repeatable way is the grunt work of automation. However, automating at the level of unit tests might be our best bet for catching unintended simple bugs, and we like framing freeform exploratory testing and unit tests together in keeping things interesting! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfzn3exq8f4m4bdo5nll.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfzn3exq8f4m4bdo5nll.JPG" alt="Explore with Intent"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Exploring is wandering with a purpose – without purpose, you are lost. A high-level purpose of spending time with the application or finding information is a little too generic. A specific purpose – intent – looks at the next step, the next timebox and the next theme, balancing serendipity (lucky accidents) and coverage (making accidents likely). &lt;/p&gt;

&lt;p&gt;When we practice being intentional, learning to structure our multi-dimensional thinking can be helpful. When you imagine and fill the next slot of testing in your schedule, be it the next hour or the next day, we find this matrix helps keep track of our intent and learning. &lt;/p&gt;

&lt;p&gt;Think of this as an empty A4 paper for the next piece of testing you are about to perform. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mission&lt;/strong&gt; includes a statement of why you exist in the organization, what is expected of you. There might be a role-based constraint applied to you, such as being a test automation specialist, but that does not stop you from doing other things; it merely reminds you that if there is a priority call, yours might go in this direction. Like a sandbox you are responsible for, but with sides so low you can easily cross them when you are excited, this corner reminds you of where you anchor your purpose on a larger scale. We use missions with large applications, agreeing on who works with each general area of the application. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Charter&lt;/strong&gt; talks about something shorter-term, like a personal promise: “today I will test with 100 different inputs on one browser”, implicitly saying you will do that rather than test, for example, 10 inputs on 3 different browsers. It’s your intent you are framing, so it is your right to change the framing at any time. Let us emphasize the importance of &lt;em&gt;any time&lt;/em&gt;. As you learn, you don't have to stick to your promises to yourself. The charter only helps you stay honest about what your idea was to begin with and avoid the "I tested for a day so this must now be tested" thinking pitfall, where you unintentionally lower the coverage bar. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Details&lt;/strong&gt; makes space for things you note at the level of details while you test. Typically, we note things that are bugs (#), things that are questions (?), and things that are ideas (x) to document for the future in whatever test material we leave behind. Knowing your pattern of details is useful. We have come to appreciate that we know half the answers to our questions when we stop to think about them, and many things we consider bugs at first are not important enough to raise once we acquire more knowledge of the domain, the application and other available information. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Other charters&lt;/strong&gt; is your placeholder for things you’d like to do but won’t do now, even though they popped up in your head under the influence of the application as your external imagination. You choose to park them here and leave them for later. Following your every whim is your choice, but if you do, it will not help you get done the work you first had in mind. &lt;/p&gt;

&lt;p&gt;When you find a bug, you can choose to write it down quickly and continue. Or you can choose to isolate it properly and log it. Your choice. We find it takes anywhere from 10 minutes to 2 days to properly isolate and report a bug, and we advise pairing with developers on fixes over reporting bugs in tools, if the development team can come to support that way of working. &lt;/p&gt;

&lt;p&gt;When you find a question, you can choose to ask it right away. Or you can choose to collect questions for later, seeing if other things you learn while testing will provide you an answer. &lt;/p&gt;

&lt;p&gt;When you find something to document, you can write it down in the best possible format immediately. Or you can write a note appreciating it as something worth documenting and take time later. &lt;/p&gt;

&lt;p&gt;You can also completely skip a bug, a question or the documentation. The more thoroughly you process them, the more time they take from what your intent for this testing was.&lt;/p&gt;
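
&lt;p&gt;Pulling the notation together, a hypothetical page of session notes for one slot of testing might look like this – the content below is invented purely for illustration: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Charter: today I will test with 100 different inputs on one browser
# word count treats line feeds differently than spaces
? does the count include numbers as words?
x demo phrase worth keeping in the notes we leave behind
Other charters: try the same inputs on a second browser
&lt;/code&gt;&lt;/pre&gt;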

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gcweekslj895pdbcvqb.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gcweekslj895pdbcvqb.JPG" alt="Stop to Think - Charters, Constraints, Intent"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You are approaching the moment of first impression. You usually want to see it work before you see it fail, or you won’t understand and appreciate the way it communicates – or fails to communicate – the failure properly. What is the first thing you will do? Both as a high-level idea, and as a specific one – what would you try first and why? Would you frame your start as a charter, a constraint or an intent? Or maybe you have your own style of framing your testing that we did not talk about here? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyxw2wssnfna0mc831ul.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyxw2wssnfna0mc831ul.JPG" alt="The Moment of First Impression"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We already saw the application, even if we did not yet use it. One more thing to note before heading in for the test: you are only new with an application once. Even if it is the time when you know the least, let yourself listen to how you feel about the application. Your joy could be a user’s joy. Your confusion could be a user’s confusion. Your mistakes are most likely some user’s mistakes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxoltn3ziw0ss78l9mdu.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxoltn3ziw0ss78l9mdu.JPG" alt="Options Expire"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When making choices about what we do first and what we do after, it is good to be aware that while we have an endless selection of options, some options expire. &lt;/p&gt;

&lt;p&gt;You can only have a first experience unshaped by documentation or a personal demo before those have happened for you. Even if you can’t have the first impression of working closely with the feature yourself, you can always borrow someone else’s experience through pairing or watching them use the application. &lt;/p&gt;

&lt;p&gt;At the time of first impression, you will want to listen to your feelings about the application more carefully, without yet jumping to conclusions about what is an important problem and what isn't. Notice now, prioritize later when you have context. &lt;/p&gt;

&lt;p&gt;While many of our options don’t expire and we can do them in whichever order, it takes an exceeding amount of energy to remain curious about new information when you have already tried many things. I find that we often give up and stop testing too soon! In the words of the famous Albert Einstein: “It’s not that I’m so smart, I just stay with the problems longer.” Given time in use, software has the habit of revealing issues it has always had but that we either could not see or appreciate earlier. &lt;/p&gt;

&lt;p&gt;On this course we propose you have multiple rounds with the very same application with a different constraint:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Test it with first impression&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test it with focus on domain knowledge and documentation, aiming to be systematic in covering the spec&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test it with focus on functionality - both code and UI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test it with focus on data - what should work, what should not, and how those could surprise us&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test it with focus on what it runs on, the environment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test it with mindmap as documentation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test it with traditional test cases within test automation tooling, without automating&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test it with running automation you create to do testing for you&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The same application and eight rounds. You can put these in any order and your experience with the testing of the application will be different. Only the first impression is something that expires by its nature.&lt;/p&gt;

&lt;p&gt;The others expire if you cannot maintain focus and interest, and start believing you have already found everything relevant or come to the conclusion that the application isn't worth this much effort. &lt;/p&gt;

&lt;p&gt;When working close to a deadline, options also expire on what feedback is welcome. In the days leading into a major release, issues considered major months before can be prioritized down. The timing of feedback matters. &lt;/p&gt;

&lt;p&gt;You usually stop testing before you have exhausted all your options. Knowing your options helps you be intentional about it. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4sjl2b8ztj09em2rzk4.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4sjl2b8ztj09em2rzk4.JPG" alt="Let's Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With all this said, it is our time to test. The application is available at &lt;a href="https://www.exploratorytestingacademy.com/app/" rel="noopener noreferrer"&gt;https://www.exploratorytestingacademy.com/app/&lt;/a&gt; and eviltester’s original at &lt;a href="https://eviltester.github.io/TestingApp/apps/eprimer/eprimer.html" rel="noopener noreferrer"&gt;https://eviltester.github.io/TestingApp/apps/eprimer/eprimer.html&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Pay attention to where you start from, how you learn, and how you decide on your next steps. After every single thing you execute, stop and ask yourself: what did you learn? &lt;/p&gt;

&lt;p&gt;Each of the following chapters also gives you one constraint that is applicable to this application at hand and explains where the constraint leads you. You can do those in any order, even in combination with the first impression. &lt;/p&gt;

&lt;p&gt;Spend 15 minutes exploring the application alone, or 30 minutes in an ensemble. In an ensemble, pay attention to how you continue from what the previous person was navigating toward, and to the emerging intent that changes direction. It is hard to follow through on a larger idea with a group in the beginning. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F224whfpc8r66fsphkxnx.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F224whfpc8r66fsphkxnx.JPG" alt="It has Bugs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have led the first-impression session of this course with different constraints and different people, and have come to appreciate typical patterns in group work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Given automation from the start, we find at most one problem but we get coverage of basic functionality up significantly&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Given freedom without constraint, results vary greatly depending on the tester's skill in testing. Relevance of results is often weak. No documentation ends up being written, not even notes of bugs. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Given domain documentation, the understanding of relevance of results is better. Approach to slicing documentation varies a lot. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Given a mindmap requirement and a functional constraint, people create clearer plans and document bugs in a quick way.  &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your first impression and learning of the application will most likely not take you this far. This mindmap is from an hour of pair testing with a tester who found none of these problems alone but needed training on how to look at an application to identify what might go wrong with it. The mindmap is by no means an exhaustive list of things that are off with the application. We’ll talk about bugs in more detail when we get to the part about coverage – because the most relevant coverage is bug coverage. The challenge is, we can only assess bug coverage in hindsight. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjui90sbr2u05r2cm0wl.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjui90sbr2u05r2cm0wl.JPG" alt="Bugs Are Conversation Starters"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We would hope first impressions are not about bugs, defects and change requests for the users of our applications. Testers, though, get to bugs on the very first thing they do with an application, drawing from past experiences to select a way of using the application that reveals a bug. A tester fast-forwards a user’s year into the hours they spend testing – or better yet, multiple users’ years. &lt;/p&gt;

&lt;p&gt;We like to think of bugs as conversation starters. Sometimes we need to avoid the word defect and reserve it only for cases where it is clear that the behavior of the application goes against something we explicitly agreed on in a specification. Defects come with the burden of guarantee, whereas change requests have a different tone to them. Guarantee implies that it should be fixed at the cost of the software development organization. Change request implies the cost is separately invoiced. Thinking in terms of whose money the fix requires can be part of a tester's work. In some organizations this categorization has less impact than what the feedback means for user and customer satisfaction. &lt;/p&gt;

&lt;p&gt;In many teams, we have called bugs “undone work” to remove the judgement from the conversation – it is simply undone work we propose we still might want to do. &lt;/p&gt;

&lt;p&gt;Bugs are things that might annoy a user or any stakeholder, and unless someone starts a conversation on them while still in development, the conversation starts only when a user starts it. These conversations turn things we didn’t know into something we can be aware of. And for many of them, awareness enables us to make changes that remove the bugs. &lt;/p&gt;

&lt;p&gt;You can also start conversations on the good things you see, the absence of bugs that surprises you, or just the things you find that make our users more awesome with our application. Having that empirical touch on what we have built gives you a perspective of use that serves as a conversation starter. Identify something good, and find ways of getting more of that kind of good. &lt;/p&gt;
&lt;h2&gt;
  
  
  More Specific Constraints
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdhx3goby8wdr8xixxso.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdhx3goby8wdr8xixxso.JPG" alt="Recognizing and Learning a Domain"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s move to discussing domain, requirements and specifications. The information about what the application is and does comes to us in many forms. &lt;/p&gt;

&lt;p&gt;Domain is the problem-solution space of the application. The application exists for a purpose. It has patterns of expectations that are specific to it and other applications in the same domain. A domain typically has domain specific concepts. &lt;/p&gt;

&lt;p&gt;Sometimes we know the domain just through our life experience – as with most editors. We know what a text editor does. While a text editor has specific functions, we can figure those out since we too have edited text before. &lt;/p&gt;

&lt;p&gt;Sometimes we know very little of the domain and need to learn it as we test. Learning a domain effectively and becoming a domain expert over time is what we would expect from someone testing applications in a particular domain for a longer period. Knowing the domain, or learning about it until we do, is what separates assuming something could be right from understanding whether it is right. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwoqwbyggdslkwvrju07.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwoqwbyggdslkwvrju07.JPG" alt="Conference, Reference, Inference"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To learn a domain is to acquire information about the problem-solution space of the application, and usual expectations of it. &lt;/p&gt;

&lt;p&gt;You have three main routes to it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Conference&lt;/strong&gt; is about asking around. Talk to anyone you need to. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reference&lt;/strong&gt; is about getting to an authoritative document. It may be given to you directly, or you may need to search to find it. Sometimes references disagree, and you get to settle those disagreements while you are testing, through talking with people. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Inference&lt;/strong&gt; is about applying other knowledge you have access to on this domain, expecting similarities or differences. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You may have a requirements specification. You may have a functional specification. You may have a user interface specification. You may have an architecture specification. No matter what you have, it is not all you will need. And whether you have none, some of these, or some others, we advise thinking of documentation as useful but also as a reflection of the things we already think we know. Exploratory testing begins with what we know and seeks to learn what we don’t know yet.&lt;/p&gt;

&lt;p&gt;When you have no documentation to refer to, you still have people and your past experiences. You can document relevant lessons and claims as you test them. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fip7e9t209aemyox7n5oh.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fip7e9t209aemyox7n5oh.JPG" alt="E-Primer the App"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With E-Primer, we don’t have a specification. We have a Wikipedia description of the domain that gives us a decent perspective into what the application is designed for, without knowing any of its intentional limitations. &lt;/p&gt;

&lt;p&gt;We can search online for any information about e-prime we consider useful and educate ourselves on it. &lt;/p&gt;

&lt;p&gt;With the specification, we can find phrases to test with that showcase the application’s functionality. The good demo phrase for E-Primer – "To be or not to be is Hamlet's dilemma" – is a result of testing, not the first idea for using the application even based on its Wikipedia description. We can best get good demo examples by asking the developers how they showcase the functionality. The phrase illustrates how you can get to seeing both Discouraged words and Possible violations, while counting words correctly. &lt;/p&gt;

&lt;p&gt;We have not talked to the application's developer, Alan Richardson aka eviltester, to understand why he chose to separate these two concepts. A working theory from exploration is that Possible violations are algorithmically harder to separate and require human assessment of whether a word is a possessive or a short form of "is". &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ahu9nl7jqgq7wzg7y1s.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ahu9nl7jqgq7wzg7y1s.JPG" alt="Source Code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this specific application, we can find the source code. It can be viewed right in the browser under Developer tools / Sources. &lt;/p&gt;

&lt;p&gt;It is also available on GitHub at &lt;a href="https://github.com/exploratory-testing-academy/ETF/blob/master/app/eprime.js" rel="noopener noreferrer"&gt;https://github.com/exploratory-testing-academy/ETF/blob/master/app/eprime.js&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Looking at the code gives us a direct chance to compare what it can do with what we would expect it to do based on the Wikipedia page. Just by reviewing the list of words that get marked as discouraged, we could identify that the words “you're, we're, they’re”, which should be marked, won’t be, as they are missing from the list. &lt;/p&gt;
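
&lt;p&gt;As a minimal sketch of this kind of review – the word list below is an invented stand-in, not the actual contents of eprime.js – we can diff the list against the contractions the Wikipedia page tells us belong to "to be": &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A sketch only: DISCOURAGED stands in for the word list read from eprime.js.
DISCOURAGED = {"be", "being", "been", "am", "is", "isn't", "are", "aren't",
               "was", "wasn't", "were", "weren't"}

# Contractions of "to be" the Wikipedia description leads us to expect:
EXPECTED_CONTRACTIONS = {"you're", "we're", "they're"}

missing = sorted(EXPECTED_CONTRACTIONS - DISCOURAGED)
print("Expected but missing from the list:", missing)
&lt;/code&gt;&lt;/pre&gt;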

&lt;p&gt;Programs do what they are told. You don’t have to know how to read all of the code to read enough code to make sense of the existing logic. &lt;/p&gt;

&lt;p&gt;As with reading and writing English, the two are connected but not the same. We can read great novels yet not write one ourselves. We recommend all testers read code, at least at the level of what is included and changed. Commits and pull request reviews help us understand the scope of the changes we are testing. Code in version control has one feature that is very useful for testing: nothing changes without someone changing it, and we can watch it change. The same isn't always true for the environment the application relies on, or our customer's expectations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzz54yu3mw2t8gsyj1efy.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzz54yu3mw2t8gsyj1efy.JPG" alt="Let's Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is time to test. Try out how testing flows when focusing on domain knowledge. What do you think of the coverage of what you have tested so far? Eventually, we care about you not missing relevant information others don’t yet have and thus can’t specifically ask of you! Try to make sure you cover the claims the Wikipedia page includes, or that you can explain what percentage of coverage against it you think you have. &lt;/p&gt;

&lt;p&gt;Take 15 minutes alone, or 30 minutes as an ensemble. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfqsmjk2qtl85razoyf1.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfqsmjk2qtl85razoyf1.JPG" alt="Learning of Domain of E-Prime"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The combination of reading and using the application means a significant portion of your time went into making sense of the Wikipedia text and finding other relevant references. You most likely learned something.&lt;/p&gt;

&lt;p&gt;We expect you are now more comfortable with the application and have an idea of what it does. &lt;/p&gt;

&lt;p&gt;You should now know that "Possible violations" isn't a concept you can find on that Wikipedia page or through a simple online search. The written references on it are not particularly helpful, and there is no documentation about it in the application. &lt;/p&gt;

&lt;p&gt;People most commonly come to understand “Possible violations” as a side effect. We see three typical routes to finding it. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Using the demo phrase this course has – given the demo phrase, people miss how hard it is to figure out when something should &lt;em&gt;not&lt;/em&gt; be marked&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reading the code – figuring out from the code that “’s” is the search criterion for counting things as possible violations, and then understanding that possessives and the “be”-verb appear so often in hard-to-distinguish forms that this looks like a deliberate design choice, signaling a need for user intervention over a programmatic algorithm doing all the work (see the sketch after this list)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using large data samples – browsing through large data samples, like copy-pasting the whole Wikipedia page text at once in the application and browsing for blue&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
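
&lt;p&gt;A minimal sketch of route two's reading – our reconstruction for illustration, not the actual code in eprime.js: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import re

# Assumption from reading the code: any token ending in 's gets counted
# as a possible violation, since the program cannot tell a possessive
# from a contraction of "is" without human judgement.
def possible_violations(text):
    return re.findall(r"\w+'s", text)

print(possible_violations("To be or not to be is Hamlet's dilemma"))
# prints ["Hamlet's"] - note a typesetter apostrophe would not match here
&lt;/code&gt;&lt;/pre&gt;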

&lt;p&gt;You learn a domain by asking questions and paying attention to the answers the application gives you when you test it. Not all your questions have an answer, but they start conversations. You get to consider how far you take them. What helps you optimize the value of your work when testing? &lt;/p&gt;

&lt;p&gt;To get to some of the samples the Wikipedia page leads you to, you have to go through a number of tools. For example, a great reference for E-Prime, where there should be a low number of things to detect, is the Bible written in E-Prime, available as PDFs. To get from PDF to text, you will need to find an online tool for that purpose; direct copy-paste messes up the structure of your data. &lt;/p&gt;

&lt;p&gt;The text on the Wikipedia page can also lead you into thinking about comparable products you could use to understand testing this one better. The claims from the specification are not only about the words it would recognize but also about the benefits of using it in the first place: clarity of thinking, and psychological effects of writing this way leading to e.g. objective expression of feelings. &lt;/p&gt;

&lt;p&gt;The more you read before using the application, the higher your expectations for it become. A first pass is most likely selective reading, and if we really cared about the claims, isolating and testing them would require significant effort. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvv60utthwdm0xvpng48.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvv60utthwdm0xvpng48.JPG" alt="Recognizing Functionality"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have an idea of the domain, let’s look at the functionality as a constraint. &lt;/p&gt;

&lt;p&gt;Different applications are built on different technologies. As someone testing an application, getting to know the technology is something we would expect. Not expert-level knowledge, but at least a basic-level curiosity turning into expert-level knowing over time, question after question. &lt;/p&gt;

&lt;p&gt;A lot of testers, given a picture of the application, start suggesting SQL into the text box or want to use Developer tools to watch network traffic. They find themselves puzzled by a JavaScript application that runs in the browser after you first download it, and their backend-related test ideas don't take them far. The application keeps working even if you disconnect from the network, as long as you don’t try to refresh from the download location. &lt;/p&gt;

&lt;p&gt;Let’s discuss functionality as constraint a little more. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7stcryijut3qz0peldaf.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7stcryijut3qz0peldaf.JPG" alt="Naming of Function"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For functionality – features – we find a good approach is to see function at different scales and name it. Naming helps us think about the coverage of each named function separately. &lt;/p&gt;

&lt;p&gt;The code has functions the programmer has given names to. It is structured as functions the programmer calls to get their intended results overall. We could use a unit testing framework to explore the functions of the code, and at the scale we can, we probably should. Unit tests are great because they deliver feedback at the level of a developer making breaking changes. If unit tests document (in automation) the developer's intent, they also remind us of it by turning red when run after a breaking change. &lt;/p&gt;
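
&lt;p&gt;As a sketch of what exploring a function with unit tests could look like – the real functions live in eprime.js and would be tested in JavaScript against the developer's own names, so count_words here is a hypothetical stand-in: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical stand-in for a word-counting function under exploration.
def count_words(text):
    return len(text.split())

# Each test documents one expectation; a breaking change turns it red.
def test_counts_space_separated_words():
    assert count_words("to be or not to be") == 6

def test_empty_text_has_zero_words():
    assert count_words("") == 0

def test_hyphenated_text_raises_a_question():
    # Exploration in test form: is a hyphenated pair one word or two?
    assert count_words("well-formulated e-prime") == 2
&lt;/code&gt;&lt;/pre&gt;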

&lt;p&gt;The application user interface has functions we would probably name ourselves. Some of those functions come from what the programmer explicitly introduces, some come from the fact that it runs in a browser. &lt;/p&gt;

&lt;p&gt;We can name what we see. We can name what we expect to see. And we can compare those. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpcictpszsrqag26kzgk.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpcictpszsrqag26kzgk.JPG" alt="Let's Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Given the constraint of function, what would you test? You could start with the code, but we suggest you start with the user interface. Give labels to the functionality you see the application have, and explore each function for information about it. &lt;/p&gt;

&lt;p&gt;It counts words - what rules apply to counting words?&lt;/p&gt;

&lt;p&gt;It recognizes e-prime in text - what rules apply to e-prime in text? &lt;/p&gt;

&lt;p&gt;It color-codes e-prime - how could we know it marks the right words? &lt;/p&gt;

&lt;p&gt;It runs on a browser - what functionality does browser introduce for it? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fna1zornns299hfclv0yq.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fna1zornns299hfclv0yq.JPG" alt="Learning of Function of E-Primer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Usually when we use the constraint of function after using the constraint of domain, we find there is either overlap or challenges in naming things we did not already see. Many times, we find we need to point at a function directly and name it to make it visible. &lt;/p&gt;

&lt;p&gt;With this application, there are a few functions that are not obvious:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Text area for output text has a size limit for width of the grey background&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Text box is resizable, and so is the page and browser window&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Anything running in browser has a connection to browser settings&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The E-prime recognition algorithm is core to the problem and feels to be on the weaker side&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Between our version of the application and eviltester’s version of the application, we have lost scroll bars – a function really relevant when dealing with larger chunks of text&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can name functions in many ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Inputs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Outputs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Containers &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Presentation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Browser&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Algorithm&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Categories of function serve as idea generators. We are sure there is more to the browser than we have listed here. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6b6alrmkmwk4rtozftq.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6b6alrmkmwk4rtozftq.JPG" alt="Recognizing Data"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next we focus on the constraint of data. Many applications process data, and this application is no different. Sometimes the data is visible to us as inputs and outputs we observe. &lt;/p&gt;

&lt;p&gt;Sometimes, the data comes from the application based on our input. &lt;/p&gt;

&lt;p&gt;To deal with data, we have some of our most well-known testing techniques:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Equivalence class partitioning&lt;/strong&gt; is the idea that we try to minimize the data we use in testing by choosing one value for each class of risk. We do this because covering a lot of data is time-consuming and we need to optimize the value of testing. However, what constitutes a relevant risk is a long conversation and very application-dependent, and anyone seriously applying this technique should look again into automation and risks. For the purposes of exploratory testing, think of it as an idea saying you want to try things you can imagine could be different. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Boundary value analysis&lt;/strong&gt; asks you to focus on where behaviors change. If something is allowed until it isn't, the moment where things change is relevant. It is also more likely to be off by one, or vulnerable to problems when combined with another functionality. We suggest thinking of this technique as the Goldilocks rule: try too small, too big and just right to really understand whether what you are testing works as you would expect (see the sketch after this list). &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
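
&lt;p&gt;A minimal sketch of both techniques against the text box – the 1000-character limit here is an invented assumption purely for illustration, not a documented limit of E-Primer: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Equivalence classes: one representative per imagined class of risk.
equivalence_classes = {
    "empty input": "",
    "plain e-prime prose": "The cat sat on the mat.",
    "text full of be-verbs": "To be or not to be is the question.",
    "non-ascii text": "Hyvää päivää, sanoi käyttäjä.",
}

# Boundary values: the Goldilocks rule around an assumed 1000-char limit.
LIMIT = 1000
boundary_values = {
    "just under": "a" * (LIMIT - 1),
    "exactly at": "a" * LIMIT,
    "just over": "a" * (LIMIT + 1),
}
&lt;/code&gt;&lt;/pre&gt;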

&lt;p&gt;These techniques for looking at data can and should be applied to both inputs and outputs, together and separately. We want to tease out different situations with different data. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumcniyal0ki4u5np1oho.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumcniyal0ki4u5np1oho.JPG" alt="Data or Variables"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For every function you could find in the previous chapter, you can extend it by varying it. If you had a button to click, you could click it directly, move focus to it and press enter, try pressing enter when focus is on the text box and see what happens. &lt;/p&gt;

&lt;p&gt;The most lucrative functions with regards to data are the ones where the function of the application changes in a relevant way. In the application under test, anything you put into the text field gives you a lot of options. &lt;/p&gt;

&lt;p&gt;Using function with one piece of data lets you know the function exists and can work. Combining the function with data and variation, you learn about reliability of the function. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhr9y0km5iznb7om6i359.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhr9y0km5iznb7om6i359.JPG" alt="Versatile Data"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We often find that testers come equipped with experiences of what type of data often fails and use their past experiences with the application. We have learned that when data has a lifecycle, something can go wrong in different stages especially when we mix them up. We create, read, update, and delete data, either completely or partially. &lt;/p&gt;

&lt;p&gt;Similarly, we know whole collections of typically problematic data, like the GitHub Naughty Strings list at &lt;a href="https://github.com/minimaxir/big-list-of-naughty-strings/blob/master/blns.txt" rel="noopener noreferrer"&gt;https://github.com/minimaxir/big-list-of-naughty-strings/blob/master/blns.txt&lt;/a&gt;. &lt;/p&gt;
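
&lt;p&gt;A sketch of feeding that list into the text box with browser automation – the selectors and the single-button assumption are ours, since we haven't documented the page's actual element names here: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import urllib.request
from playwright.sync_api import sync_playwright

BLNS = ("https://raw.githubusercontent.com/minimaxir/"
        "big-list-of-naughty-strings/master/blns.txt")
lines = urllib.request.urlopen(BLNS).read().decode("utf-8").splitlines()
samples = [s for s in lines if s and not s.startswith("#")]

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    # Surface errors the page logs while we feed it hostile input.
    page.on("console", lambda msg: print(msg.type, msg.text))
    page.goto("https://www.exploratorytestingacademy.com/app/")
    for s in samples[:50]:
        page.fill("textarea", s)   # assumption: the input is a textarea
        page.click("button")       # assumption: one button runs the check
    browser.close()
&lt;/code&gt;&lt;/pre&gt;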

&lt;p&gt;We also may remember from past experiences easy ways of producing garbage text like opening a gif-picture in a text editor to copy from. &lt;/p&gt;

&lt;p&gt;A dynamic we often observe when working with data is the challenge of keeping track of data coverage. If you have a long list of things you could try, some applications allow you to try it all at once, but with a wall of text it is harder to pay attention to the details of what is unexpected. Choosing data one type at a time can instead lead us to forget what we have and have not covered. &lt;/p&gt;

&lt;p&gt;Similarly, we have tools that help us get to these types of ideas, like Bug Magnet, a Chrome extension that allows injecting values from various categories into a web user interface. &lt;/p&gt;

&lt;p&gt;A wide idea of data often brings out the fun in testing. However, we need to stop and think whether the values we are trying will produce information the team finds relevant. If you start off reporting that an application accepts weird inputs without doing anything particularly bad with them, it will most likely read as you wasting everyone's time. If the application crashes, there is usually a connection to security that makes it more relevant.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj2z9u8pvz49jx6szc2s.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj2z9u8pvz49jx6szc2s.JPG" alt="Let's Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Given the constraint of data, it is now time to find the things that could be different. You find data to vary in the easy places, like input values to a text field, but data is equally there in getting to the application's output values. What can you vary to discover the problems the application holds? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9rzmey074jsxuix2nsb.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9rzmey074jsxuix2nsb.JPG" alt="Learning of Data of E-Primer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We find that with this constraint, people find new bugs they were not previously aware of. We've collected some of the most typical ones here. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;anything but space as a word delimiter – the application only recognizes space as a word delimiter and messes up the count of words when we use anything else, including the hyphen and the line feed (see the sketch after this list). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;apostrophes – there are two types, and the application only knows one; even we did not expect to learn about the differences between typesetter and typewriter apostrophes. Similarly, put apostrophes right around a forbidden word, and it no longer gets recognized: 'be' &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;long text – varying the length of text is quite a basic variation of the Goldilocks rule, and finds a bug that reproduces on the course version of the application but not on the one Eviltester created. It seems that styling into a new page introduced functional problems that were unexpected until discovered. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;e-prime – recognizing well-formulated e-prime and its violations at a relevant scale: not just individual words but longer bodies of text. The specification page has a link to the e-prime Bible, as PDF, that you can transform to text and use as a great revealing source for data-related issues. To test these, you either fix the bug in the application by creating a local copy, or use the Eviltester version. Yes, testers can fix bugs, and wasting time because "we shouldn't" makes little sense for a simple fix. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;lucky selection – from e-prime examples, we have observed people choosing both lucky and unlucky subsets. An unlucky subset is one where all the picked samples work as we would expect, while the ones we chose to leave out we learn in another session to be broken. Taking a systematic approach to data matters. Large samples make focusing on verifying correctness harder, but also allow for serendipitous discovery of samples we could not identify by thinking. Copying the Wikipedia page in its entirety is a good example of this.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
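
&lt;p&gt;To make the first finding concrete, a sketch of the delimiter problem – count_words again stands in for whatever the real code does: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Stand-in for a counter that only treats the space character as a delimiter.
def count_words(text):
    return len([w for w in text.split(" ") if w])

print(count_words("to be or not to be"))      # 6, as a user would expect
print(count_words("well-formulated e-prime")) # 2 - hyphens don't split words
print(count_words("to be\nor not"))           # 3 - the line feed hides a word
&lt;/code&gt;&lt;/pre&gt;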

&lt;p&gt;Note: To fix the bug, you need to fix the CSS (stylesheet). Change "position: fixed;" to "position: relative;" – a one-line, googleable fix. Sometimes fixing the bug yourself is faster than writing the bug report. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foq3ezi3x4ta5266g1h7g.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foq3ezi3x4ta5266g1h7g.JPG" alt="Recognizing Application and Execution Environment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next we look at a constraint I find core to the idea of &lt;em&gt;system testing&lt;/em&gt;. Even when we work in teams creating our team's components and code, testing does not exist merely to test the code we have created. It does not comfort our users much if, when they tell us that the function they were trying to use does not work, we explain that this is in fact because the Microsoft operating system does not work correctly and we assumed it does. &lt;/p&gt;

&lt;p&gt;We find that often we need to constrain our attention to the operating environment of our application specifically to pay attention to things we must have interoperability with. &lt;/p&gt;

&lt;p&gt;For a web application, different browsers and even browser versions are low-hanging fruit for making choices about where and how we test. Yet we often forget built-in functionality in the browser, such as settings the user can change, and 3rd-party plugins our application needs to co-exist with even though we have no control over what the user has installed. &lt;/p&gt;

&lt;p&gt;Recognizing our application's architecture and technologies is a relevant part of this. We can expect different problems from different technologies. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttmyviohnqdjn5z99agl.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttmyviohnqdjn5z99agl.JPG" alt="What You Coded is a Bad Constraint"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No matter what level of detail we look at when we test – code structures, application programming interfaces, user interfaces, be it individual parts or multiple parts end-to-end – we should remember that with testing, we are not only evaluating our own code, but evaluating our own code in the bigger ecosystem. &lt;/p&gt;

&lt;p&gt;The quality experience of our users depends on things outside our direct control, and we may need to choose things within our direct control accordingly, to uphold the promises we make to our users. &lt;/p&gt;

&lt;p&gt;Signal – the messaging app – is a great example, and Naomi Wu makes the point for one aspect of quality (security) very clearly. You are only as secure as the weakest link in the system, and you can’t have a secure application if it runs on a platform that isn’t secure. &lt;/p&gt;

&lt;p&gt;We evaluate applications as systems for multiple stakeholders. Pointing out that the problem belongs to someone else in our supply chain does not comfort the users who see your company's logo in the corner of the application they are trying to work with. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feil8ff12m9t3dpas0mxs.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feil8ff12m9t3dpas0mxs.JPG" alt="Execution Environment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We talked about environment being important, but let's reiterate parts of execution environment for a web application. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Browsers, web and mobile, on an operating system&lt;/strong&gt; - We have a lot of browsers, browser versions, and browser versions on different operating systems. Some operating systems are desktop, others mobile, and the majority of people use the web on mobile phones these days. Operating systems have security features that may adversely hit your product, and antivirus solutions in particular might not treat your site or its content and functionality quite as you envisioned. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Browser functionality, settings, add-ons&lt;/strong&gt; - Browsers have functionality (like zoom), settings (like no cookies), and add-ons (like ad-blockers). While you don't have to support them all, you probably want to know their impact on your application before your users are on the phone, upset and not even understanding why your application does not work on their computer. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HTML standard compatibility&lt;/strong&gt; - Badly formatted HTML causes errors in browsers and creates differences across browsers. Try running your site through a checker tool (see the sketch after this list). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Accessibility standard compatibility&lt;/strong&gt; - Your application may not work for disabled people, and some of the low-hanging fruit in that space are collected in standard checkers available as sites and browser add-ons. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Layers of its architecture&lt;/strong&gt; - This particular application is in-browser only. Some web applications comprise a frontend and a backend. When we focus on testing the backend, we may miss problems in the frontend. Understanding what parts make up the application helps us target our exploration. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
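
&lt;p&gt;A sketch of the checker-tool idea – here we assume the W3C Nu HTML Checker's JSON interface, which at the time of writing accepts a POST of raw HTML: &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import requests

# Assumption: validator.w3.org/nu returns its findings as JSON with out=json.
html = open("eprimer.html", "rb").read()
resp = requests.post(
    "https://validator.w3.org/nu/?out=json",
    data=html,
    headers={"Content-Type": "text/html; charset=utf-8"},
)
for message in resp.json().get("messages", []):
    print(message.get("type"), message.get("message"))
&lt;/code&gt;&lt;/pre&gt;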

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3mlia8l3rlf9bfrhz48.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3mlia8l3rlf9bfrhz48.JPG" alt="Let's Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Time to test for things that are about the environment. Using the execution environment and what your application relies on as a constraint, explore what issues E-Primer has. Do you find new issues? What kind of list of environment functionality do you come up with? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnevo7zwde43x71gmfob.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnevo7zwde43x71gmfob.JPG" alt="Learning of Application and Execution Environment of E-Primer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This particular constraint always gets the least amount of time and has the widest possible scope. Setting up different operating systems with different antivirus solutions and browsers, both web and mobile, is a significant effort. &lt;/p&gt;

&lt;p&gt;We often approach environments with a rotational strategy - on different days we are on a different environment. Sometimes we automate for basic coverage of the environments, but the automation may not notice visual issues unless we specifically build for that, having considered it a good use of time. &lt;/p&gt;

&lt;p&gt;Many environment difference issues we would rather address with user interface designs that work across browsers, sticking closely to standard, tried technology. We may also announce that we support only certain browsers or browser versions. We are close to the time when organizations can say goodbye to Internet Explorer and support only Edge among the Microsoft browsers. &lt;/p&gt;

&lt;p&gt;Did you try googling for HTML and accessibility validators? Both find problems with this web application. &lt;/p&gt;
&lt;h2&gt;
  
  
  Constraints about Documentation
&lt;/h2&gt;

&lt;p&gt;We are half-way through our slides for this course, and now we change gears towards documentation. The next chapters introduce three ways of documenting. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Making notes in a mindmap&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creating "test cases" in automation tooling that only document steps&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creating executable documentation through exploring with automation&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Documentation is a constraint, as it slows us down when running through ideas with the application. We want to create documentation that serves the tester doing the testing, and often those notes don't need to be shared with anyone. When we do need to share our notes, writing them in a way that enables others takes some thought. &lt;/p&gt;

&lt;p&gt;We believe that whenever we have time to create traditional step-by-step test cases, we are likely better off creating test automation instead. Our recommendation is to choose between no documentation visible to others, session notes in whatever format and detail is required, and executable documentation, based on available skills and time. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tajs4xfycevia17d1ep.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tajs4xfycevia17d1ep.JPG" alt="Documenting in a Mindmap"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first format of documenting we discuss is a mindmap. Creating a mindmap has some benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is fast to create and restructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is a visual representation of the relationships of things we model from the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Having something to show from what we covered in our testing enables others to comment on our thought patterns.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can naturally be built while exploring to anchor our learning. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are a lot of mindmapping tools; I commonly use Mindmup. Within a company, with company secrets, I use Xmind and save files on company hard drives only. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflvctey1lebst0omgkwp.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflvctey1lebst0omgkwp.PNG" alt="Mindmap"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mindmaps can have any structure. Sometimes people start with a template reminding them of the constraints we have had in the earlier sessions: Function, Data, Environment, Domain. We find a good use of a mindmap comes with remembering that fewer words are better. What we propose to try is making functionalities visible so that we can write down our data, questions, and bugs as color-coded nodes. &lt;/p&gt;

&lt;p&gt;Rather than writing down everything, write down the things that were hard to come by or that were major learnings while you were exploring. &lt;/p&gt;

&lt;p&gt;Maps can grow too big, and we may need to structure them into multiple maps each with a particular perspective. &lt;/p&gt;

&lt;p&gt;Our lesson learned over years of using mindmaps is that they serve well when discovering a new functionality and its testing. They don't serve as well when trying to remember all the functionalities there are, in order to consider the impacts of new changes across functionalities. For that purpose, we often find we create a checklist out of the established structures we mapped out earlier in the project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3mwyzccpfupiac56rhbx.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3mwyzccpfupiac56rhbx.JPG" alt="Bug Reports"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since we are now discussing the basic documentation of testing, we should talk a little bit about bug reports as the core documentation.&lt;/p&gt;

&lt;p&gt;In their book Lessons Learned in Software Testing, Cem Kaner et al. wrote:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A tester that does not report bugs well is like a refrigerator light that is only on when the door is closed. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In providing information and starting conversations, only the conversations that actually get started can make a difference. Our bug report handling skills are often our signature in the project. &lt;/p&gt;

&lt;p&gt;We believe that instead of automatically reporting a bug, contemporary exploratory testing includes considering the options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fix it yourself&lt;/strong&gt;. If, in the same time it takes to report the bug, you could fix it - what stops you?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pair with a developer&lt;/strong&gt;. If with a little more time you could learn how bugs are fixed and the developer could learn how bugs are found, you should pair to fix &amp;amp; unit test. What stops you? &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Discuss before reporting&lt;/strong&gt;. Show the bug as a demo, and see if the developers would fix it immediately. If fixing now is not the right thing, report it so that we don't forget. If the bug finding is timely, developers are not yet busy with other work. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Write a bug report&lt;/strong&gt;. When none of the other options apply, write a report. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When writing a bug report, you should make it more likely to be fixed by paying attention to the reporting. RIMGEN is a way of remembering how to do that. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Replicate. Make sure that, with whatever your report says, the readers of the report will be able to see the problem as well. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Isolate. Don't just report vague symptoms; analyze the problem and isolate what causes it. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Maximize. Describe its impact realistically, but motivate the fix through the maximized consequences you can analyze for it. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generalize. Find the most meaningful sample of it. Even if you found that trash text overflows the intended area, report it with real text.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Externalize. Speak of its meaning in terms of a stakeholder that matters. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Neutral tone. Keep it factual. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwx41jn4c4qws3h70zkpb.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwx41jn4c4qws3h70zkpb.JPG" alt="Let's Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's try testing while creating a mindmap as we go. Document all functions we have found, all data we have tried, and all bugs we have found. How does your map look? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9y7940zi8p6d5vdyozw.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9y7940zi8p6d5vdyozw.JPG" alt="Mindmapping as Future Reference"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, your mindmap should look like the early example we shared on the course after the first impression. You may already know more than what that early mindmap entailed. Not everything you know is documented, but your documentation could help you rediscover things effectively later and show what your thoughts were while testing. &lt;/p&gt;

&lt;p&gt;A mindmap is &lt;em&gt;documentation in the moment&lt;/em&gt;. You choose what keywords are useful for recalling your learning, and structure them as you go about testing an application. &lt;/p&gt;

&lt;p&gt;When you learn, you &lt;em&gt;restructure the map as you learn&lt;/em&gt;. Sometimes we see people make major changes on their maps as they understand the connections between features of an application, and that is where mindmaps are at their best. They encourage that change through drag and drop of branches to their proper relational places. &lt;/p&gt;

&lt;p&gt;Saving the mindmap in a common structure and place can work as &lt;em&gt;documentation for the future&lt;/em&gt;. It does not have everything, but it has things you have considered relevant to write down. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;General purpose mindmaps&lt;/em&gt; include ideas and heuristics you could use for multiple similar applications. Try searching online for web application testing mindmaps, and you will find many examples. If those help you, color coding your coverage of the ideas may be sufficient for you. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwswzesuhc1rok2gdpd2.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwswzesuhc1rok2gdpd2.JPG" alt="pytest the Very Basics"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On this course we use pytest to illustrate the idea of test automation as documentation. Python is a fairly English-like programming language, and pytest-bdd provides a GIVEN-WHEN-THEN structure of English to write in. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foo0m3y7o9kk3tv7f5qp6.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foo0m3y7o9kk3tv7f5qp6.JPG" alt="pytest"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To illustrate test automation on this course, we use pytest. It is a Python-based test runner that comes with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A large user base and a lot of online examples&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An ecosystem of Python libraries you can use&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Simple and extendable logs that describe the results of the testing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Possibility to extend it like any general purpose programming language&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find instructions to install it online, and if you already have Python installed, it can be as simple as saying on the command line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install pytest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, files with a .py ending that include test_-prefixed functions can be run with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pytest test_file.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F478ofexdgms88i4aa8k8.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F478ofexdgms88i4aa8k8.JPG" alt="Documenting as Skeleton Test Automation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Documenting with mindmaps is not the only option you have. Let's look at the option of documenting with stepwise test cases, but skip writing them separately and go directly into the context of test automation when considering them. &lt;/p&gt;

&lt;p&gt;We think of it this way: for creating code, you are translating your intent into something that can be run by the computer. You express your intent in English, then translate that English into code. Your brain is at its strongest working in natural language, not in code. &lt;/p&gt;

&lt;p&gt;At its simplest, your code can just keep track of the things you would like your tests to do for you programmatically. &lt;/p&gt;

&lt;p&gt;We start with the idea of creating skeleton test automation. It does not automate anything, but it moves your test cases into the tooling you could use when translating your intent of tests to code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5x7eul51laajwg25nppz.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5x7eul51laajwg25nppz.JPG" alt="Document in Context of Code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For writing skeleton test cases in pytest, we show you a simple way to do that. &lt;/p&gt;

&lt;p&gt;You name your test case with whatever text you want, using snake_case, with underscores connecting the words. Think of this as the title of your test case. &lt;/p&gt;

&lt;p&gt;For the steps, you write comments to leave your test ideas in the context of the code. &lt;/p&gt;

&lt;p&gt;It does not test for you, but it can describe how you would test. You could later turn your comments in English into code, either yourself or with the help of another team member. &lt;/p&gt;

&lt;p&gt;We use this approach in some projects to bring the non-programming testers' ideas of what they would like to have in automation into the context of the automation code. &lt;/p&gt;

&lt;p&gt;If you want cleaner English with the GIVEN-WHEN-THEN syntax, you need to do a little bit more setup to take pytest-bdd into use. &lt;/p&gt;

&lt;p&gt;You can install it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install pytest-bdd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You need a folder called &lt;em&gt;tests&lt;/em&gt; and under it another folder called &lt;em&gt;features&lt;/em&gt;, with a file ending in &lt;em&gt;.feature&lt;/em&gt;.&lt;/p&gt;
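
&lt;p&gt;As a sketch, the folder layout could look like this - the step_defs folder comes into play later in this chapter when we write step definitions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tests/
    features/
        eprime.feature
    step_defs/
        eprime_test.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;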

&lt;p&gt;Your eprime.feature file could then include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Feature: Eprime text analysis
    As a user,
    I want to verify my text for violations of eprime,
    So I learn to write proper English

    Scenario: Eprime analysis
        Given the eprime page is displayed
        When user analyses sentence to be or not to be
        Then user learns sentence has 2 be-verbs, 0 possible be-verbs and total 6 words
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, once the scenarios are bound to step definitions (shown later in this chapter), you can run your tests with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pytest 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With some tools, we have used a 'best before' style expiration concept to make these tests fail after an agreed time if the team has not picked them up for filling in executable details. &lt;/p&gt;
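
&lt;p&gt;As a minimal sketch of that 'best before' idea - the date, the message, and the convention itself are our own assumptions, not a built-in pytest feature:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import datetime

# Hypothetical date agreed with the team for automating this skeleton
BEST_BEFORE = datetime.date(2024, 12, 31)

def test_basic_eprime_sentence():
    # step comments describing the intended test go here
    assert datetime.date.today() &lt;= BEST_BEFORE, "Skeleton expired - automate me"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;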

&lt;p&gt;The more difficult question than the tool in this case is what makes a good test case. Our advice is to write down the flows we discovered and deemed relevant while exploratory testing, understanding that we will choose to write down only a small subset of all the flow variations we played with, to discover the ones worth keeping around. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fj3o1phwx637iz2wwel.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fj3o1phwx637iz2wwel.PNG" alt="Let's Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's try creating a test case like this with pytest. One is sufficient. What would the basic flow of testing in this application look like? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszk97felimu8qf4blis7.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszk97felimu8qf4blis7.JPG" alt="Skeleton Test Automation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is one example we created that you can save in example.py:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def test_basic_eprime_sentence():
   # New page with the application
   # Write sample text into the text field, use To be or not to be is Hamlet's dilemma
   # Click on the button
   # Verify number of words, 9
   # Verify number of discouraged words, 3
   # Verify number of possible violations, 1
   assert True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Leaving the test with a unique tag could allow you to see in the report how many tests like this exist, and help drive a process of turning them into automation. &lt;/p&gt;
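
&lt;p&gt;One way to tag, as a sketch - the marker name skeleton is our own invention and would need registering in pytest.ini to avoid warnings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pytest

# Hypothetical marker for skeletons awaiting automation
@pytest.mark.skeleton
def test_basic_eprime_sentence():
    # step comments as in the example above
    assert True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running pytest -m skeleton would then select and count only these placeholders. &lt;/p&gt;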

&lt;p&gt;The skeleton test cases can act as future automation placeholders, and we can choose to write only a single line for a test, capturing the idea rather than the steps. &lt;/p&gt;

&lt;p&gt;These can be like traditional test cases, but instead of maintaining them in a system of their own, we maintain them in the context of the code, version controlled as code. The aim of such a practice is to limit the distance between different kinds of testing. In the same way, when the steps are translated to code, the names of the tests make sense to everyone for seeing what tests exist in automation. We need to consider reading the names even when we may not ourselves be working in the details of the test code. &lt;/p&gt;

&lt;p&gt;These skeleton test cases are a concrete handoff of an idea worth keeping around through automating, and while we propose thinking about decomposing the tests differently for automation purposes, refactoring from this input is possible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxa19d3drx6v1yi1zmic7.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxa19d3drx6v1yi1zmic7.JPG" alt="Playwright library and Selectors on Web Page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get your automation to drive testing on this application, you need more than what pytest alone comes with. We now add the concepts you need around web applications, namely two things: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Playwright library that allows you to drive a web-based application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CSS selectors that allow you to express to a program which elements on a web page you want to manipulate. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While the Playwright library gives you methods to do things in the browser, CSS selectors are a simple form of being specific about what you want to do on that page. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8ubhf927jaazex1w50i.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8ubhf927jaazex1w50i.JPG" alt="Playwright Library"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To take the Playwright library into use, you will need some more setup. At its simplest you need to run these commands on a command line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install playwright
pip install pytest-playwright
playwright install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In your example.py file, you would now need to add a reference to the library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from playwright.sync_api import Page
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, we want to discuss briefly what you get on your machine when you install this library. &lt;a href="https://playwright.dev" rel="noopener noreferrer"&gt;Playwright&lt;/a&gt; is Microsoft's open source web driver tool running on NodeJS. &lt;/p&gt;

&lt;p&gt;You may have heard of Selenium before - it is another library for the same purpose. We use Playwright for the simplicity of its API - the methods read nicely to a new user. The library methods wait by default, so you don't have to define waits to avoid tests failing on the previous page due to slow loading. &lt;br&gt;
This lowers the bar for new automators through a new design of how waiting for applications works, requiring the user of the tool to do a little less. Waiting matters for web applications because if we look for something on the screen too soon and it has not yet emerged, as is typical with these web technologies, our tests fail even though the application works as it should. &lt;/p&gt;
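
&lt;p&gt;To make the waiting default concrete, here is a sketch - note there are no sleeps or explicit wait conditions, and the button selector used here is explained in the next section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from playwright.sync_api import Page

def test_waits_are_built_in(page: Page):
    page.goto("https://www.exploratorytestingacademy.com/app/")
    # click() waits for the element to be attached, visible, stable
    # and enabled before acting - no time.sleep() needed
    page.click("#CheckForEPrimeButton")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;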

&lt;p&gt;Playwright promises speed, reliability and visibility. Speed is about it being faster than the Selenium library. Reliability is about the new designs on waiting. Visibility is about being able to control API calls in the browser, as well as working with the web page structures we can expect to be complex in real-world applications. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8en9e96987e6t5axn8c.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8en9e96987e6t5axn8c.JPG" alt="CSS Selectors"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CSS selectors are one way to access elements on a web page. For example, we see there is a button, but for automation purposes we need to be more specific about how the program we are creating would know to press exactly that button. &lt;/p&gt;

&lt;p&gt;For various Playwright library methods, we need to tell which locator to use for an element:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;page.click("#TheOnlyButton")
page.fill("#TextFieldSelector", "Writing this text")
assert page.inner_text("#TextSelector")   ==   "This text should be visible"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to find something that uniquely identifies the thing we want to interact with. To do so, we need to inspect the web page element we are looking for, and make our choices of what the unique value is. Some typical ones are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;id - "#id"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;class - ".class"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;tag - the tag itself, for example "h1"&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more information on how selectors work, we suggest looking into the &lt;a href="https://www.w3schools.com/cssref/css_selectors.asp" rel="noopener noreferrer"&gt;CSS selectors reference&lt;/a&gt;.  &lt;/p&gt;

&lt;p&gt;To get to the point of running your first test with the Playwright library, name your test as you wish and use Playwright methods as the steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pytest
from playwright.sync_api import Page

def test_example(page: Page):
    page.goto("https://www.exploratorytestingacademy.com/app/")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The test runs quickly and shows nothing other than the result when you run the file from the command line. The Playwright library runs headless by default - without opening the browser for you to look at. To change the default, you introduce a file &lt;em&gt;pytest.ini&lt;/em&gt; with the contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[pytest]
addopts = --browser chromium --headed --slowmo 1000 --screenshot only-on-failure --video on 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This does a bit more than show the browser. It selects the browser so that you can change it, shows the browser, slows execution down so that you have time to also think about what you see, takes screenshots of failures, and records videos of your test executions. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xext97gyhdfrohh1zhg.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xext97gyhdfrohh1zhg.JPG" alt="Documenting as Executable Test Automation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Documenting as executable test automation may take you more time, depending on your application's structure and your skills in moving in steps that allow you to automate and explore. It may also very soon save you time by allowing you to do some things you would not be doing manually as fast. After you have one test case, moving towards different inputs and their matching outputs of a similar structure becomes a listing exercise. Similarly, you can run your tests across multiple browsers with the example application we use on this course. You could, should you want to, run your tests in a loop while you sleep and see what happens. These are not &lt;strong&gt;all&lt;/strong&gt; your tests, but they are some you can consider. &lt;/p&gt;
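
&lt;p&gt;With pytest-playwright, covering multiple browsers can be as simple as repeating the browser option on the command line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pytest --browser chromium --browser firefox --browser webkit test_filename.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;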

&lt;p&gt;We think of it this way. You could choose to start your exploratory testing with automation tools at hand from the very first moment of impression. That will guide your focus, but your focus is yours to choose, and as we discussed earlier, there are very few things that you can only do in the moment of first impression. At the same time, you want to be aware of the possibility of premature documentation at a time when you know the least. &lt;/p&gt;

&lt;p&gt;Think of exploring while documenting with automation as a timing-aware way of creating automation. You can create something simple and extend it. We recommend adding reusability when you need it and cleaning up when you need it. You may end up throwing the work away, so pay attention to when you invest your time into keeping things around up to the standards upheld for good test automation. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ws5rdajemuwgjmd2mvj.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ws5rdajemuwgjmd2mvj.JPG" alt="Let's Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, you could translate each of your earlier log lines into executable code, using CSS locators and the Playwright library reference available at &lt;a href="https://playwright.dev/python/docs/api/class-page" rel="noopener noreferrer"&gt;Playwright pages&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;We suggest you do this in steps. &lt;/p&gt;

&lt;p&gt;First, create one single executable test case that matches a basic test like the skeleton we showed earlier. Each comment line translates to an executable command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from playwright.sync_api import Page

def test_example(page: Page):
    page.goto("https://www.exploratorytestingacademy.com/app/")
    page.fill("#inputtext", "To be or not to be is Hamlet's dilemma")
    page.click("#CheckForEPrimeButton")
    assert page.inner_text("#wordCount") == "9"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Second, make your values variables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from playwright.sync_api import Page

URL = "https://www.exploratorytestingacademy.com/app/"
input_text = "To be or not to be is Hamlet's dilemma"
expect_wordcount = "9"

def test_example(page: Page):
    page.goto(URL)
    page.fill("#inputtext", input_text )
    page.click("#CheckForEPrimeButton")
    assert page.inner_text("#wordCount") == expect_wordcount
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2chqur77mz7vsnxfgki.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2chqur77mz7vsnxfgki.JPG" alt="Example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The way this usually plays out in stages is this. &lt;/p&gt;

&lt;p&gt;Step 0. Starter&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from playwright.sync_api import Page

def test_example(page: Page):
    page.goto("https://www.exploratorytestingacademy.com/app/")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 1. Single executable test case&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from playwright.sync_api import Page

def test_example(page: Page):
    page.goto("https://www.exploratorytestingacademy.com/app/")
    page.fill("#inputtext", "To be or not to be is Hamlet's dilemma")
    page.click("#CheckForEPrimeButton")
    assert page.inner_text("#wordCount") == "9"
    assert page.inner_text("#discouragedWordCount") == "2"
    assert page.inner_text("#possibleViolationCount") == "1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 2. Refactor to variables&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from playwright.sync_api import Page

URL = "https://www.exploratorytestingacademy.com/app/"
input_text = "To be or not to be is Hamlet's dilemma"
expect_wordcount = "9"
expect_discouraged = "2"
expect_violation = "1"

def test_example(page: Page):
    page.goto(URL)
    page.fill("#inputtext", input_text)
    page.click("#CheckForEPrimeButton")
    assert page.inner_text("#wordCount") == expect_wordcount
    assert page.inner_text("#discouragedWordCount") == expect_discouraged
    assert page.inner_text("#possibleViolationCount") == expect_violation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can run your tests from the terminal with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pytest test_filename.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We encourage you to run your tests frequently as you are creating them, to understand each step. Pay attention to indentation - in Python, indentation is part of the structure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wwg1lcdp0k422207d0i.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wwg1lcdp0k422207d0i.JPG" alt="Reports"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you run tests with pytest, it creates output on the terminal. If you want it to create report files, there is pytest-html that you can add to your project. &lt;/p&gt;
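
&lt;p&gt;As a sketch of taking pytest-html into use - the report file name here is our own choice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install pytest-html
pytest --html=report.html --self-contained-html test_filename.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;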

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws7emtkxwboff5ziqyaa.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws7emtkxwboff5ziqyaa.JPG" alt="Parametrized Tests"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Parametrized tests build on the idea that for any linear script, you can replace all values with variables and reuse the same script for different values of those variables. This is sometimes referred to as a data-driven approach to test automation. &lt;/p&gt;

&lt;p&gt;Moving from the linear script in step 2 to a parametrized test can be done by refactoring, without yet changing the test at all. Taking steps in programming your test is just as good a practice as taking steps in your other exploratory testing activities. &lt;/p&gt;

&lt;p&gt;Step 3. Refactor to template tests&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from playwright.sync_api import Page

url = "https://www.exploratorytestingacademy.com/app/"

@pytest.mark.parametrize('input_text, expect_wordcount, expect_discouraged, expect_violation', 
[
    ("To be or not to be - Hamlet's dilemma", 9, 2, 1)
])
def test_parametrized_test(page: Page, input_text, expect_wordcount, expect_discouraged, expect_violation):
    page.goto(url)
    page.fill("#inputtext", input_text)
    page.click("#CheckForEPrimeButton")
    assert page.inner_text("#wordCount") == expect_wordcount
    assert page.inner_text("#discouragedWordCount") == expect_discouraged
    assert page.inner_text("#possibleViolationCount") == expect_violation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After you have a parametrized test, you can come up with more values of inputs and their expected outputs. &lt;/p&gt;

&lt;p&gt;Step 4. Extend&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from playwright.sync_api import Page

url = "https://www.exploratorytestingacademy.com/app/"

@pytest.mark.parametrize('input_text, expect_wordcount, expect_discouraged, expect_violation', 
[
    ("To be or not to be - Hamlet's dilemma", 9, 2, 1),
    ("", 0, 0, 0)
])
def test_parametrized(page: Page, input_text, expect_wordcount, expect_discouraged, expect_violation):
    page.goto(url)
    page.fill("#inputtext", input_text)
    page.click("#CheckForEPrimeButton")
    assert page.inner_text("#wordCount") == expect_wordcount
    assert page.inner_text("#discouragedWordCount") == expect_discouraged
    assert page.inner_text("#possibleViolationCount") == expect_violation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We propose creating a nice list of examples from the Wikipedia page linked in the application, which you analyzed as a specification. Our expectation is that collecting those examples into this format will be straightforward. &lt;/p&gt;

&lt;p&gt;You can also try with a file you saved as sample.txt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from playwright.sync_api import Page

url = "https://www.exploratorytestingacademy.com/app/"

def this_is_sample(): 
    # Read the sample file and return its contents as the input text
    with open('sample.txt') as f:
        return f.read()

@pytest.mark.parametrize('input_text, expect_wordcount, expect_discouraged, expect_violation', 
[
    ("To be or not to be - Hamlet's dilemma", 9, 2, 1),
    ("", 0, 0, 0), 
    (this_is_sample(), 507, 2, 0)
])
def test_parametrized(page: Page, input_text, expect_wordcount, expect_discouraged, expect_violation):
    page.goto(url)
    page.fill("#inputtext", input_text)
    page.click("#CheckForEPrimeButton")
    assert page.inner_text("#wordCount") == expect_wordcount
    assert page.inner_text("#discouragedWordCount") == expect_discouraged
    assert page.inner_text("#possibleViolationCount") == expect_violation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also choose to name your tests so that when you run them, you see your assigned names:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from playwright.sync_api import Page

url = "https://www.exploratorytestingacademy.com/app/"

@pytest.mark.parametrize('input_text, expect_wordcount, expect_discouraged, expect_violation', 
[
    ("To be or not to be - Hamlet's dilemma", 9, 2, 1),
    ("", 0, 0, 0)
],
ids=['demo', 'empty']
)
def test_parametrized(page: Page, input_text, expect_wordcount, expect_discouraged, expect_violation):
    page.goto(url)
    page.fill("#inputtext", input_text)
    page.click("#CheckForEPrimeButton")
    assert page.inner_text("#wordCount") == expect_wordcount
    assert page.inner_text("#discouragedWordCount") == expect_discouraged
    assert page.inner_text("#possibleViolationCount") == expect_violation

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3w9hclgjsrdwv3h0j0pj.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3w9hclgjsrdwv3h0j0pj.JPG" alt="Logs with Failure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When your tests fail, the logs show red for the failing step. If you did not try this before, try the shortened version of "you are": "you're". This is two words in shortened form, against E-Prime, and not recognized by the program.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;("you're", 2, 1, 0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With a failure due to a problem in the application, you have a choice to make: how will you deal with the problem? First you report it. If it isn't immediately fixed, it will show as red in your test automation. For purely attended use of your test automation this does not matter, but if you want to run your tests continuously, alerting on a known issue isn't desirable. Some teams comment out the tests that fail due to a known issue. Other teams have practices of tagging their tests to categorize the results of those tests differently. &lt;/p&gt;
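
&lt;p&gt;One concrete way of tagging, as a sketch using pytest's built-in expected-failure marker - the test name and reason text are our placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pytest
from playwright.sync_api import Page

# Known issue: the contraction "you're" is not recognized as two words.
# xfail keeps the test running and visible in reports without alerting.
@pytest.mark.xfail(reason="known issue: contractions not counted")
def test_contraction(page: Page):
    page.goto("https://www.exploratorytestingacademy.com/app/")
    page.fill("#inputtext", "you're")
    page.click("#CheckForEPrimeButton")
    assert page.inner_text("#wordCount") == "2"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;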

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8n9so3wsuxi4md7n2l4.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8n9so3wsuxi4md7n2l4.PNG" alt="BDD"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In case you are up for a little more refactoring, you can also try adding pytest-bdd to the mix. Install it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install pytest-bdd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a feature file with your examples, tests/features/eprime.feature:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Feature: Eprime text analysis
    As a user,
    I want to verify my text for violations of eprime,
    So I learn to write proper English

    Scenario: Eprime analysis
        Given the eprime page is displayed
        When user analyses sentence to be or not to be
        Then user learns sentence has 2 be-verbs, 0 possible be-verbs and total 6 words

    Scenario: Incorrect Eprime analysis
        Given the eprime page is displayed
        When user analyses sentence To be or not to be - Hamlet's dilemma
        Then user learns sentence has 2 be-verbs, 1 possible be-verbs and total 9 words

    Scenario Outline: Eprime samples are correctly analyzed
        Given the eprime page is displayed
        When user analyses sentence &amp;lt;sentence&amp;gt;
        Then user learns sentence has &amp;lt;count_certain&amp;gt; be-verbs, &amp;lt;count_possible&amp;gt; possible be-verbs and total &amp;lt;count_total&amp;gt; words

        Examples:
        | sentence    | count_certain | count_possible | count_total |
        | was not     | 1             | 0              | 2           |
        | cat is hat  | 1             | 0              | 3           |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then create a step definitions file tests/step_defs/eprime_test.py:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from pytest_bdd import scenario, given, when, then, parsers
from playwright.sync_api import Page

HOME = "https://www.exploratorytestingacademy.com/app/"

@scenario('../features/eprime.feature', 'Eprime analysis')
def test_eprime_analysis():
    pass

@scenario('../features/eprime.feature', 'Incorrect Eprime analysis')
def test_incorrect_eprime_analysis():
    pass

@scenario('../features/eprime.feature', 'Eprime samples are correctly analyzed')
def test_eprime_samples_correctly_analyzed():
    pass

@given("the eprime page is displayed")
def eprime_home(page: Page):
    page.goto(HOME)

@when(parsers.parse('user analyses sentence {phrase}'))
def analyze_phrase(page: Page, phrase):
    page.fill("#inputtext", phrase)
    page.click("#CheckForEPrimeButton")

@then(parsers.parse('user learns sentence has {discouraged} be-verbs, {possible} possible be-verbs and total {words} words'))
def search_results(page: Page, discouraged, possible, words):
    assert page.inner_text("#wordCount") == words
    assert page.inner_text("#discouragedWordCount") == discouraged
    assert page.inner_text("#possibleViolationCount") == possible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm79bjtu02p4vpx0xzy4m.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm79bjtu02p4vpx0xzy4m.JPG" alt="BDD with failure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similarly to before, you will also want to see your tests fail and make sure you understand how to read the results. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cz9anh5cggap279i1k0.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cz9anh5cggap279i1k0.PNG" alt="Documenting as Executable Test Automation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this exercise, you have now taken multiple steps to create test automation that documents your tests. Test automation is an output of your exploration. It is not a complete record of everything you paid attention to while exploring, but no documentation is. As executable documentation it allows you to see if changes in your application result in changes in your test results. &lt;/p&gt;

&lt;p&gt;We recommend you explore while documenting as automation step by step. You first create one line that runs a small part of your test. You add to it, and you make sure you can see your tooling fail when verifying something that isn't as you would expect. &lt;/p&gt;

&lt;p&gt;In the end, you have the choice of keeping your notes private to you, throwing them away, or keeping them around running as part of your continuous integration pipelines. If your tests sometimes fail and sometimes pass, the false alarms take away trust from your tests in continuous integration. Your scripts may also run in your own environment, and extend your exploration on other days. Our experience is that it's not about the scripts extending your work, but the scripts extending the whole team's work, even when you are not available. Working to bring your test automation into continuous integration is often a worthwhile effort. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnid437k4l5z59yqrdd3.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnid437k4l5z59yqrdd3.PNG" alt="Why This Is Not About Any Specific Tool"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we teach this content, including this constraint of automating with pytest, people sometimes - we would say often - forget that there are many things to explore and this is only one of the constraints that directs thinking. Temper your possible excitement. There are many tools, and your choice should be one the developers in your team will share with you. Over time, our recommendation is to learn the language of the programmers in your team and to work with libraries that allow working in the same language and ecosystem. &lt;/p&gt;

&lt;p&gt;The words of caution we want to extend come down to four themes you need to pay attention to: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Searchability. When you have a problem and seek help, the language ecosystems define the number and style of answers available. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IDE support for test development. When creating tests, being able to run individual ones in tooling integrated with your IDE (integrated development environment, such as Visual Studio Code or PyCharm) has become essential. Use an IDE; see the command sketch after this list.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Logs. Built-in logs are limited. We have preferred the Allure reporting framework as an extension to our capabilities. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Debugging. Figuring out what fails and where is a big part of the work in the long term. Running individual tests and being able to enjoy the full debugging features of IDEs is essential.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
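
&lt;p&gt;For the IDE and debugging themes, the command-line equivalent of running an individual test looks like this - the file and test names come from our earlier examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pytest test_filename.py::test_example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;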

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qrp7dd0f6a8b9gxp3fn.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qrp7dd0f6a8b9gxp3fn.PNG" alt="Documentation as a Constraint"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you do exploratory testing, you are making choices of where to use your time. Time used on something is time away from something else. You can do documentation too extensively, prematurely, or in formats that limit future use. You are always striking a balance between what you can focus on at a time, what your skills are, what is possible, and what you in particular should do in relation to the other people testing in your team. &lt;/p&gt;

&lt;p&gt;Think about it this way. At some point in time, you will want to work on something other than what you work on right now. Will you leave behind documentation that enables those coming after you? Will you enable testing that happens with the automation you leave behind? It isn't perfect or complete, but it is useful. Executable documentation run regularly stays up to date with maintenance. &lt;/p&gt;

&lt;p&gt;Earlier on this course, we talked about the heuristic of "Never be bored". We believe that following it is not possible in the long term, at the scale of whole teams, without automation. &lt;/p&gt;

&lt;p&gt;Testing is for everyone. Testers are a group that builds their professional identity around testing. A common observation in preferences is that many people with a tester identity cope with tedium, while those with a programmer identity automate tedium away. We recognize that second kind of thinking when repetitive work appears in exploratory testing, yet there are simple things we can do to reach that mindset even while test automation does the significant repeat work. We can have the best of both worlds in our teams. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbta0rr4kpg1jlcv5tsl0.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbta0rr4kpg1jlcv5tsl0.PNG" alt="Automation in Frame of Exploratory Testing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Test automation as documentation is only one of the uses of test automation in exploratory testing. &lt;/p&gt;

&lt;p&gt;Sometimes your automated tests enable you to extend your reach: you can test when you are not around and come back to noting results when you are; you can use your automated tests to get to a place in tests and data from which you explore further; your continuous integration showing failing tests will invite you to explore the change, and helps you notice lack of communication just as much as unintended side effects. &lt;/p&gt;

&lt;p&gt;We like to think of a continuously running, growing set of tests as a spider web that enables you to notice when something is caught in it. It isn't doing all the work for you, but it is calling you to attend. &lt;/p&gt;

&lt;p&gt;Finally, when you make an effort to automate, it is like applying a magnifying glass to many of the details. You need to understand details to automate effectively, and while automating, you can't help but explore the details at hand. We encourage you to report issues you find when automating, as the constraint of test automation will allow you to notice things you might otherwise miss. &lt;/p&gt;

&lt;p&gt;A particular category you will notice is ideas for improving testability. What would make it easier for you to control what you want to control, and to have visibility into the things you want to see and verify? A popular example for web applications is good selectors. With the example application we have used on this course, unique IDs are generally made available but follow many different naming conventions. If their use were consistent, creating test automation as documentation would be a little more straightforward. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvm9vnuojzzg9cglv1vha.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvm9vnuojzzg9cglv1vha.PNG" alt="Moving Focus"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With exploratory testing, we find it is better to frame testing activities as attended and unattended. &lt;/p&gt;

&lt;p&gt;You need to do attended testing to figure out what information you want and can get out of an application. The application, as we discussed very early on in this course, is our external imagination. It helps you systematically think through what you could and should test. You also need attended testing to slow down just enough to create executable documentation or other programmed tests that extend the reach of your testing. A big part of test automation is actually attended testing, since programmed tests don't write themselves but require a programmer to create them, even now when major advances in tool support and available examples make this more approachable. &lt;/p&gt;

&lt;p&gt;You need to do unattended testing where your programmed tests do work for you even when you don't. Repeating to isolate reliability issues, or covering environments or data variations - all of these are valuable ways of doing exploratory testing, and you don't need to attend to them all the time while they run. They hook you back in to attend when something fails and invites you to explore further. &lt;/p&gt;

&lt;p&gt;Creating good, reliable programmed tests is inherently an exploratory testing activity: in addition to exploring your application, you are now exploring it with drivers that you need to balance into the mission of optimizing the value of your testing - today and in the long term. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwsifbvaz1ohkbkm1pl6.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwsifbvaz1ohkbkm1pl6.JPG" alt="Stop and Think - pytest-playwright"&gt;&lt;/a&gt;&lt;br&gt;
We have now discussed using one particular framework with our target application. It is time to stop and think back to all the different constraints you have used during this course so far. How would the testing you did before this have been different if this was what you started with? &lt;/p&gt;

&lt;p&gt;We have watched groups start with this, end with this, and try this in the middle. Those who start with it tend to end up with a more structural understanding of the application under test, but also inattentive to many of the problems. You fit the work you do to the constraint you do it with. When using automation, you pay attention first to the things you can do with automation. The premise of exploratory testing is that your work can start there but it does not have to end there. It can also start elsewhere and still end with automation in place. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9375domyeobh8da2kajy.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9375domyeobh8da2kajy.JPG" alt="Use of Time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have made it through all of the constraints we wanted you to apply on E-Primer, and if you completed all the exercises, you will have spent a day on this course. We teach this in a classroom focusing on exercises, and those barely fit into a day. With exercises, discussion and the theory in the written material, this course usually takes two working days. By the end of that time, we are all bored of the E-Primer application and wish for something else to test. &lt;/p&gt;

&lt;p&gt;Being aware of the time you use is core to exploratory testing. We will discuss this briefly. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffl6elbcymm4sxbappr97.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffl6elbcymm4sxbappr97.JPG" alt="Test, Bug, Setup"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you test, your time is roughly consumed in four categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Not testing. Your days will include things other than testing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test. This is time you spend on increasing coverage: coming up with and trying new ideas, and using the opportunity to find new information. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bug. This is time you spend on things worth a conversation. Whenever you see a bug, it stops the testing you were doing and moves you to the reporting-and-conversations track. If and when the bug you found gets fixed, it makes you repeat tests you tried to complete but were interrupted in, as well as some you completed whose results have passed their best-before date as changes invalidate old results. If and when the bug you found isn't fixed until later, it becomes something you try to remember not to see and report again, while still making the effort to see other problems. This is a significant time drain. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Setup. This is time you use on setting up for testing. The time for collecting and creating data to use, as well as for documentation that enables testing effectively the next time, belongs in this category. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you don't get to spend time with Test, you aren't making progress. Pay attention to where time goes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fom3y86gosdu62zg2e509.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fom3y86gosdu62zg2e509.JPG" alt="E-Primer Traps"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With E-Primer, we have observed groups running into time-related trouble with four items we call time traps. &lt;/p&gt;

&lt;p&gt;Those groups who start exploring with automation may try to build a complex first test and end up trying to design a versatile algorithm that could be reused. At a time when you know little about the application, this is often premature. We call this an algorithm trap. &lt;/p&gt;

&lt;p&gt;Those groups who start freeform exploring and first run into bugs - like noticing that the word count can easily be fooled with things like line breaks - may try to find all the ways the word count fails. Word count itself seems like the least important feature of this application, in the sense that knowing the number of words teaches us nothing about e-prime. We call this a bug trap. &lt;/p&gt;

&lt;p&gt;Those groups who first open the specification may try to read it all and structure it into a clean list they can track execution against. It can take a lot of effort at a time when people don't yet know what, for example, a "possible violation" is, as the Wikipedia page does not describe it at all. We call this a test cases trap. &lt;/p&gt;

&lt;p&gt;Those groups who start with values they can enter can end up trying all kinds of values from &lt;a href="https://github.com/minimaxir/big-list-of-naughty-strings/blob/master/blns.txt" rel="noopener noreferrer"&gt;naughty strings list&lt;/a&gt; without paying attention to the type of application in question. We call this a data trap.  &lt;/p&gt;

&lt;p&gt;All traps are fatal if we run out of time while stuck in one of them. With exploratory testing, you need to learn about your use of time, and assess it critically. &lt;/p&gt;

&lt;p&gt;When you are out of time, did you end up doing the best testing you possibly could? What would you do differently so that your time is well used? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv7aos2yds27qrip53gx.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv7aos2yds27qrip53gx.JPG" alt="Stop and Think - Time and Traps"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Stop to think once more. You have tested with many constraints. Where did your time go? &lt;/p&gt;

&lt;p&gt;Did you write your bug reports so clearly that you could pass them to someone else who does not have all the knowledge you acquired through testing? &lt;/p&gt;

&lt;p&gt;Did creating documentation eat into the time you needed for covering new ideas and, most importantly, for finding problems of relevance? &lt;/p&gt;

&lt;p&gt;Did you find all the problems we outlined in the mindmap with bugs in red? Did confirming our list take you a significant amount of time? &lt;/p&gt;

&lt;p&gt;What use of time taught you the most about the application? And what use of time taught you the most about testing? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5he1v21mob8nlizui9x.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5he1v21mob8nlizui9x.JPG" alt="Coverage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Coverage is a way for us to talk about the thoroughness of our testing against some criteria. &lt;/p&gt;

&lt;p&gt;If we make a plan of test ideas and go through them all, we have 100% coverage of the plan. The plan may not be complete, and the worse the plan we create, the more likely we are to get through everything on it.&lt;/p&gt;

&lt;p&gt;If we take the constraints on this course and apply them all, we have covered those constraints, yet the results may be insufficient.&lt;/p&gt;

&lt;p&gt;If we find all the relevant problems - a list we don't have available now, or even in hindsight, as customers don't report problems but leave your application or work around slownesses and annoyances - we have covered all problems. &lt;/p&gt;

&lt;p&gt;We can estimate coverage with many criteria: plans; claims in specifications; features; code; risks. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdjciinz886l2ruvmlnd.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdjciinz886l2ruvmlnd.JPG" alt="Setting the Stage for Testing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Simply framed, testing is the act of targeting test ideas to find relevant issues and information that we act on, and of assessing the work we do in relation to all the work we should be doing (coverage). &lt;/p&gt;

&lt;p&gt;Not all information is equally valuable. Some is more valuable when found early on. We need to target our efforts in support of product/project success and recognize what is relevant. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tuqv631f0t0lzw3ypix.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tuqv631f0t0lzw3ypix.JPG" alt="Risk Coverage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the end, it is about risk coverage: of all the things that could go wrong, how well do we work to know those things aren't going wrong? &lt;/p&gt;

&lt;p&gt;We are painting on an empty canvas, and we try to understand what is there. Bugs may sometimes be equally expensive (or cheap) to fix when found in production by real customers, but we want our customers to be able to rely on a level of functionality. Different products require different levels of detail. You can easily imagine E-Primer isn't a product that threatens lives, but it is only one of the many applications you may end up testing. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yl6ou0etl9xmapyhtkq.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yl6ou0etl9xmapyhtkq.JPG" alt="Stop and Think - Coverage of Testing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You thought about time, now think about coverage. Would the testing you did and thought of have missed any of the bugs we have mentioned on the course? &lt;/p&gt;

&lt;p&gt;Can you still think of something we did not test? Maybe Security? Performance? Or Reliability? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkong90yhkl7s13il8xp1.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkong90yhkl7s13il8xp1.JPG" alt="Test Strategy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Strategy - the ideas guiding our test design - is usually something we think of as a starting point, targeting our testing to match those ideas. Yet given an application we know nothing about, fixing a strategy before starting to test it means making decisions prematurely, with the least information we will ever have at hand. &lt;/p&gt;

&lt;p&gt;We recommend treating strategy as something that is always present from start to end, and ready to be summarized once a relevant amount of testing to learn the application has already taken place. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9axlv51n6gr98ib81y5e.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9axlv51n6gr98ib81y5e.JPG" alt="Ideas that Guide Test Design"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To understand what a test strategy is, we define it as the ideas that guide test design. Those ideas are about risks and tasks, and our understanding of why the application exists in the first place. Since they are ideas about a specific application, we recommend paying attention to that specificity - the same ideas don't apply to all applications or project constraints.&lt;/p&gt;

&lt;p&gt;We suggest thinking about a test strategy as answers to three questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What is the product?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What are the product's key potential risks? &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How could we test to evaluate the actual risks? &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The answers to these questions both improve and change over time as we are learning about an application through doing exploratory testing. &lt;/p&gt;

&lt;p&gt;Writing down a strategy enables critique of the ideas that currently drive us. This critique can be you reviewing what you wrote down after time has passed, with more experience with the application under your belt, or it can be stakeholders' critique inviting improvements to the ideas you have collected. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xtlszpm0rmbym3g9lrb.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xtlszpm0rmbym3g9lrb.JPG" alt="Let's Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just like everything on this course, when we expect you to create a testing deliverable like a test strategy, you can test to create it. The strategy would look different if you did this segment earlier in the course flow than where we tentatively schedule it. &lt;/p&gt;

&lt;p&gt;Let's try testing and creating a written description of a test strategy. Try answering clearly and concisely: 1) What is the product? 2) What are its key potential risks? 3) How could you test to evaluate the actual risks? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8rte9m919ablzxskm5v.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8rte9m919ablzxskm5v.JPG" alt="Test Strategy for E-Primer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After all the testing we had done on the application, we paired on documenting the strategy including all the learning that we had. &lt;/p&gt;

&lt;p&gt;We decided to describe it in three sections. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What is the product&lt;/li&gt;
&lt;li&gt;What are the key potential risks&lt;/li&gt;
&lt;li&gt;How could we test the product so as to evaluate the &lt;em&gt;actual risks&lt;/em&gt; associated with it&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We concluded that E-Primer is an English text validator that checks text against specific rules around avoiding the verb 'to be'. It identifies rule-breaking in two categories: one that can be checked by a rule and another that needs human assessment (for now). &lt;/p&gt;

&lt;p&gt;Its key potential risks are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It suggests wrong corrections and misses corrections in real samples&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It miscounts words in a way that leads us to underappreciate the scale of processing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It looks wrong on some browsers and data samples&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It requires too much effort to learn in relation to the value of proofreading it provides&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To evaluate the actual risks we would propose the following activities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Understand rules of e-prime through research&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Collect data samples (short and long ones) that represent both e-prime text and text that violates the rules of e-prime and run them through the program&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Document the specification as automation that shows the rules of e-prime and enables running a subset of tests across browsers (see the sketch after this list)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Try fooling the word count into counting fewer or more words with specific data samples&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the web page through a set of html validators (incl. accessibility)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Visually verify the page with realistic e-prime text samples&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Read the code of the application for inspiration focusing on names of functions rather than understanding implementation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Summarize learning obstacles for user and value of the application as comparison sheet &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
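
&lt;p&gt;As a sketch of the specification-as-automation and word count activities above: assuming pytest-playwright, browser coverage comes from its --browser command-line option, so the same file can encode an e-prime rule and a word count probe. The URL and the element IDs are hypothetical placeholders. &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A minimal sketch, assuming pytest-playwright. Run across browsers with:
#   pytest --browser chromium --browser firefox --browser webkit
# The URL and the element IDs are hypothetical placeholders.
from playwright.sync_api import Page, expect

APP_URL = "https://example.org/e-primer"  # placeholder


def check_text(page: Page, text):
    page.goto(APP_URL)
    page.locator("#inputText").fill(text)
    page.locator("#checkButton").click()


def test_is_counts_as_a_violation(page: Page):
    # Encodes one rule of e-prime as executable documentation.
    check_text(page, "This is a violation")
    expect(page.locator("#violations")).to_have_text("1")


def test_line_break_separates_words(page: Page):
    # Probes the word count with a data sample known to fool it.
    check_text(page, "two\nwords")
    expect(page.locator("#wordCount")).to_have_text("2")
&lt;/code&gt;&lt;/pre&gt;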

&lt;p&gt;This example is just that - an example. It is not the only possible outcome. Your outcome can differ, perhaps even should differ, and comparing two outcomes is a matter of usefulness, not exact match. Would following these ideas to drive your testing get you to a place where you do good testing that you can be happy with? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd6hvcigyzwzch2atdjf.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd6hvcigyzwzch2atdjf.JPG" alt="Closing Remarks"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have reached the end of the course and it is time for us to reflect on what we were teaching. &lt;/p&gt;

&lt;p&gt;The whole course was set up around one simple application, yet we could approach it with many constraints and see different perspectives on its quality. &lt;/p&gt;

&lt;p&gt;There are other applications we could test to learn from - partly the same things, to deepen our understanding, but also a lot of new perspectives. &lt;/p&gt;

&lt;p&gt;To score yourself on ability to find bugs, here are the 22 we know of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Long text moves the button outside the user's reach, as vertical scrolling is disabled&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Two words separated by a line feed are counted as one&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Space is considered the only separator for words, and special characters are counted as words&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The possible violations category flags possessives and leaves them for human assessment, where one would probably expect programmatic rules&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You're / we're / they're contractions are not recognised as violations of e-prime&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The possible violations calculation handles only the typewriter's apostrophe, not the typesetter's apostrophe&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;'Human being' is a noun but is recognised as a violation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Two-part words (like people's last names) in possessive form are not recognised as possible violations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Contractions (I'm) count as two words in the word count, against the general rules of how word counting works&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Images missing alt text necessary for accessibility&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accessibility warnings on contrast&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mobile use not supported&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Zoom renders page unusable due to missing scroll bars&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;UI instructions for user are unclear&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Long texts without spaces go outside the grey area reserved for displaying the texts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The choice of which links load over this app and which open a new browser window is inconsistent&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resizing the input text field can move it outside the view so that it cannot be resized back&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The site is missing a favicon and security.txt - both common conventions for web applications&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An HTML validator identifies 3 errors&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ID naming is inconsistent: some IDs are camel case, others are not&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If a word is in single quotes, it is not properly recognised in the e-prime check&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The text box is not located in the UI where users would expect it, given how web pages usually operate&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fennglhqs5fsxp8u4jsq6.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fennglhqs5fsxp8u4jsq6.JPG" alt="Course Outline - In Summary"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The two main constraints were focus without documentation and focus with documentation. We framed test automation as a form of documentation, and the speed of creating it depends on the application as well as on skills with the types of tools it needs. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjvsggtz7nmkfex8kqx0.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjvsggtz7nmkfex8kqx0.JPG" alt="About the Course Author"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You are done with the course, but may have questions. We have set up &lt;a href="https://docs.google.com/forms/d/e/1FAIpQLSfQUlLU2agTSp0eMHj7nWMdi8eMD6-iNvdKZvIjkXP_6qAexA/viewform" rel="noopener noreferrer"&gt;an exploratory testing slack group&lt;/a&gt; you can join. You can ask the main contributor of this course material anything on &lt;a href="https://twitter.com/maaretp" rel="noopener noreferrer"&gt;twitter&lt;/a&gt;. And if this material was valuable to you, you can choose to &lt;a href="https://ko-fi.com/maaretp" rel="noopener noreferrer"&gt;pay Maaret as many coffees as you like&lt;/a&gt;. A simple message sharing your experiences would also be most appreciated, in support of her goal of SCALE - making this material useful for more people. &lt;/p&gt;

</description>
      <category>testing</category>
      <category>exploratorytesting</category>
      <category>testautomation</category>
      <category>course</category>
    </item>
    <item>
      <title>Practice Makes Better - 5x to Continuous Releases</title>
      <dc:creator>Maaret Pyhäjärvi</dc:creator>
      <pubDate>Wed, 23 Jun 2021 21:02:25 +0000</pubDate>
      <link>https://dev.to/maaretp/practice-makes-better-5x-to-continuous-releases-9nh</link>
      <guid>https://dev.to/maaretp/practice-makes-better-5x-to-continuous-releases-9nh</guid>
      <description>&lt;p&gt;I've had my share of practice as a software tester. My career consists of streaks of 3 years at a job before moving on to try something different. Yet I find, just looking back five last teams I have joined, that my quest for different has given me the same assignment, and I recognized that only in hindsight! In this talk, I want to distill some of my lessons from this practice of showing up in places and turning up the release pace. I come at this as "just a tester" with the notion that &lt;em&gt;no one is *just&lt;/em&gt; anything*. We come to things with our human traits and interests, and it is up to us to team up with others and make the best out of whatever we're given. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4fzi43mll68zu4ysmdw.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4fzi43mll68zu4ysmdw.PNG" alt="Future is Already Here"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For years, I looked around at conferences and heard some brilliant, amazing talks. For continuous delivery, I was inspired by Facebook and Amazon in learning about the machinery they built to release to production many times a day. As I looked around and talked to my friends in the industry, I learned that while we do great things, we have a tendency to include bits of wishful thinking in the experiences we share. The stories we tell at conferences are usually our best stories. These stories have power. I urge you all to believe that when we talk about the future here, the future is already present. It most definitely isn't fully known to me, and I have a lot of energy and other people's support on my side to pay attention. The future we are building is amongst us; it is very much not evenly divided. I believe I see one part of that future though, for all types of applications. And that part of the future is one where continuous delivery transforms our software delivery capabilities regardless of technologies. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkn96pfj4ea9kwzsl5pje.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkn96pfj4ea9kwzsl5pje.PNG" alt="My Work in Last Year"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I work as just a tester, with a fancy title: they labeled me a principal test engineer. What I do is improve testing from within, doing testing and sharing the work of testing with everyone. I test across teams, across the entire organization, with other system testers, developers, hardware testers, product owners and product managers. I get to talk to about 60 people a month as part of the work, and I am the single person with end-to-end access to a major product we've been building, because I asked to get to test it. The work I have done in a year can be shown in this series of numbers. &lt;/p&gt;

&lt;p&gt;For a particular team, when I first retrospected with them on testing a year ago, making a release took them 32 days, and they were expecting to see one every six months. We worked together, turned releases smaller, and got rehearsing through six releases, with the most recent one taking us 2 days from feature-complete commit to having the version running in production. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Featrgig56hhreh0ejy6k.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Featrgig56hhreh0ejy6k.PNG" alt="Tempo Metric"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The most recent system I applied frequent releases to is not one that makes mobile applications. But we made a lot of the same arguments I hear people on mobile application teams make: customers don't want frequent change; this works only on web applications; it is impossible; the overhead of releasing is too much.&lt;/p&gt;

&lt;p&gt;We have more work ahead of us to up the frequency, but I already see a team with improved collaboration, testing practice that can cope with the change that fits in their head, and aspirations to go further than where we are now, not just in the team but in the organization. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg63dce73djc2y3m8rmyj.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg63dce73djc2y3m8rmyj.PNG" alt="Deployments/Releases"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We don't really separate deployments and releases conceptually, and the core of the change is added frequency. There's old agile wisdom going around suggesting things get better with practice, so the things we avoid because they feel painful get better with the increased frequency and visibility. &lt;/p&gt;

&lt;p&gt;There are two things about releases that I think are quite magical. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The way release-making unifies a team of individuals and subgroups around a common goal. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The way requirements discussions turn into real customer feedback and we stop the uncertainty buildup. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Releases frame a lot of the work we do. Not tasks. Not features. Releases. Making value available for customers. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3v56qx6nu284q839bw1.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3v56qx6nu284q839bw1.PNG" alt="5x Continuous Releases"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We talked about my last year, which I did not intend to become a year of moving teams to frequent releases. While my last year included several teams on a similar journey, my last five places of work are, in hindsight, a repeat story with similarities but also differences. I have come to appreciate that this is not a recipe I apply at the places I work, but more like a guardrail within which to introduce experimentation. &lt;/p&gt;

&lt;p&gt;Experimentation allows us to take the pieces of future we are building from where they are, adapt them and try how they fit for us.  Experimentation is how we bring teams from the past to the future - and beyond. &lt;/p&gt;

&lt;p&gt;From my first team with frequent, weekly releases to thousands, the journey has taken me to monthly releases to millions, where a release is not about updating a server but personal computers and devices across the world. &lt;/p&gt;

&lt;p&gt;Every single one of the five has told me that what we did was impossible. Impossible is just something we have not yet figured out. And looking into the future that is already around us, we can pick up advice on the steps we could try to find our way around that impossible. &lt;/p&gt;

&lt;p&gt;I can't take the time today to share each individual journey with you. Instead, we look at some highlights that might prove useful on your way to making changes happen around you. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cqg5cjx3euzyux1z23w.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cqg5cjx3euzyux1z23w.PNG" alt="Airing out stale practices"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Being new to a team often, I have come to appreciate that a new person joining can be a disruptive event. The local culture is very infectious, and especially during trial periods, many of us, me included, are careful about how much we dare to rock the boat. &lt;/p&gt;

&lt;p&gt;Coming into teams who have collected years of practices that have worked for them, I can be quite a whirlwind. Suggestions like "no product owner", "no jira", "self-destructing product backlog", "no gatekeepers, pair over PR" and "everyone tests" are not easy ones to take in. And we usually already have a system in place that keeps us functioning. &lt;/p&gt;

&lt;p&gt;I remind myself that the way teams worked before I joined got the job done, and that changing a little all the time is better than revamping anyone's practices. Continuous releases and turning up the release frequency is an easier change that brings everyone on board, and it enables sneaking in other changes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6zqb6sp0nf3wgypra759.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6zqb6sp0nf3wgypra759.PNG" alt="Release frequency changes practices"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you move from a release every year to a release every month, you just can't keep all the old practices. As release frequency changes, you see testing looking different, shared by the whole team, and steps towards build and test automation creating a shared experience in the team. The architecture starts to turn into something everyone understands, and the scope of releases moves from systems to subsystems, to components, and even to files. The organization around the team shows a different kind of concern, changing the tone of conversation from project management to tracking delivered scope running in production. &lt;/p&gt;

&lt;p&gt;Most likely, code review and branching practices change. And testing definitely starts looking at change as files and components changing, when making risk assessments of what needs to be verified. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frv313r51jssuqcotomuh.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frv313r51jssuqcotomuh.PNG" alt="Indirect change"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While the change is about release frequency, it changes many other things too. Everything is interconnected. Getting reluctant people to join in releasing more frequently helps rewrite their perceptions of what is possible. For us people there is this idea of cognitive dissonance: we are uncomfortable when our beliefs and actions are not congruent. So making us take actions that don't match our beliefs is a more likely way of changing those beliefs than conversations about them. &lt;/p&gt;

&lt;p&gt;I did not know about cognitive dissonance until I was psychologically profiled for a leadership position I never accepted. The psychologist pointed out that what I was doing to make change happen had a name. &lt;/p&gt;

&lt;p&gt;We have ideas of change we want to see, but another change may create the space for the change we hoped for. Everything is connected. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqgfuavp7zy7969ma4nk.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqgfuavp7zy7969ma4nk.PNG" alt="Continuous releases without test automation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A more surprising lesson over these five transformations has been one about test automation. I used to believe that test automation was crucial to more frequent releases. But then I worked with an organization where we had no unit tests or test automation at any other level. We built a release pipeline without a single test, where we could promote our system at will at the press of a button. &lt;/p&gt;

&lt;p&gt;We might not have gotten to the super-fast releases, but we did release daily, sometimes multiple times a day. We sorted out risks by testing in branches and hiding new functionality until we felt ready to show it. &lt;/p&gt;

&lt;p&gt;Going through this on repeat, increasing release frequency sets a measurable goal of increasing our level of automation. But also, we can release a version without testing it, and sometimes, in some of these organizations, the tests we were running came very close to closing our eyes at release time - to no harm in production. I've worked with some pretty amazing groups of developers. &lt;/p&gt;

&lt;p&gt;But let's still emphasize this: build automation - reliable, repeatable, trustworthy, tested with every change - is non-negotiable. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzslkjjk5zy2ju6ypy5g4.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzslkjjk5zy2ju6ypy5g4.PNG" alt="It's a Feature"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This lesson of considering the features needed bit me again in the last week, so I'm reiterating it for myself just as much as sharing it with you. Moving to more frequent releases is more than just repeating the same thing faster. If you have something that at release time renders your customer's system less valuable for 24 hours, you can't do that daily. You can't even do that monthly. But accepting that isn't necessary. &lt;/p&gt;

&lt;p&gt;In our efforts to reach more frequent releases, we've introduced new product features: silent installs that never bring the system down, feature flag frameworks that allow us to hide changes until we want a group of changes revealed, upgrades that diff down to the level of files that need changing, and treating everything as code, thus being able to assume we can see the changes we make. &lt;/p&gt;
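
&lt;p&gt;At its simplest, a feature flag is just a guard around the new path. Here is a minimal sketch; the flag store and the function names are invented placeholders, and real frameworks add per-user targeting, gradual rollouts and remote configuration. &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A minimal sketch of a feature flag guard. The flag store and the
# render functions are hypothetical placeholders.
FLAGS = {"new_checkout": False}  # shipped dark, flipped on when ready


def render_checkout(user):
    if FLAGS["new_checkout"]:
        return render_new_checkout(user)
    return render_old_checkout(user)


def render_old_checkout(user):
    return f"old checkout for {user}"


def render_new_checkout(user):
    return f"new checkout for {user}"
&lt;/code&gt;&lt;/pre&gt;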

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fct6bq8pl8bo8mwk6whik.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fct6bq8pl8bo8mwk6whik.PNG" alt="Crossing Roles and Tasks"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Moving through this change so many times has added to my understanding of the importance of job crafting - the idea that we can take a job we're given and make it the job we want. There is almost always a little more flexibility than we give ourselves credit for, and conversations over cups of (virtual) coffee can do wonders. Building those bridges and relationships across roles helps us become more valuable to our organizations as we learn to connect things in insightful ways. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf1ewtgeq8nxq0zvo7e6.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf1ewtgeq8nxq0zvo7e6.PNG" alt="Software Must Change"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We've talked about how frequent releases change testing and software development, but the change we see in our future is larger than our development teams. Our customers increasingly expect to see change, choosing the applications they use and download based not only on features and quality, but also on the belief that software that does not change indicates a business that is not doing well - and software that is dying. &lt;/p&gt;

&lt;p&gt;Looking at my phone, where I don't turn on automatic updates, my feelings about large numbers of updates are three-fold: the annoyance of always having to update; the blind trust in updates working because they did almost every other time I got to rehearse them, tinted with a tester's fear of new problems; and the appreciation of which apps are still worth my time. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhzv61gb2ej0u0cxumi2.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhzv61gb2ej0u0cxumi2.PNG" alt="Summarize to Milk"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, I am inviting appreciation of how important change is by revealing how I find myself talking about milk at the office. &lt;/p&gt;

&lt;p&gt;Yes, milk. The white substance of which we in Finland consume one of the highest amounts per capita in the world. The 130 liters an average Finn consumes per year isn't the point in itself, but it may contribute to the fact that I have become painfully aware of how quickly milk goes bad.&lt;/p&gt;

&lt;p&gt;I find myself talking about milk with regards to test results when my product management colleagues want to reuse results from six months ago (the last release) in making a new release with limited testing now. Test results are like milk: if we release infrequently, we don't end up replenishing them regularly, and the old results are like old milk - we shouldn't trust them. So let's release frequently and keep our test results worthwhile. &lt;/p&gt;

&lt;p&gt;I find myself telling my colleagues the same about practices. In places where I have worked, we tend to learn from our past mistakes, and for everything that went significantly wrong, we add a new rule or practice. Sometimes we add new practices to the point where they prevent movement, and recently I have been having these conversations about pull requests, our go-to practice for protecting quality in the product. We really should have a best-before date printed on every single practice, like milk has. We shouldn't automatically assume that all the great agreements from times before I even joined the team are still valid today. Throwing out the old makes room for the new and fresh. So let's create space that moves us from our local optimum towards finding something that works even better. &lt;/p&gt;

&lt;p&gt;I also find myself thinking back to old experiences of the importance of a fresher second batch, enabling us to fix something the previous batch introduced. When we invited an older lady from the next apartment over for coffee in our student apartment, and the first milk poured into her coffee made a plopping sound of sourness, it was great having a new fresh cup of coffee and recently acquired milk to fix the mistake that had just happened. So let's make sure we have the possibility to fix errors without leaving our customers waiting for long. That requires new releases. &lt;/p&gt;

&lt;p&gt;I also often go back to telling business people how software is not like milk. When you go buy milk, it is cheaper to buy as much of it as you think you can consume - larger portions are relatively cheaper. Software is exactly the opposite. Smaller portions include less risk and enable control of schedules and scope. We want to keep asking how we could buy even smaller batches to improve our development efforts. So let's make sure we purchase and deliver software in smaller cartons, with features we can hold in our heads and supporting structures, such as test automation, that help us keep the old promises. &lt;/p&gt;

&lt;p&gt;I invite you to find the stories you repeat, and share them further. The future is already here, while not evenly divided. The stories of others inspire us on our journeys, and give ideas of things we could try. We're on a journey to future of our industry together. I believe the future is built in small incremental changes and social settings - together. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wss5a5hopq03a2v1s43.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wss5a5hopq03a2v1s43.PNG" alt="Get in Touch"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I enjoy connecting with people, and love a good conversation. You may notice I like my work. I also like talking about themes related to my work. I started speaking to get people to talk to me. I talk back, and I invite you all on a journey to figure out how we explore our way into a better place for software creators and consumers.&lt;/p&gt;

</description>
      <category>continuousdelivery</category>
      <category>release</category>
      <category>improvement</category>
    </item>
    <item>
      <title>Exploring Pipelines</title>
      <dc:creator>Maaret Pyhäjärvi</dc:creator>
      <pubDate>Fri, 02 Apr 2021 19:00:19 +0000</pubDate>
      <link>https://dev.to/maaretp/exploring-pipelines-32og</link>
      <guid>https://dev.to/maaretp/exploring-pipelines-32og</guid>
      <description>&lt;p&gt;Exploring Pipelines is about putting together two things that clicked for me on one day at work. As I was digging my way through a particularly annoying and unnecessarily complicated pipeline on Jenkins jobs with various rules of when to run and where to start, I started recognizing that the efforts I was going through had a strong resemblance to exploratory testing. Exploratory testing being my favorite activity, I came to the idea that perhaps this connection could be useful to others as well. Exploratory testing is not only a part of pipelines but is an approach that drives creating more impactful pipelines that optimize the value. &lt;/p&gt;

&lt;p&gt;In order to make the connection, we need to talk about both things: exploratory testing and pipelines. Finally, we’ll talk about a few different organizations and my experiences there to show that the theory of today may be a practice of tomorrow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyryc2p55herefv48xxc.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyryc2p55herefv48xxc.PNG" alt="Defining Exploratory Testing - inseparable design and execution for learning"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Exploratory testing has been around since 1984, when Cem Kaner first coined the term. Even if we like to think the same words mean the same thing to different people, that is particularly not the case with this term.&lt;/p&gt;

&lt;p&gt;What it means for me is that we intertwine test design and test execution, avoiding splitting those two into two different people's heads, so that whoever is doing the testing gets to learn from executing a test and design differently for the next test. It's about that silent moment inside your head where you ask: what did I just learn, and how does it make my next action different?&lt;/p&gt;

&lt;p&gt;The pace in how quickly we move between the two activities is controlled by whoever is doing the testing. We can take as much time between actions as we need. Or we can intertwine them so that in some moments we can’t tell one from the other, while other times we are clearly spending all of our energy just on one.&lt;/p&gt;

&lt;p&gt;To emphasize learning, the in-head connection between the activities makes a difference. Whoever does exploratory testing needs agency: the possibility to make decisions that allow what they learn to influence their choices, instead of following their own plan, let alone someone else's plan. We do this to optimize results and investment. For the time we spend exploratory testing, we want our results to become as impactful as they can.&lt;/p&gt;

&lt;p&gt;Remember, the core to this is agency that enables learning. And for that, we can't split this work between two different people; when we need scale, we seek different dimensions along which to split exploratory testing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fev94m1vgnjuqyrlginrv.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fev94m1vgnjuqyrlginrv.PNG" alt="Refining execution - inseparable manual and automation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have established that for exploratory testing to be exploratory, we can't split design and execution, we need to talk a bit about test automation in exploratory testing. It comes to my attention at work, with various organizations, that people somehow think that even if we can't split design and execution, we could split along the dimension of manual vs. automated in execution.&lt;/p&gt;

&lt;p&gt;Let me just talk you through an example from the office in just the last few weeks. There was a feature my team was building around registering devices into a cloud with a certificate. While it was frustratingly open-ended, not being able to see the task close for days, I'm very happy with the way the feature was tested. Usually every night the tester would leave their tests running, to come back to a day of analyzing the results. There was a part of execution that was automated: generating files on disk. There was a part of execution that was manual: finding trends in those files. And there was definitely the exploratory testing loop of designing different variables for the next night, for a deeper understanding of the ways this feature could fail. Over a few weeks on that one testing task, the tester ran thousands of registrations, covering many variables that no one could have listed at the start of the task.&lt;/p&gt;
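
&lt;p&gt;The shape of that work could look roughly like the hypothetical sketch below: automated execution writes one result file per registration attempt overnight, while the trend analysis stays with the human the next morning. The register_device call and the variants are invented placeholders. &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A hypothetical sketch of the overnight loop described above.
# register_device and the variants are invented placeholders; the real
# insight came from a human analysing the result files for trends.
import json
import time
from pathlib import Path


def register_device(variant):
    # Placeholder for the real registration call against the cloud.
    return {"variant": variant, "status": "ok"}


def run_overnight(variants, results_dir="results"):
    Path(results_dir).mkdir(exist_ok=True)
    for i, variant in enumerate(variants * 1000):  # thousands of runs
        result = register_device(variant)
        result["timestamp"] = time.time()
        out = Path(results_dir) / f"registration_{i}.json"
        out.write_text(json.dumps(result))
&lt;/code&gt;&lt;/pre&gt;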

&lt;p&gt;So you see, it makes very little sense to try to define exploratory testing as manual testing in the execution part. Even if we think of our executed automated tests as regression tests, whenever they fail, their execution has a manual element of going in and figuring out what needs to be different.&lt;/p&gt;

&lt;p&gt;If you are looking at exploratory testing, with design and execution strongly linked, you can't separate out automation in the execution. Even the creation of automation is a manual task. Or, as I learned last week, a humanual one. I really like the ring of that term from Erika Chestnut.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmyv457inftbi1gsykiw.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmyv457inftbi1gsykiw.PNG" alt="Refining design - inseparable manual and automation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Manual vs. automated, however, isn't part of only the execution. We are also working to figure out - and academia has been for decades - how to automate parts of test design. If we split design considerations into coming up with an idea of what to do, and deciding how to tell if the result is right, we can already give examples of activities that are both manual and automated within design.&lt;/p&gt;

&lt;p&gt;Think about approval testing. Approval testing is the idea where you design your actions and most likely automate them, but you decide on your results through the humanual element of recognizing what is acceptable, and approving it as the future baseline.&lt;/p&gt;
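
&lt;p&gt;A minimal sketch of that loop, assuming no specific library - the received/approved file naming below is just a common convention: automation produces the received output and compares it to the approved baseline, and a human approves changes. &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A minimal sketch of approval testing; no specific library is assumed,
# the received/approved file naming is just a common convention.
from pathlib import Path


def verify(name, received):
    approved_file = Path(f"{name}.approved.txt")
    approved = approved_file.read_text() if approved_file.exists() else None
    if received != approved:
        # The humanual step: inspect the received file and, if acceptable,
        # rename it to .approved.txt as the new baseline.
        Path(f"{name}.received.txt").write_text(received)
        raise AssertionError(f"{name}: received output differs from baseline")
&lt;/code&gt;&lt;/pre&gt;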

&lt;p&gt;Think about model-based testing. We design our tests with images, boxes and arrows that we map into pieces of code. Our model defines where we should end up. Our code moves us there, and runs checks on where we are. We can argue that we are automating one aspect of designing the tests, but not all of it.&lt;/p&gt;
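
&lt;p&gt;A minimal sketch of that split: the transition table below is the designed part (the boxes and arrows), and the walker is the automated part. The states and actions are invented for illustration. &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A minimal sketch of model-based testing. States, actions and checks
# are invented for illustration; the model encodes the design, the
# walker automates the execution.
import random

MODEL = {
    "logged_out": {"log_in": "logged_in"},
    "logged_in": {"log_out": "logged_out", "open_settings": "settings"},
    "settings": {"close": "logged_in"},
}


def walk(steps=20, seed=0):
    rng = random.Random(seed)
    state = "logged_out"
    for _ in range(steps):
        action, next_state = rng.choice(sorted(MODEL[state].items()))
        # Here a real harness would drive the application with 'action'
        # and check the observable state against 'next_state'.
        assert next_state in MODEL, f"model leads to unknown state: {next_state}"
        state = next_state
&lt;/code&gt;&lt;/pre&gt;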

&lt;p&gt;You may be seeing a pattern here: if you can't split design and execution in exploratory testing, and you can't split manual and automated in either of those two, perhaps what we mean by exploratory testing is that manual vs. automated has the same kind of relationship as design vs. execution?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dg7000au8mw73yp06wl.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dg7000au8mw73yp06wl.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There indeed is. If you are separating the manual and the automated, you are not living true to what exploratory testing frames testing to be. You are removing the agency of the person making those design and execution choices to do the best testing they could. You are not enabling learning between activities. And it shows up in your results.&lt;/p&gt;

&lt;p&gt;I call this idea contemporary exploratory testing. I have come to it through experiences with colleagues who automate but design their tests poorly, and are not learning to optimize their value - just as much as through colleagues who refuse to automate, emphasizing manual testing, and replace the time they could use automating with arguing with management about how that is not a reasonable expectation.&lt;/p&gt;

&lt;p&gt;I believe we need to frame our testing differently. We need to ensure that whoever is doing the testing has agency, and that the agency shows up as learning while they perform whatever design, execution, manual and automated activities they need to get their work done for impactful results.&lt;/p&gt;

&lt;p&gt;Not all testing is exploratory testing. Look for the broken links in activities that inherently belong together.&lt;/p&gt;

&lt;p&gt;We have been through a bit of a discussion on what exploratory testing is, and we have not yet talked about pipelines, but let’s just say this now: pipelines are automation. Understanding that automation is inherent in exploratory testing is important.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtu8t69uvokzjqpox9s6.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtu8t69uvokzjqpox9s6.PNG" alt="All paths lead to learning more"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I may have intimidated some of you with images of having to take on work beyond your scope today at the office. While I talk about agency, and about sustaining the connection between activities that need learning in between them to optimize their value for whatever investment we make, I am also well aware that the scale at which we create software does not fit in a single person’s head. We need teams that share ownership. We need to accept that in an industry that doubles in size every five years, half of us have less than five years of experience.&lt;/p&gt;

&lt;p&gt;Looking back at my 25 years so far, I started off learning the parts where programming was not in the center. I conveniently allowed myself to forget that I started programming as a teenager, and that the university degree I studied for is in computer science. I embraced the problem domains, and learned about finance, legal, psychology, sociology, agile, teams, and communication. I grew as a tester, with solid critical thinking and observation skills, and a researcher’s skillset for designing experiments that optimize learning. To do testing, I often needed to set up my own environments, build my own components with the intended changes, and operate complex technical environments both during development and during production, just to get my job done. I changed jobs to figure out what testing looks like in different problem domains and under different business constraints.&lt;/p&gt;

&lt;p&gt;I rediscovered my polyglot programmer identity only in the last five years. And while that identity was hidden from me - by myself and no one else - I would look at problems prioritizing what I could do with the skills I had and appreciated. Rediscovering that I code, and that I can work on code even when I don’t write it all alone, opened new dimensions for optimizing the time I was investing into being good at my work.&lt;/p&gt;

&lt;p&gt;In particular, I learned that moving around picking up skills - even being an engineering manager and boss of a group of developers - all plays into the skills that enable me now, as a principal test engineer, to do the best possible testing I can with the teams I share the work with.&lt;/p&gt;

&lt;p&gt;I remind you all of the power of “not yet”. You don’t know something - yet. And every single one of us is already useful while on a journey to grow our impact.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi99zrbpzmbu1352tbtiw.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi99zrbpzmbu1352tbtiw.PNG" alt="Content warning: animals as food"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This all sums up to a vision that I want to share with a metaphor. The metaphor includes food and animal products, and if that particular area is a source of discomfort, I suggest skipping past the next slide.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7avlzeo7rubba0cyve5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7avlzeo7rubba0cyve5.PNG" alt="Comparing beef for value"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What I look forward to with contemporary exploratory testing is impactful results, and higher value.&lt;/p&gt;

&lt;p&gt;Sometimes people say “testing” as if all testing was exploratory, but it really is not. There are varying degrees of exploratory.&lt;/p&gt;

&lt;p&gt;Like if you look at two pieces of beef. One is regular American beef, courtesy of the internet, and the other is Japanese wagyu beef. If you enjoy a steak on a grill, you would probably enjoy the cheaper version too. The fat that is visible is part of the taste we expect, and we don’t want to take it out. Like testing. But in the latter, the fat is more evenly distributed throughout the beef. This marbling makes it more valuable. You could no longer try to cut it out; it is now integrated. And it makes the entire result more valuable.&lt;/p&gt;

&lt;p&gt;There’s room in the world for both. But with contemporary exploratory testing, I look at moving from the lower value category to the higher value category.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1ftu66l50dkcr02tgne.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1ftu66l50dkcr02tgne.PNG" alt="Test automation pyramid - where is exploratory testing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The same idea with a less controversial image is how we place exploratory testing on a test automation pyramid. The sprinkle-on-top kind improves the result and is definitely a better option than no sprinkle at all. But the kind I share a vision of, contemporary exploratory testing, cuts through all layers and drives learning in all testing we do.&lt;/p&gt;

&lt;p&gt;Exploratory testing is more valuable when we allow it its intended place as an approach to testing, instead of turning it into a technique applied on top of everything else. It is exploratory testing in both formats, but the results and practices we would observe are essentially different.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05n3vyup813ur8nxwat6.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05n3vyup813ur8nxwat6.PNG" alt="Defining pipeline as automation connecting automation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Understanding how I view the relationship of exploratory testing and test automation, as two sides of the very same coin, leads us to discussing exploring pipelines.&lt;/p&gt;

&lt;p&gt;The way I define pipelines is that they are automation that connects other automation. A lot of the time this pipeline automation is more important than test automation, and yet too much of our conversation on automation remains on the side of tests.&lt;/p&gt;

&lt;p&gt;Let me illustrate this a little more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjvtswz757p0xxp7cybv.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjvtswz757p0xxp7cybv.PNG" alt="Exploring to create test code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In building a course where one of the many constraints we give exploratory testing is documenting with test automation, I watch people work through creating what is sometimes the first automated test of their lives. Watching this unfold makes it very clear that creating automation is an exploratory activity - we are really exploring all the way.&lt;/p&gt;

&lt;p&gt;From writing a single line that opens a browser on the page where our test target is, we learn with every step: designing something, executing it, and allowing our learning to change what we do next.&lt;/p&gt;

&lt;p&gt;A usual progression goes from a single line, to a few, to seeing that the test can fail and the tools work, to having our first test scenario written down in a format a computer can execute. From the single test, we move to parameters and test templates, quickly turning it into a way of documenting multiple tests we can run through the automation tooling. &lt;/p&gt;

&lt;p&gt;Adding enough, we find a problem and see tests fail for real reasons. We decide what to do with the failing test: leave it failing, make it pass, or turn it off. And we complete our activity by covering a spec with automation, adding whatever values we design, and running all the tests in multiple browsers in a timeframe in which we could not cover the same scope manually.&lt;/p&gt;
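
&lt;p&gt;Roughly where that progression lands, as a sketch: a parametrized browser test in Python with pytest and Playwright. The target page, selectors and values are invented for illustration.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# pip install pytest playwright  (and: playwright install)
import pytest
from playwright.sync_api import sync_playwright

# Values found through exploration, not listed up front.
CASES = [
    ("1", "2", "3"),
    ("0", "0", "0"),
    ("-1", "1", "0"),
]


@pytest.mark.parametrize("a,b,expected", CASES)
def test_addition(a, b, expected):
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.org/calculator")  # made-up test target
        page.fill("#first", a)   # selectors are assumptions for illustration
        page.fill("#second", b)
        page.click("#add")
        assert page.inner_text("#result") == expected
        browser.close()
&lt;/code&gt;&lt;/pre&gt;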

&lt;p&gt;This is test automation, but it is not in a pipeline. The human remains the one starting the run of the automation, on demand. And if we left the red failing test in, this set would not serve us well in a pipeline: it would keep reminding us that it is still broken until it gets fixed, potentially hiding other issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqlpltab7ntbl3n3o1k8.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqlpltab7ntbl3n3o1k8.PNG" alt="Robot test with templates example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we need to move from a piece of code that does testing for us when we ask, to a piece of code that does testing for us on rules we have defined.&lt;/p&gt;

&lt;p&gt;Looking at the example I use as an illustration, the tool in question is Robot Framework, and I have a split relationship with it. On one hand, I teach it to new people who have never automated. On the other hand, I propose never using it as soon as you’re ready for a general-purpose programming language.&lt;br&gt;
In the picture, test 6 fails because the program does not work as its specification says it should.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0dnlqrwh5nistwl4l6e.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0dnlqrwh5nistwl4l6e.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Dragging our code along into the pipelines, the other end of this is traffic lights - understandable to a human, a team, and a passer-by manager - that report, radiate, our status in close to real time. When the code runs in a pipeline, the results show what works now.&lt;/p&gt;

&lt;p&gt;Back when pipelines were new, a lot of people set up all kinds of fancy ways of alerting on problems. I’ve seen LED displays reporting “N days since last accident” to encourage the always blue/green state, lava lamps slowly warming up until the name of the guilty person is revealed, and sound effects calling for everyone’s attention.&lt;/p&gt;

&lt;p&gt;In recent years, we have become more boring. But we put a lot of thinking, particularly exploratory thinking, into which boxes would communicate something that truly helps us. Getting from having any boxes at all to boxes that are meaningful for the state of the team is usually a path you explore through: looking for what would make the display better than it is now, while respecting that it is already doing relevant work for you.&lt;/p&gt;

&lt;p&gt;My previous organization was into blue (with enough color-blind people around), and in my current organization I am getting used to green again, while working towards less colorful displays in general. That is a culture change we often need to make our way through in pipeline building.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qed6f00a1c3a8ja1r7b.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qed6f00a1c3a8ja1r7b.PNG" alt="Connecting automation with automation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From that single script, there is glue automation in the pipeline that makes the results continuously available. Remember, we defined pipelines as automation connecting other automation. What, then, are the key aspects to explore for this type of automation?&lt;/p&gt;

&lt;p&gt;Looking at the learning I was going through when exploring our pipeline, I came to four major realizations I want to share today.&lt;/p&gt;

&lt;p&gt;First, we have our choice of when to run something automatically. Coming to an organization heavily invested in the concept of a nightly build, I can appreciate the daily rhythm, even if I have grown addicted to fast feedback. But when I have nightly, I can start turning that into twice a day, once an hour, whatever schedule the resources - and I do not mean people but computers doing the work - allow us to work with. Doing things on schedule gives us a known, repeatable point of measurement, and we resort to it particularly when we have external dependencies we can’t watch over but want to keep an eye on. On trigger works when we can follow a rule of something happening first, making it relevant to refresh our results automatically. And no matter how we end up scheduling, we can always also just say: now I want to know - on demand.&lt;/p&gt;
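
&lt;p&gt;As a toy model of those three choices - illustrative names and expressions, not any particular CI system’s syntax:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from dataclasses import dataclass


@dataclass
class Trigger:
    kind: str    # "schedule", "event", or "on-demand"
    detail: str  # cron expression, upstream event name, or "manual"


# The same test run, three ways of deciding when it happens:
TRIGGERS = [
    Trigger("schedule", "0 2 * * *"),        # nightly: a repeatable measurement point
    Trigger("schedule", "0 * * * *"),        # hourly, if the computers allow it
    Trigger("event", "artifact-published"),  # something upstream happened first
    Trigger("on-demand", "manual"),          # "now I want to know"
]

for t in TRIGGERS:
    print(t.kind, "/", t.detail)
&lt;/code&gt;&lt;/pre&gt;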

&lt;p&gt;Second, we work with the concept of upstream and downstream. Thinking of things as a flow of actions: something happens before, and something can only happen after this has completed. Externalizing that logic out of your tests and into your pipelines - while making the pipelines code too - is essential. There is a rich source of exploring right here: what really needs to be a dependency, and how changing the dependencies will change the experience you live with.&lt;/p&gt;
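
&lt;p&gt;A minimal sketch of that externalized ordering logic, using Python’s standard-library graphlib; the stage names are invented:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from graphlib import TopologicalSorter  # standard library since Python 3.9

# Each stage maps to the stages that must complete before it. The names are
# invented; the point is that ordering lives in the pipeline, not in the tests.
PIPELINE = {
    "unit_tests": {"build"},
    "package": {"unit_tests"},
    "deploy_test_env": {"package"},
    "api_tests": {"deploy_test_env"},
    "ui_tests": {"deploy_test_env"},
    "release_candidate": {"api_tests", "ui_tests"},
}

# static_order() yields one valid upstream-to-downstream execution order.
print(list(TopologicalSorter(PIPELINE).static_order()))
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Changing one dependency in a structure like this is exactly the kind of what-if question exploring pipelines invites.&lt;/p&gt;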

&lt;p&gt;Third, I can’t deal with things that are not pictures. When exploring pipelines, I find myself drawing boxes and arrows. I have a clear preference for tools that make it easy to see visually what the flow is. The boxes also allow me to point at something and ask: what if this were different?&lt;/p&gt;

&lt;p&gt;Fourth, the role of pipelines in the repeatability of building our software cannot be overemphasized. Sometimes you test the result. Sometimes you test the thing that creates the result. And when the result gets refreshed and recreated multiple times a day, testing the thing that creates the result is the better choice. In recent weeks, we have built pipelines to build a three-dimensional matrix of same-but-different in 3x5x3 dimensions. If I had to test all those 45 end results, I could not cope, even with test automation around. Pipelines visualize the difference, and create a baseline that is repeatable.&lt;/p&gt;
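
&lt;p&gt;The arithmetic of that matrix, with invented dimension names standing in for the product-specific ones:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from itertools import product

# Invented dimension names standing in for the product-specific ones.
platforms = ["linux", "windows", "mac"]                    # 3
configs = ["minimal", "default", "full", "debug", "perf"]  # 5
channels = ["dev", "beta", "stable"]                       # 3

variants = list(product(platforms, configs, channels))
assert len(variants) == 45  # same-but-different results from one pipeline definition
&lt;/code&gt;&lt;/pre&gt;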

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fx0jxp8yupkl1kthqpe.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fx0jxp8yupkl1kthqpe.PNG" alt="Pipeline, the card game for learning"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get started with playful exploration of pipelines, I recommend trying out the Pipeline card game created by Emily Bache. Knowing that Emily created the game inspired by a simulation Abby Bangser ran at European Testing Conference - a conference I used to organize - makes me even happier this game exists.&lt;br&gt;
The idea of the game is quite simple and great for educating teams. You build a pipeline from committing code to deployment in production. You are given a rich set of options, asked to design your pipeline, and then to calculate the cost (in wait time) of running your pipeline for different kinds of organizational risk profiles.&lt;/p&gt;
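
&lt;p&gt;In the spirit of the game’s scoring - my own illustrative numbers and rules, not the game’s actual ones - wait time adds up stage by stage, with parallel steps costing only as much as the slowest one:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Toy cost model: stages run in sequence, steps inside a stage in parallel,
# so each stage costs as much as its slowest step. Numbers are made up.
PIPELINE = [
    {"compile": 5, "lint": 2},
    {"unit_tests": 10},
    {"api_tests": 15, "ui_tests": 25},
    {"deploy": 5},
]

wait_time = sum(max(stage.values()) for stage in PIPELINE)
print(f"lead time through the pipeline: {wait_time} min")  # 45 min
&lt;/code&gt;&lt;/pre&gt;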

&lt;p&gt;Caring about the cost in wait time ties to the Accelerate metric of lead time, which research has linked with overall company success.&lt;/p&gt;

&lt;p&gt;You can choose the sprinkle-on-top exploratory testing, but it costs you time in the pipeline. I invite you to think about ways of integrating it everywhere, and remind you that some of the exploratory testing we do happens in production, looking for improvements we could propose.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffhpzwczs1ve2cb1jozv.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffhpzwczs1ve2cb1jozv.PNG" alt="Experiences"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have been through organizations, changing jobs. The last three organizations have each been very different with regards to pipelines.&lt;/p&gt;

&lt;p&gt;The first one, two jobs ago, is one where we introduced a build pipeline but absolutely no automated tests. With repeatable pipelines we moved into a process where we could better control quality at the start of work, on both value and risk, and moved from releases happening only a few times a year to releases happening daily. The secret to testing them is to keep features hidden until you are ready to reveal them. Continuous delivery without automated tests is possible, even a worthwhile goal.&lt;/p&gt;

&lt;p&gt;The second one, my previous job, had grown into a pipeline-driven development organization. You could always follow a pipeline. You could trust in making changes in a complex system with a lot of components, and in the pipeline delivering your changes to the next release candidate. The pipelines were an organizational memory that enabled working on things built before us. And they captured more than just application code and test code - they also captured infrastructure as code.&lt;/p&gt;

&lt;p&gt;The third one, my current job, has a good baseline of pipelines, but having experienced the previous one, I have many wishes for advancement. This motivates me to rebuild the pipelines of my dreams, learning to change my dreams with my team, and to find again the future I thought I once had.&lt;/p&gt;

&lt;p&gt;I believe that the future is already here in our industry; it is just not at all equally divided. Appreciating what we have today, knowing we can have something else tomorrow, is a foundational idea.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmg6ifacq1ewzlyqy1gh2.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmg6ifacq1ewzlyqy1gh2.PNG" alt="Automation strategy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For me, the strategy for automating starts with the idea of incremental, incomplete learning. What we have is a good baseline even when it is not that good.&lt;/p&gt;

&lt;p&gt;Small flows of value, always for better, create big changes over time.&lt;/p&gt;

&lt;p&gt;Even those of us who think we know how it should be done would do well to allow the team to learn together, to get to something the team can run with even when you are gone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvbisptr6hwtzeeeejxr.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvbisptr6hwtzeeeejxr.PNG" alt="DevOps turns things to code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, a departing thought to leave you with. Pipelines are at the core of how we automate automation. In this process, we turn things into code. We put it in version control. And we call that shift - being able to see what we have as code, or configuration - DevOps. And that changes testing.&lt;/p&gt;

&lt;p&gt;We move from a black box to more hints on what has changed. We move from a lack of control to some control. And that ability to have a little more control over our systems is what enables the increased complexity and speed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfyr53rozx958b0hur8r.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfyr53rozx958b0hur8r.PNG" alt="Get in touch"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I enjoy connecting with people, and love a good conversation. You may notice I like my work. I also like talking about themes related to my work. I started speaking to get people to talk to me. I talk back, and I invite you all on a journey to figure out how we explore our way into a better place for software creators and consumers.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>exploratorytesting</category>
      <category>automation</category>
      <category>pipelines</category>
    </item>
  </channel>
</rss>
