<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ajditto</title>
    <description>The latest articles on DEV Community by ajditto (@ditto).</description>
    <link>https://dev.to/ditto</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1013539%2F56dc9b9e-a036-42e1-a253-27266aa90a1e.jpeg</url>
      <title>DEV Community: ajditto</title>
      <link>https://dev.to/ditto</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ditto"/>
    <language>en</language>
    <item>
      <title>The power of API testing</title>
      <dc:creator>ajditto</dc:creator>
      <pubDate>Tue, 14 Mar 2023 19:30:41 +0000</pubDate>
      <link>https://dev.to/ditto/the-power-of-api-testing-1321</link>
      <guid>https://dev.to/ditto/the-power-of-api-testing-1321</guid>
      <description>&lt;p&gt;In the world of software development, APIs have become a crucial part of modern application architecture. They allow different software components to communicate with each other and exchange data. However, with the increasing complexity of applications and the growing number of APIs, it has become challenging to ensure that these APIs are working correctly. That's where automated API testing comes in.&lt;/p&gt;

&lt;p&gt;Automated API testing is the process of using software tools to test APIs automatically. This means that the testing is performed by a computer, without the need for human intervention. Automated API testing has several virtues that make it an essential part of software development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Speed&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Automated API testing is much faster than manual testing. When testing manually, test cases have to be written out, then executed. This can be a time-consuming and error-prone process, especially when your application has a lot of APIs. Automated API testing, on the other hand, can execute hundreds or even thousands of test cases in a matter of minutes, without human intervention.&lt;/p&gt;
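
&lt;p&gt;To make that concrete, here is a minimal sketch of an automated API test in Python with pytest and requests. The host, endpoint, and response fields are placeholders for illustration, not a real service:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import requests

BASE_URL = "https://api.example.com"  # placeholder host

def test_get_user_returns_expected_shape():
    # One check a machine can repeat thousands of times
    response = requests.get(f"{BASE_URL}/users/42", timeout=5)
    assert response.status_code == 200
    body = response.json()
    assert "id" in body
    assert "email" in body

def test_unknown_user_returns_404():
    # Error paths deserve the same automated coverage
    response = requests.get(f"{BASE_URL}/users/does-not-exist", timeout=5)
    assert response.status_code == 404
&lt;/code&gt;&lt;/pre&gt;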

&lt;p&gt;&lt;strong&gt;Reliability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Automated API testing is more reliable than manual testing. Since human error is eliminated, automated testing ensures that the test results are consistent and accurate. This is important, especially when working on critical applications where even small errors can cause significant problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Catching issues&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This approach enables developers to catch bugs early in the development cycle. This means that developers can fix problems before they become major issues, saving time and resources in the long run. Additionally, automated testing ensures that your application is tested thoroughly, allowing developers to release new code to production with confidence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;APIs are expected to be stable. Most developers and companies understand the need to avoid introducing breaking changes to their APIs, so a layer of testing that can always be relied upon brings peace of mind to an ever more complex application ecosystem. When failures do happen, there is no need to question the tests, because the tests encode the contract the API has promised not to break.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost Reduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lastly, automated API testing reduces the long-term cost of testing. Since it is faster and more reliable than manual testing, it cuts the time and resources needed to test an application. This frees quality assurance professionals to focus on other critical aspects of quality, such as improving the user experience or improving quality metrics.&lt;/p&gt;

&lt;p&gt;In conclusion, automated API testing is an essential part of software development. As APIs continue to play a vital role in modern application architecture, automated API testing will become even more critical in ensuring that applications are tested thoroughly and reliably.&lt;/p&gt;

</description>
      <category>api</category>
      <category>testing</category>
    </item>
    <item>
      <title>Pitfalls of QA automation</title>
      <dc:creator>ajditto</dc:creator>
      <pubDate>Tue, 07 Mar 2023 16:20:54 +0000</pubDate>
      <link>https://dev.to/ditto/pitfalls-of-qa-automation-4g3b</link>
      <guid>https://dev.to/ditto/pitfalls-of-qa-automation-4g3b</guid>
      <description>&lt;p&gt;It’s time to talk about qa automation. At this point it’s more than an industry buzzword, it’s something that every company insists they need. At my level of experience nearly every professional connection I’ve made has talked to me about automation, whether it’s my efforts in automation, what my current company’s focus on automated qa is, or just which test frameworks I’m familiar with. &lt;/p&gt;

&lt;p&gt;Yet even with all of this buzz around qa automation and the promises it makes, I’ve seen very few companies that have found a way to &lt;em&gt;rely&lt;/em&gt; on their automated tests, and that’s if they’ve even managed to write any automated tests.&lt;/p&gt;

&lt;p&gt;Why?&lt;/p&gt;

&lt;p&gt;The ideas behind automated testing are great, and when it works it’s amazing. More often than not, however, I’ve experienced companies that have big dreams and hopes for automation, but fail when it comes time to execute on the strategy and shoulder the cost of actually automating their testing. Here are some common pitfalls I’ve seen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It is not a silver bullet&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When first introducing automated testing, the conversation most often starts with discussion of how great it would be to automate boring, expensive regression testing. It’s easy to understand why: the promise of automated testing is that it can do the boring stuff quickly and with fewer errors, and since manual regression testing is expensive and boring, it would be a win for everyone. The problems start to pile up fast, though, when actually trying to implement this magical fix-all. &lt;/p&gt;

&lt;p&gt;Right off the bat there are some big expenses that come into play: Do you hire a senior qa automation engineer who can hit the ground running? That’s a large added expense, but experience usually is. Or do you push your existing team to learn what they need for automation? If you do things this way, you’re going to see a major drop in output from your team while they get going. On top of that, learning by doing is a path paved with mistakes, and while that’s not a bad thing, it is, again, going to be expensive. &lt;/p&gt;

&lt;p&gt;Then, once you’ve chosen a way forward, tests need to be written, reviewed, and run. Writing is a thing that pretty much everyone understands, so we’ll skip that for now. &lt;/p&gt;

&lt;p&gt;Reviewing tests is a cost that’s less often considered. As a friend of mine likes to say: “who tests the tests? Who tests the tests that test the tests?” This is, unfortunately, only half as silly as it sounds. Tests need to be accurate to the desired result, which sounds obvious until you find out that a test is failing because it relies on an api call that isn’t supported (this is not a made-up scenario; I’ve seen it happen).&lt;/p&gt;

&lt;p&gt;Then tests have to run. Again, it may seem obvious, but I’ve personally experienced writing automated tests only to be told that there is no possible way to get them running in the current pipeline, and doing so would require an infrastructure re-write. &lt;/p&gt;

&lt;p&gt;So no, automated testing is not a silver bullet, it’s an expensive endeavor. One that is usually underestimated to the detriment of the team.&lt;/p&gt;

&lt;p&gt;Even if all of that gets properly accounted for, another common error is starting in the wrong place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ignoring the testing pyramid&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For those unfamiliar with the automation testing pyramid, the idea is that, like a pyramid, you get smaller as you go up. In the testing pyramid the base (largest portion) is made up of unit testing. Moving up, the next largest section is integration tests, ending with UI or end to end testing as the smallest section.&lt;/p&gt;

&lt;p&gt;Following the logic of the pyramid, it would seem folly to invest in UI testing until a solid base of unit tests and integration tests has been built. Unfortunately, many companies that decide to automate their quality make the mistake of jumping with both feet into the UI test automation pool without building a foundation to stand on.&lt;/p&gt;

&lt;p&gt;It’s an understandable mistake to make. UI tests are flashy, and can be shown off at demo day. Modern UI testing frameworks are relatively easy to pick up and learn, meaning manual qa professionals can get an easier introduction to automation and its potential power. What every engineer will quickly learn, though, is that those easy-to-write tests are flaky, and fail often. Which leads to the next point.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tests as a standard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To illustrate my point, allow me to share another experience. When working as an automated qa engineer, my team realized that one of our api endpoints needed a total refactor. It was an endpoint that handled far too many things, and over the years had become so complicated that nobody felt safe making any changes to the code. &lt;/p&gt;

&lt;p&gt;As a team we cataloged every action the endpoint handled, including errors. Then I started to write automated tests for each and every one of those actions. We agreed that once the tests were written, we could begin the effort of refactoring the endpoint, knowing that if the tests all passed on the other end, we could release with confidence. &lt;/p&gt;

&lt;p&gt;The main takeaway from this scenario is that when tests failed, nobody tried to blame the tests. They became the standard, and anything that didn’t pass the standard was incorrect.&lt;/p&gt;

&lt;p&gt;Implementing automated tests that instill that level of confidence is a very difficult task, but it is supremely important to the ability of automated tests to actually speed up development. You can’t go fast if, every time a test breaks, everybody has to stop and figure out whether it’s the test that’s broken or the code. In the end that’s just manual testing with extra steps, and wasn’t the point of test automation to remove the need for manual testing? &lt;/p&gt;

&lt;p&gt;What are your thoughts? Share them below!&lt;/p&gt;

</description>
      <category>testing</category>
      <category>automation</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Don't make QA a crutch</title>
      <dc:creator>ajditto</dc:creator>
      <pubDate>Tue, 21 Feb 2023 17:12:01 +0000</pubDate>
      <link>https://dev.to/ditto/dont-make-qa-a-crutch-22ob</link>
      <guid>https://dev.to/ditto/dont-make-qa-a-crutch-22ob</guid>
      <description>&lt;p&gt;It seems like each company has a different approach to making sure their software is high quality. Some demand that developers do all of their own testing, completely eliminating their need for a quality assurance department. Some swing hard in the opposite direction, and require each and every code change to pass through a quality assurance department for approval before releasing anything. In environments like this companies run the risk of making their quality assurance department a crutch, slowing down development, when in reality qa should be accelerating the companies abilities.&lt;/p&gt;

&lt;p&gt;To start, crutches are not a bad thing. People use them all the time as an aid &lt;em&gt;to get better&lt;/em&gt;. What you don’t often see is people using crutches with two perfectly functional legs; indeed, doing so will actually make things worse in the long run, as once-healthy legs start to wither with disuse.&lt;/p&gt;

&lt;p&gt;So how does qa become an unneeded crutch? How do you know when to stop? Can the damage of over-reliance on qa be cured?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does qa become an unneeded crutch?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In my experience small startups decide it’s time to hire a qa engineer when the developer approach “move fast and break things” stops being viable. They have paying customers, people are relying on their software, and consistent breaks are beginning to adversely affect future growth potential. &lt;/p&gt;

&lt;p&gt;So a company makes its first quality assurance hire to figure out the fastest way to stabilize their development process. Right away it will probably be clear that there are some issues that need to be fixed, and the qa engineer will start to make changes. To make sure those changes are happening they’ll be double checking everything: Are commits being code reviewed? Are new features being audited by PM/UX before going live? Have all of the user stories been tested? And so forth.&lt;/p&gt;

&lt;p&gt;As time goes on, product stability will start to improve, because the process has been improved with careful observation and extra cautious testing.&lt;/p&gt;

&lt;p&gt;This is the tipping point.&lt;/p&gt;

&lt;p&gt;The most difficult part of the process is reducing that observation and extra testing. Just like a person healing from a broken leg doesn’t run a marathon the day their cast is removed, it’s unreasonable to expect a dev team to have instantly solved all of the problems they were having in the first place. Part of becoming a great qa engineer is knowing when it’s time to allow for a little failure, and when extra support is needed. It’s really easy to see bad patterns re-emerging right after a reduction in testing and feel the pull to put all of those safeguards back in place. Which leads us to:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do you know when to stop?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unfortunately knowing when a team has been healed of their bad habits isn’t as simple as an x-ray to see if it’s still broken. The art of removing safeguards is its own, slow process, and it requires allowing failures to happen, something that goes against the general nature of a qa professional. To illustrate this I’ll share an experience I had:&lt;/p&gt;

&lt;p&gt;While working with a dev team to reduce the frequency of critical bugs on production, we decided that each critical issue would get its own retro meeting. We sat down with the engineers involved in introducing the issue, the ones that solved the issue, myself, and the team leads. The first few meetings were uncomfortable. I was responsible for running them, and while I knew the goals of the meeting, I didn’t know the best way to accomplish them. &lt;/p&gt;

&lt;p&gt;Eventually we got better at the meetings, and we came out of every critical retro with clear goals to avoid the same issues happening again with tickets for the long term fixes created and prioritized.&lt;/p&gt;

&lt;p&gt;After doing this for a few months a critical issue arose, and the engineers involved sent a slack message that basically said: “We know what the issue is, and here are the tickets needed to get the fix done. [team lead] please prioritize them.” &lt;/p&gt;

&lt;p&gt;From that point on, I became much more relaxed about ensuring every critical issue had its own retro. Critical issues were happening with less frequency, fixes were being properly executed, and the meetings became more of a formality. Yes, there were still mistakes, but we knew the path toward fixing them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can the damage of over-reliance on qa be cured?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The short answer is yes. It is difficult, though, to break the habits that lead to an over-reliance on qa, and breaking those habits will almost always be met with resistance and fear. Questions of “Why do we need to change this?” “Our process is working, do we really want to risk making things worse?” and others will arise.&lt;/p&gt;

&lt;p&gt;The reality is that it’s easier to continue on with bad habits, but to go back to our broken leg analogy: continuing to use crutches after the leg is healed will lead to an atrophied appendage that slows everything down, and hinders a person from fulfilling their potential. Relying too much on qa to make sure that everything is as perfect as possible before delivery slows everything down, and in the end, slow just doesn’t cut it in today’s software world.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How can you measure quality?</title>
      <dc:creator>ajditto</dc:creator>
      <pubDate>Mon, 13 Feb 2023 20:12:03 +0000</pubDate>
      <link>https://dev.to/ditto/how-can-you-measure-quality-4hcg</link>
      <guid>https://dev.to/ditto/how-can-you-measure-quality-4hcg</guid>
      <description>&lt;p&gt;How do you measure quality? It’s a question that not just software companies seem to struggle with, but many quality professionals have a hard time answering well. It’s pretty simple to just ask how many bugs do we have in production? That’s a good metric for quality right?&lt;/p&gt;

&lt;p&gt;It. Is. Not.&lt;/p&gt;

&lt;p&gt;Metrics tend to be a tricky problem in general. Experience says that too much time focusing on a metric makes the metric the goal, when its true purpose should be an indicator of how progress toward a goal is going. On the flip side, if there isn’t a goal associated with a metric it quickly fades into obscurity and serves no purpose.&lt;/p&gt;

&lt;p&gt;The latter is the exact reason ‘bugs on production’ is a bad measure of quality. There is no realistic goal that can be tied to this metric. Should the number be zero? Is there a software company out there that has ever managed this? I doubt it. &lt;/p&gt;

&lt;p&gt;What about “fewer” production bugs? This is barely better, because there are so many ways around it. Tell the QA team to log fewer issues and boom; problem solved. Unless of course your QA team is being evaluated by how many issues they log… &lt;/p&gt;

&lt;p&gt;All of this is to say that the metric “bugs on production” doesn’t help because it doesn’t represent work towards a goal of higher quality. Every team I’ve ever worked with has willingly and knowingly shipped products with bugs at some point. The reasons for this are as varied as the teams I’ve worked with, and more often than not, I’m giving a thumbs up to releasing known bugs. &lt;/p&gt;

&lt;p&gt;If your only quality metric is bugs on production, then every release represents a failure of quality, plain and simple. It’s unrealistic to expect to deliver 100% bug-free code, so you will always get a failing quality grade.&lt;/p&gt;

&lt;p&gt;So how to do better?&lt;/p&gt;

&lt;p&gt;What kinds of metrics show that a team is producing quality work? The best answer I’ve come up with is to have &lt;em&gt;actionable&lt;/em&gt; metrics. Actionable here meaning that each metric has a realistic goal attached to it, and the team is willing to take action to work toward that goal.&lt;/p&gt;

&lt;p&gt;The power behind actionable metrics is immense. When team members have a goal and a way to measure that goal, buy-in becomes easy. So what are some actionable metrics?&lt;/p&gt;

&lt;p&gt;Some of the metrics that I have found to be highly effective are:&lt;/p&gt;

&lt;p&gt;Mean time to resolution of critical issues&lt;br&gt;
Customer reported bugs (measured weekly)&lt;br&gt;
Issues related to recently released features&lt;/p&gt;

&lt;p&gt;Each one of these metrics showcases a different aspect of the product, and of the team working on it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mean time to resolution of critical issues&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tracking MTTR gives a lot of insight into a team’s responsiveness and ability to handle problems that come up. Each critical issue that teams encounter will (hopefully) be unique; some will be simple fixes, and some will be immensely complex. Each one, however, represents the potential to have a major impact on customers' experience, and requires immediate attention. &lt;/p&gt;

&lt;p&gt;This metric serves to help the team learn the value of tracking down the underlying issue as quickly as possible and figuring out the fastest way to reverse the impact to users; sometimes that’s a fix, and sometimes that’s a rollback. Knowing a team’s ability to respond to major issues and outages serves to boost the confidence of the whole company in the engineers’ ability to solve customer problems.&lt;/p&gt;
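
&lt;p&gt;As a rough illustration, MTTR is just the average of open-to-resolve durations. A minimal sketch in Python, using made-up timestamps:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from datetime import datetime

# Hypothetical (opened, resolved) pairs for critical issues
critical_issues = [
    (datetime(2023, 1, 3, 9, 0), datetime(2023, 1, 3, 11, 30)),
    (datetime(2023, 1, 17, 14, 0), datetime(2023, 1, 18, 10, 0)),
    (datetime(2023, 2, 2, 8, 15), datetime(2023, 2, 2, 9, 45)),
]

def mean_time_to_resolution(issues):
    # Average open-to-resolve duration, in hours
    total_seconds = sum((resolved - opened).total_seconds() for opened, resolved in issues)
    return total_seconds / len(issues) / 3600

print(f"MTTR: {mean_time_to_resolution(critical_issues):.1f} hours")
&lt;/code&gt;&lt;/pre&gt;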

&lt;p&gt;&lt;strong&gt;Weekly customer reported bugs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It seems pretty clear that this is a metric that should be kept track of. The question to ask is what action is tied to this metric? This is the metric that should drive improvement initiatives. Research can be done all day about what features customers want, but reported bugs show what customers &lt;em&gt;use&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;When joined with other metrics, knowing how many bugs are being reported by customers can paint a picture of how the product’s quality is being perceived:&lt;/p&gt;

&lt;p&gt;If customer bug reports jumped one week, was there a correlated spike in weekly users? Then there’s probably no reason to be alarmed. Did bug reports spike following a major feature release? It looks like our customers are trying to use the new feature, which is great. (Not the bugs, but we’ll talk about that next.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issues related to recently released features&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It’s fairly common to have a handful of issues crop up after a feature release. Earlier I mentioned that sometimes bugs are released on purpose. More often than not customers are using the software in a way nobody imagined it would be used, and yes, I’m admitting that as a quality professional. Why, then, should this be tracked, and what action can be taken?&lt;/p&gt;

&lt;p&gt;Tracking post-release issues helps development teams discover the holes in their process or assumptions. By tracking this, teams can discover that they’re spending as much time fixing a feature after release as they did developing it. It empowers teams to better estimate the time needed to complete features, as well as challenge their assumptions and, in the end, release a better product. That is, after all, what quality assurance is all about.&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>opensource</category>
      <category>devops</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Issue backlogs and you</title>
      <dc:creator>ajditto</dc:creator>
      <pubDate>Mon, 06 Feb 2023 20:07:46 +0000</pubDate>
      <link>https://dev.to/ditto/issue-backlogs-and-you-3a0k</link>
      <guid>https://dev.to/ditto/issue-backlogs-and-you-3a0k</guid>
      <description>&lt;p&gt;Every company deals with a backlog of issues. Some backlogs are small, some are embarrassingly large, all of them represent work that still needs to be done. One of the questions that nobody wants to answer though, is who owns that backlog?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Proposal:&lt;/em&gt; QA should own the issue backlog. To be clear, this is referring specifically to the issue backlog, not the product backlog or project roadmap. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Why:&lt;/em&gt; Generally the entire backlog seems to belong to the product manager, who creates the team roadmap based on company needs and objectives. Product managers, however, have a tendency to focus on product and new features. This isn’t inherently a problem (it’s their job), but that focus tends to leave issue management in the dust.&lt;/p&gt;

&lt;p&gt;What this means is that the PM might make an attempt to add bugs to every work cycle, but the need to figure out new feature work, designs, and everything else they do (I’m not a PM, can you tell?) doesn’t leave time for giving issues the consideration needed to properly prioritize them. This usually leads to more incoming issues than outgoing issues, which eventually leads to an issue backlog numbering in the 100s or even (shudder) the 1000s. At that point nobody can figure out how to organize it, and who can blame them?&lt;/p&gt;

&lt;p&gt;So what does good bug backlog management look like?&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Priority&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;A good issue backlog is prioritized. Even if nothing else happens with the endless list of issues that might someday get solved, every existing issue should have a priority. At a minimum, somebody can reach into the black hole and pull out something that at one point was considered a high priority issue to fix. &lt;/p&gt;

&lt;p&gt;That is just step one though. While it’s important to have a well prioritized backlog, leaving it at that means that the only things that ever really get fixed are the high priority issues. Which leads to step two:&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Timelines&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;A full discussion of issue fix timelines merits its own post, but I’ll touch on their importance here. Generally speaking, high priority bugs are the most important things to fix (excluding critical issues, which are obviously higher, but should be far less common). What this most commonly means is that anything not marked as high priority might as well be marked as ‘will not do.’ Standardizing timelines for issues helps to alleviate this problem: teams should agree that medium priority issues will be fixed within x number of months, and low priority issues within y number of months/quarters. &lt;/p&gt;

&lt;p&gt;The job of the quality assurance engineer is to bring to the team’s attention any issues that have exceeded their agreed resolution timeline; the team then decides on the best course of action by:&lt;/p&gt;

&lt;p&gt;Fixing the issue&lt;br&gt;
Changing the priority&lt;br&gt;
Marking the issue as won’t do&lt;/p&gt;

&lt;p&gt;This is a great way to clear out an old backlog, because if a low priority issue has been hanging out for six months, the likelihood of it ever getting resolved is pretty much zero.&lt;/p&gt;
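
&lt;p&gt;A sketch of how that timeline check might be automated. The field names and windows here are hypothetical examples, not tied to any particular issue tracker:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from datetime import date, timedelta

# Agreed resolution windows per priority; the numbers are examples
TIMELINES = {
    "high": timedelta(weeks=2),
    "medium": timedelta(days=90),
    "low": timedelta(days=180),
}

def overdue(issues, today):
    # Surface any issue that has outlived its agreed timeline
    return [
        issue for issue in issues
        if today - issue["opened"] &gt; TIMELINES[issue["priority"]]
    ]

backlog = [
    {"key": "BUG-101", "priority": "low", "opened": date(2022, 6, 1)},
    {"key": "BUG-202", "priority": "high", "opened": date(2023, 1, 30)},
]
for issue in overdue(backlog, today=date(2023, 2, 6)):
    print(issue["key"], "has exceeded its", issue["priority"], "priority timeline")
&lt;/code&gt;&lt;/pre&gt;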

&lt;h4&gt;
  
  
  &lt;strong&gt;Total Issues&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The reason issue backlogs spiral out of control is that at a certain point the number of open issues is so large, nobody is willing to look at it. Somebody looking at a backlog of 50 issues sees something manageable that can be tackled with some concentrated effort. 100 issues is a lot, but it can be kept under control. 500 issues is a lot, and who knows where the important issues are. 1000 issues is just noise, and for all the attention it gets can be treated as empty. If priority and timelines are being handled properly, a backlog should rarely reach more than 250 outstanding issues.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;So how do you get there?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;If you’re reading this while staring down an issue backlog numbering in the thousands, this probably all just seems like an impossible dream. Here is how I handle it.&lt;/p&gt;

&lt;h5&gt;
  
  
  Step 1
&lt;/h5&gt;

&lt;p&gt;Turn off anything that is automatically creating issues. If issues are being created automatically, and the existing backlog is numbering in the high 100s, all that’s really happening is pointless alerts being shouted into an unlistening void.&lt;/p&gt;

&lt;h5&gt;
  
  
  Step 2
&lt;/h5&gt;

&lt;p&gt;Delete every unassigned issue that hasn’t been touched in the last year. That’s right, delete, not close. If it was important enough to keep around, there’s probably a duplicate that’s more recent. &lt;/p&gt;
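
&lt;p&gt;That cutoff is a simple filter. A sketch, again with hypothetical issue fields:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from datetime import timedelta

def stale_unassigned(issues, today):
    # Deletion candidates: no assignee, untouched for a year
    cutoff = today - timedelta(days=365)
    return [
        issue for issue in issues
        if issue["assignee"] is None and issue["last_touched"] &lt; cutoff
    ]
&lt;/code&gt;&lt;/pre&gt;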

&lt;h5&gt;
  
  
  Step 3
&lt;/h5&gt;

&lt;p&gt;Ping people about every issue assigned to them. Most of the time the answer will be some variation on: “Oh, that’s not an issue anymore, it’s been fixed.” Then get them to close the issue (don’t do it for them). It sounds petty, and annoying, but managing an issue backlog also means working to make sure other people aren’t using it as their garbage bin. The backlog is in a bad state because of bad habits, and breaking those habits is usually painful, but they need to be broken.&lt;/p&gt;

&lt;h5&gt;
  
  
  Step 4
&lt;/h5&gt;

&lt;p&gt;Metrics. Keep track of how many issues are coming into the backlog every week, and how many are going out. Encourage your team to start trending down, even closing one more issue a week than you have incoming is a win at this point.&lt;/p&gt;
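
&lt;p&gt;A minimal sketch of that weekly in/out tracking, with invented counts:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical weekly counts of issues opened vs. closed
weeks = [
    {"week": "2023-W01", "opened": 14, "closed": 9},
    {"week": "2023-W02", "opened": 11, "closed": 12},
    {"week": "2023-W03", "opened": 10, "closed": 13},
]

for week in weeks:
    net = week["opened"] - week["closed"]
    trend = "growing" if net &gt; 0 else "shrinking"
    print(week["week"], "net", net, trend)
&lt;/code&gt;&lt;/pre&gt;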

&lt;h5&gt;
  
  
  Step 5
&lt;/h5&gt;

&lt;p&gt;Be consistent. This is the hardest part, but it’s also the most important. Properly cleaning out an issue backlog can take months of hard work, but in the end it will help your team have a clear vision of their issues, and a plan for improving their quality.&lt;/p&gt;

&lt;p&gt;Let me know your thoughts.&lt;/p&gt;

</description>
      <category>tooling</category>
      <category>softwaredevelopment</category>
      <category>development</category>
    </item>
    <item>
      <title>Test cases are hard</title>
      <dc:creator>ajditto</dc:creator>
      <pubDate>Mon, 30 Jan 2023 18:19:50 +0000</pubDate>
      <link>https://dev.to/ditto/test-cases-are-hard-397l</link>
      <guid>https://dev.to/ditto/test-cases-are-hard-397l</guid>
      <description>&lt;p&gt;Every person that has worked as a professional quality assurance engineer has, at one point, had a conversation about test cases. How to organize them, what they should entail, who should write them, etc.&lt;/p&gt;

&lt;p&gt;Test cases seem to be one of those things that every company knows they should be doing, but in the end, they have no idea why. When asked, the de facto answer is something along the lines of:&lt;/p&gt;

&lt;p&gt;“We need to document test cases so that our future automation efforts know what steps to take.” &lt;/p&gt;

&lt;p&gt;While this sounds like a good response, it really isn’t. It’s a response that, in reality, says two (bad) things. The first is: “We’re not doing automation now, and have no idea if we will, or what we want from it, so these test cases will probably get relegated to documentation hell.” The second is: “We heard a qa professional on youtube talk about test case management, so we need to have test cases.”&lt;/p&gt;

&lt;p&gt;This might sound harsh, but further examination should make the point clear. To start, what are test cases?&lt;/p&gt;

&lt;p&gt;A test case is a series of steps that a user can follow to complete a desired outcome. In other words, it’s a set of step-by-step instructions for a user to follow, where each step has an expected outcome. If the user encounters an outcome that doesn’t match the prescribed results, the test case fails, and a bug report is written up. It sounds simple. Any quality assurance engineer that has spent time writing test cases will tell you it really isn’t that straightforward.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Preconditions:&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Preconditions are the bane of any test case writer. When writing test cases, there has to be an agreed upon starting point:&lt;/p&gt;

&lt;p&gt;Does the user already have an account?&lt;br&gt;
Has the user already logged in?&lt;br&gt;
What type of user is this test for?&lt;/p&gt;

&lt;p&gt;It might look like a simple problem, but this choice will dictate how every test case is written, and any exceptions to the agreed upon starting point will need to be explicitly written out. It’s also just the tip of the precondition iceberg. The real issues happen when talking about dependent test cases.&lt;/p&gt;

&lt;p&gt;In other words: Test X has to pass to even try to validate tests Y and Z. So now whatever software is being used to track the test cases must be able to handle preconditions as an option on the tests. Again, it sounds simple, but what about tests A, B and C that depend on Y? Or test D which depends on X and B? What about when the first 5 steps of C are the same as all of test A? Should A be a precondition of C? Or should C just have 5 more steps?&lt;/p&gt;

&lt;p&gt;Did you follow all of that? Most test case management software can’t either, so don’t feel too bad. &lt;/p&gt;
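
&lt;p&gt;One way to keep those dependencies straight is to treat them as a graph and order test runs so prerequisites always execute first. A sketch using the letters above and Python’s standard-library graphlib (available from Python 3.9):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from graphlib import TopologicalSorter

# Maps each test case to the test cases it depends on
dependencies = {
    "Y": {"X"},
    "Z": {"X"},
    "A": {"Y"},
    "B": {"Y"},
    "C": {"Y"},
    "D": {"X", "B"},
}

# static_order() raises CycleError if the dependencies ever loop
run_order = list(TopologicalSorter(dependencies).static_order())
print(run_order)  # X always runs before Y and Z, Y before A, B, C, and so on
&lt;/code&gt;&lt;/pre&gt;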

&lt;h4&gt;
  
  
  &lt;strong&gt;Step breakdown:&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;If all of the issues of preconditions are solved, teams then have to decide how to break out each step. Is a test a single step? Is each step its own action with its own result? How many steps in a test case is too many?&lt;/p&gt;

&lt;p&gt;Each of these is a pretty small problem on its own, but like a toddler building a block tower, every new block added threatens to bring the whole tower down.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Test Case Language:&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;This might seem completely unrelated, but it’s not, so bear with me. &lt;/p&gt;

&lt;p&gt;Test cases are a form of documentation, and like any documentation, they’re not written for the person who wrote them. The overarching idea is that somebody, somewhere down the line, is going to need to figure out exactly what the software they’re working on should do, and the test cases (ideally) will be the go-to documentation for getting that answer.&lt;/p&gt;

&lt;p&gt;In order to maximize the efficiency of that discovery, each test case should use language that describes each component the same way throughout every test case. &lt;/p&gt;

&lt;p&gt;If a test case uses the word ‘table’ and the word ‘list’, every other test case should refer to those components by the same name. Switching them, or introducing a new word will only lead to confusion and frustration (always assume the documentation writer won’t be around to explain things).&lt;/p&gt;

&lt;p&gt;This could just be me being a stickler, but I’ve read enough documentation that throws out synonyms willy-nilly to experience an unhealthy amount of frustration when trying to decipher these kinds of ‘helpful’ instructions.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Types of tests:&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;This is probably the worst part of test cases: the question of which test cases you should write.&lt;/p&gt;

&lt;p&gt;It’s easy to just focus on happy path testing; you want to know how the application works, after all, not how it doesn’t work. Any good qa engineer, however, will tell you that how the application handles errors is just as important as how it handles success. To illustrate, here are some test cases for a single, simple number field:&lt;/p&gt;

&lt;p&gt;Can I enter a number into the field? (Happy path)&lt;br&gt;
What’s the smallest number the number field allows? (.000001 is a number) &lt;br&gt;
What’s the largest number the field allows?&lt;br&gt;
Is there an error for numbers that are too large?&lt;br&gt;
Is there an error for numbers that are too small?&lt;br&gt;
Can letters be added to the input?&lt;br&gt;
Do letters in the input show an error?&lt;/p&gt;

&lt;p&gt;That’s a quick seven test cases for a single number input. Now think about the number of inputs your application has. How many test cases do you need to properly document how it works?&lt;/p&gt;
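
&lt;p&gt;Those cases translate naturally into a parametrized test. A sketch against a hypothetical validate_number function, invented here for illustration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import pytest

def validate_number(raw, minimum=0.000001, maximum=1000000):
    # Hypothetical stand-in for the field's validation logic
    try:
        value = float(raw)
    except ValueError:
        return "error: not a number"
    if value &lt; minimum:
        return "error: too small"
    if value &gt; maximum:
        return "error: too large"
    return "ok"

@pytest.mark.parametrize("raw,expected", [
    ("42", "ok"),                       # happy path
    ("0.000001", "ok"),                 # smallest allowed number
    ("1000000", "ok"),                  # largest allowed number
    ("10000000", "error: too large"),   # error for numbers too large
    ("0.0000001", "error: too small"),  # error for numbers too small
    ("abc", "error: not a number"),     # letters in the input
])
def test_number_field(raw, expected):
    assert validate_number(raw) == expected
&lt;/code&gt;&lt;/pre&gt;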

&lt;p&gt;Is your head spinning yet?&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;The point&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Again, each of these seems small on its own, but like the block tower mentioned above, they all build on each other and make it that much harder to balance. Sustaining this pattern for a few dozen use cases isn’t too bad, but your application probably has more than a dozen workflows. When there are ten qa engineers all writing test cases, whose job is it to make sure every one of them has stacked the blocks correctly? &lt;/p&gt;

&lt;p&gt;At the end of the day, the real question to ask from all of this is: what’s the ROI? Test cases and test case management are an expensive endeavor, and any business should be looking at the value a full set of test cases adds. So if the answer is: “someday we’ll turn these into automated tests,” it’s probably better to forgo the headache until “someday” turns into an actual day. &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;It might sound like I have no faith in test cases and think of them as valueless, and you wouldn’t be blamed for thinking that. I have, however, seen companies that manage to successfully balance all of these problems and make the difficulties worth the effort. It is very tough, but when it works it’s enormously helpful. The issue I see is that the companies that make it work well don’t seem to be in the majority, because most fail to ask themselves the right questions.&lt;/p&gt;

</description>
      <category>cryptocurrency</category>
      <category>crypto</category>
      <category>blockchain</category>
      <category>web3</category>
    </item>
    <item>
      <title>Quality assurance is not testing</title>
      <dc:creator>ajditto</dc:creator>
      <pubDate>Mon, 23 Jan 2023 19:46:19 +0000</pubDate>
      <link>https://dev.to/ditto/quality-assurance-is-not-testing-1a75</link>
      <guid>https://dev.to/ditto/quality-assurance-is-not-testing-1a75</guid>
      <description>&lt;p&gt;When it comes to quality assurance, it’s always surprising how often the terms ‘tester’, and ‘qa engineer’ are used interchangeably. It’s so common, in fact, it's common to hear engineers and technical leaders say things along the lines of: “We don’t need a qa department here, our developers can test their own code.”&lt;/p&gt;

&lt;p&gt;While this is belittling to somebody that’s decided to make a career in quality assurance, it’s also, sadly, something that the field of software quality assurance has brought upon itself.&lt;/p&gt;

&lt;p&gt;To start, it’s good to understand why quality assurance and testing should be considered separately. Saying that testing and quality assurance are interchangeable is the development equivalent of saying that software development and code reviews are the same thing. &lt;/p&gt;

&lt;p&gt;Imagine hiring a senior developer and expecting them to spend all of their time doing code reviews. It’s absurd. It’s a waste of not only company resources, but also the talents of this brilliant developer. On the flip side, however, this very experienced software engineer won’t be exempt from doing code reviews; their expertise and knowledge are invaluable when reviewing code for less experienced team members.&lt;/p&gt;

&lt;p&gt;In other words, code review is part of a developer’s job; it is not their job. To say otherwise is a waste and, frankly, an insult.&lt;/p&gt;

&lt;p&gt;Testing’s relationship to quality assurance is no different: testing is part of a qa engineer’s job, not the whole of it. Unfortunately, many either don’t realize this or have forgotten it, and most companies and individuals are complicit in crippling themselves and the careers of talented quality assurance engineers. &lt;/p&gt;

&lt;p&gt;This, however, raises the question: what is left for quality assurance if it’s more than just testing? The answer isn’t simple, but the name of the role gives a good place to start. Quality assurance works to make sure that the final product being shipped to the customer is acceptably high quality and relatively free of major bugs, but that’s just the obvious part. Just as important, and much more rarely considered, is the responsibility to consider the process required to ship products. &lt;/p&gt;

&lt;p&gt;To use a metaphor: Imagine a factory that produces 100 gizmos every day, and a tester finds that on average thirty of those gizmos are defective each day. In other words, the factory is operating at only 70% efficiency, even when running 100% of the time. That is a lot of wasted effort. Most factories (like software companies) can’t afford to slow down to root out the source of the problem. That’s where a good qa engineer can make all the difference. &lt;/p&gt;

&lt;p&gt;In our imaginary factory, our quality assurance engineer will go backwards up the line and figure out where the 30% drop in efficiency is coming from. Is it a faulty part being used? A poorly performing machine? (The metaphor begins to break down, but the point is still the same.) &lt;/p&gt;

&lt;p&gt;Often, shoring up the source(s) of the problem can look like it’s slowing down production. After implementing fixes, perhaps the factory is only able to produce 80 gizmos every day, but of those, the average number of defective gizmos has also dropped to 2. It may look like there has been a slowdown, but the factory has increased its output of working gizmos from 70 to 78 per day.&lt;/p&gt;

&lt;p&gt;As mentioned above, in software the answers aren’t always cut and dried, but patterns always seem to assert themselves. For example:&lt;/p&gt;

&lt;p&gt;What are the team’s metrics around quality? It can be shocking how often the answer to this question is a blank stare. The truth is most teams and companies say they want higher quality, but don’t actually understand what that means. A good qa engineer should be able to guide a team to important quality metrics, know how to track those metrics, and develop a strategy for improving them.&lt;/p&gt;

&lt;p&gt;How does the team deal with major bugs (ie: critical, or priority 0 issues)? Many companies are competent at resolving major issues; otherwise they probably wouldn’t stay in business for very long. Many, however, fail to properly manage what comes after. Major issues are major problems, and sadly some companies stop at requiring a write-up of what went wrong and what should change in the future. While that’s not bad, it’s also a document that likely gets read by three people (at most) then promptly forgotten. If it’s a major issue, teams should make a big deal about it. A good quality assurance engineer will get the right people together to discuss what failed, and put together an action plan to fix the problem. More importantly, they will hold the team accountable for following through on that action plan.&lt;/p&gt;

&lt;p&gt;All of this is just scratching the surface. There are many, many things that fall under the purview of quality assurance, the important takeaway though, is that testing != quality assurance.&lt;/p&gt;

</description>
      <category>watercooler</category>
    </item>
  </channel>
</rss>
