I broke my code today. And I'm glad I did.
You might be wondering why I would be glad I broke my code. Wouldn't I want it to work? Am I crazy?
I'm happy that I broke my code before anyone else broke it. I found the issue before it caused problems for users and before I spent QA's (quality assurance) and a peer reviewer's time. We should all be aiming to do this. It's easy to get code that works 90% of the time. But the other 10%, all those annoying edge cases, can be hard to even think of, let alone make sure your code handles them correctly. So how do you think of these edge cases we need to handle?
Think about your code like QA
Now I don't mean you need to do everything QA does or replace your QA person. You don't even need to write the test cases and test plans if QA normally handles that, but you need to think about breaking your code the same way they think about it.
So how does QA think about breaking your code?
Understand what your code should do
The first thing you can do to make sure your code works correctly is to understand what it should do. This might seem like common sense, but it is common for bugs to be caused by misunderstandings between project managers, stakeholders, and developers.
If you have any doubts, questions, or concerns, they need to be discussed before the code goes live. Don't worry about this coming off as pushing back or asking dumb questions. If something isn't clear to you, it probably isn't clear to someone else. I've even seen stakeholders realize they don't understand their own requirements when I asked for clarification. If you think you have a better idea, bring it up. Your stakeholders and users will thank you if it improves things. If it doesn't improve things, they will still appreciate you trying to improve the product.
Test for the success cases
Now that you know what the code should do, you should test that it actually does what it should. At this time, give the program the exact input it expects and don't do anything that may risk breaking it. We will get into trying to break it later. For now, we want to make sure we have a good starting point to test for failures and edge cases.
If your project has automated tests, this would be the time to write the automated tests for all the success cases. Automated tests will allow us to ensure the success path still works without manual testing as we address failures and edge cases. I highly recommend writing automated tests. Yes, automated testing can be a pain to learn and get right, but it will save you in the future. I can't count how many times I broke code with changes that shouldn't have affected anything because of not having tests. If you aren't ready to learn automated testing or can't add it to your project, no big deal, but you will need to revisit this step later to test it again manually.
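As a minimal sketch of what a success-case test might look like, here's a hypothetical `calculate_discount` function (an assumption for illustration, not from the article) tested only with the exact input it expects:

```python
def calculate_discount(price, percent):
    """Return the price after applying a percentage discount (hypothetical example)."""
    return round(price * (1 - percent / 100), 2)

def test_discount_success_cases():
    # Exact, expected inputs only -- no edge cases or failures yet.
    assert calculate_discount(100.0, 10) == 90.0
    assert calculate_discount(50.0, 25) == 37.5
    assert calculate_discount(80.0, 0) == 80.0

test_discount_success_cases()
```

With a test runner like pytest, functions named `test_*` are collected automatically, so these success cases re-run on every change without manual effort.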
Test for known failures
You know what the code should do. You know the code does it successfully. Now it's time to test for failures you know are going to happen. Maybe these failures are spelled out in the requirements. For example, it might say, "When clicking on a link, it goes to the product page. If the product is not found, redirect to a product listing page." This is easy to test for as your stakeholders tell you what to do when it fails.
Other known failures would include things you realize may happen while working on the success case. Going back to the product page example, if the stakeholders didn't mention how to handle a not found product, you might think about this issue yourself. You may decide it should return a 404 error or redirect to a product listing on your own. Better yet, ask the stakeholders which one it should do. Or you may be working on a search and realize you need to throw a validation error if no search term is provided.
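Continuing the product-page example, a known-failure test could look like this sketch. The catalog and `get_product_route` helper are hypothetical stand-ins, and the redirect-to-listing behavior assumes the stakeholder requirement quoted above:

```python
# Hypothetical in-memory catalog standing in for a real product lookup.
PRODUCTS = {"sku-1": "/products/sku-1"}

def get_product_route(sku):
    # Known failure path: a missing product redirects to the listing page,
    # per the (hypothetical) stakeholder requirement.
    return PRODUCTS.get(sku, "/products")

def test_missing_product_redirects_to_listing():
    assert get_product_route("sku-1") == "/products/sku-1"   # success path
    assert get_product_route("no-such-sku") == "/products"   # known failure

test_missing_product_redirects_to_listing()
```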
At this stage, think of anything you can that may go wrong and test your code to see what happens in those situations.
Make foolish mistakes; do the wrong thing on purpose
The amazing thing about users is they will always do what you don't expect. They will break things in a way you can't think of. They will also make things work in unexpected ways and cause outcomes you didn't believe were possible. This is the part I find fun about testing code. It's an excuse to go crazy and make dumb mistakes on purpose to see how your code reacts. This is precisely how a great QA person will test your code.
Even if we are going crazy with making mistakes, we still need to follow a process. If you just test for random things and don't pay attention to what you are doing, you won't know how to recreate the issue to test your fix. Instead, you want to make sure you are paying attention to exactly what you click on, what you type in, and what happens as a result. You also want to make sure that you remember what you have tested and haven't tested even though your testing may be random. Make a list of the tests you have run and put a mark next to those that require a fix.
So what are some good foolish mistakes to make?
Push validation to the limit
What happens if you don't include inputs? What happens if you use an excessively long string or a large number? What about a negative number or passing a string for a number?
To test validation, you want to do everything wrong with all of your inputs you can think of. You know what each input should do; now do the opposite and other silly things to see if your code can handle it.
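Here's a sketch of what that looks like in practice, using a hypothetical `validate_quantity` check of my own design. The point is the loop at the bottom: feed it every wrong input you can think of and confirm each one is rejected:

```python
def validate_quantity(raw):
    """Accept a whole-number quantity between 1 and 1000, else raise ValueError."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        raise ValueError("quantity must be a whole number")
    if not 1 <= value <= 1000:
        raise ValueError("quantity must be between 1 and 1000")
    return value

# Do everything wrong on purpose: missing input, empty string, text,
# an absurdly long number, negatives, zero, and a value past the limit.
for bad in [None, "", "abc", "9" * 500, -5, 0, 10_000]:
    try:
        validate_quantity(bad)
        assert False, f"{bad!r} should have been rejected"
    except ValueError:
        pass  # rejected as expected

assert validate_quantity("5") == 5  # the sane input still works
```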
Do the opposite of what you expect users to do
Literally do the opposite of what you expect users to do.
- Do things out of order.
- Click out of pages in the middle of a process.
- Hit the cancel button in a modal, reopen the modal and try to submit again.
- Anything else you can think of that doesn't make sense for the feature.
This type of testing could be anything and will vary depending on what feature you are working on. The point is to make sure you aren't doing what you tested for during the success cases. A great way to do this may be to act like you don't know how to use a computer or website.
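The same idea applies to out-of-order steps in code, not just clicks. As a sketch, here's a hypothetical checkout flow where a confused user tries to pay before entering a shipping address:

```python
class Checkout:
    """Hypothetical multi-step flow: address first, then payment."""

    def __init__(self):
        self.address = None

    def set_address(self, address):
        self.address = address

    def pay(self):
        if self.address is None:
            raise RuntimeError("cannot pay before setting a shipping address")
        return "paid"

# Act like a user who skips straight to the last step.
flow = Checkout()
try:
    flow.pay()
    assert False, "out-of-order payment should have failed"
except RuntimeError:
    pass  # the flow refused, as it should

flow.set_address("123 Main St")
assert flow.pay() == "paid"  # the correct order still works
```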
Hack your code
It is your turn to be the hacker. Can you find any security issues in your code? Have you tried a SQL injection, XSS (Cross-Site Scripting), or CSRF (Cross-Site Request Forgery) attack?
Security is a big topic that has its own articles, books, and courses. It's also going to vary based on platform, language, devices, etc., so I won't go into details on specific security threats. Make sure you become familiar with common threats for what you are working on so you can test for them.
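As one small illustration of this kind of test, here's a sketch of checking for SQL injection against an in-memory SQLite database with a classic `' OR '1'='1` payload. The parameterized query shown as the safe version is the standard defense, not anything specific to this article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

payload = "' OR '1'='1"

# Vulnerable: string concatenation lets the payload rewrite the query.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'"
).fetchall()

# Safe: the driver treats the payload as plain data, not SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()

print(vulnerable)  # leaks every row: [('hunter2',)]
print(safe)        # []
```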
Make it so slow you get bored
You don't want to make your code inefficient on purpose, but you should test features with a ton of data. Have the option to change the number of records on a page? What happens if you make that number 10,000? Is it still fast? Can you upload a file with 10,000+ records without getting bored waiting for it to complete?
It's easy to fill in 5 dummy records and start testing a feature. We often forget that real users may try to work with enormous amounts of data at once. It's usually only after our first power user hits 10,000+ records, and we have an urgent bug to fix because the page won't load, that we realize performance is a problem. Find the limit of how much data your code can handle, then try to optimize it. Once you can't optimize it anymore, make sure your users can't go past that limit, or provide a way to offload the process to a background job so the user can be alerted when it's done instead of making them wait.
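A quick sketch of this kind of load test: generate the big dataset in code instead of clicking it in by hand, then time the feature. The `import_records` function here is a hypothetical stand-in for whatever you're testing, and the 10,000 figure just mirrors the example above:

```python
import time

def import_records(records):
    """Stand-in for the real import logic (hypothetical)."""
    return [r.strip().lower() for r in records]

# Generate far more data than you'd ever enter by hand.
records = [f"  Record {i}  " for i in range(10_000)]

start = time.perf_counter()
result = import_records(records)
elapsed = time.perf_counter() - start

assert len(result) == 10_000
print(f"imported {len(result)} records in {elapsed:.3f}s")
```

If the elapsed time is long enough to bore you, your users will feel the same, and that's your cue to optimize or move the work to a background job.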
You broke your code; now it's time to fix it
Each of the issues you find is probably going to require different fixes. You might need to add validation checks and messages, add additional alerts for the user, prevent actions from happening, or optimize queries. The important thing is to refer back to the steps you took and what went wrong when performing them. Ideally, you would write automated tests for each issue you fixed, so you can be sure it won't happen again. If you don't have automated tests, then it just means you have more manual testing to do for all these issues every time you touch this code.
As a final note, it is essential to realize you won't find every issue every time. There is a reason QA is a vast profession and necessary in our industry. The goal of a developer doing this type of testing is to reduce the back-and-forth between developers and QA and save time, not to find and fix every issue.
Have any other tips or suggestions on how to test your code for edge cases? Please share in the comments or on Twitter.
Now have fun and break your code so no one else ever has to know it was broken.