
Kyle Homen

OSD600 Lab 1

How did you go about doing your code reviews? Do you prefer an async or sync approach? Why?
I went with the asynchronous approach to my code reviews. I preferred this because I could provide all of the necessary context in the issue description, and could still answer questions afterwards if more information was needed. It also allowed my lab partner and me to work at our own pace.

What was it like testing and reviewing someone else's code? Did you run into any problems? Did anything surprise you?
It felt pretty natural to test and review someone else's code. I had some experience doing this at my co-op, where I was often fixing or building upon other co-op students' work. My lab partner and I are both working in Python, so I was able to give feedback based on my own work.

What was it like having someone test and review your code? Were you surprised by anything?
It was nice to get feedback about what I was missing. Some of the issues I was already aware of and just hadn't had time to implement yet, but one bug that was found showed that my code was repeating part of the structure tree. I had not noticed this, so it was great feedback and might have been something I missed on my own.
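As a sketch of how that kind of duplication can happen (hypothetical code, not my actual implementation): `os.walk` yields the root directory as its first result, so printing the root up front and then printing every directory the walk yields shows the first directory twice. Printing only what the walk yields avoids it:

```python
import os

def print_tree(root):
    """Return a directory tree as text, naming each directory once.

    Hypothetical sketch: os.walk's first yield is the root itself,
    so we rely on the loop alone rather than also printing the root
    separately beforehand (which would duplicate it).
    """
    root = os.path.abspath(root)
    lines = []
    for dirpath, dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        depth = 0 if rel == "." else rel.count(os.sep) + 1
        lines.append("    " * depth + os.path.basename(dirpath) + "/")
        for name in sorted(filenames):
            lines.append("    " * (depth + 1) + name)
        dirnames.sort()  # walk subdirectories in a stable order
    return "\n".join(lines)
```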

What kind of issues came up in your testing and review? Discuss a few of them in detail.
My lab partner had not implemented some of their functions yet, so I focused on their argument parser in my testing. I found a few pieces of missing functionality, such as a -v flag for displaying the tool's name and version info. I also tested whether the parser could take multiple arguments, such as multiple files, and found that it could not yet, so I filed that issue as well.

Provide links to issues you filed, and summarize what you found
Add --version flag to show version info
Here I found that the --version flag was not yet implemented.
Add a default filename to the -o flag
Here I found that the -o flag could not be used without specifying a file name, as it was set up to require an argument.
Cannot specify more than one file argument
Here I found that the paths argument was also set up to take only one value, so the tool could not check multiple files.
Missing implementation for "write_summary" function
This issue was filed because there was no summary total of files and lines being output.
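The three parser issues above can all be addressed with standard argparse features. This is a minimal sketch, assuming a hypothetical tool name and version (the real values belong to my lab partner's project):

```python
import argparse

# Hypothetical name and version for illustration.
TOOL_NAME = "mytool"
TOOL_VERSION = "0.1.0"

def build_parser():
    parser = argparse.ArgumentParser(prog=TOOL_NAME)
    # -v / --version prints the tool name and version, then exits.
    parser.add_argument(
        "-v", "--version", action="version",
        version=f"{TOOL_NAME} {TOOL_VERSION}",
    )
    # nargs="?" makes the filename optional; const supplies a default
    # when -o is given without a value.
    parser.add_argument("-o", "--output", nargs="?", const="output.txt")
    # nargs="+" accepts one or more paths, and argparse itself emits
    # a usage error if none are supplied.
    parser.add_argument("paths", nargs="+")
    return parser

args = build_parser().parse_args(["-o", "results.txt", "a.py", "b.py"])
# args.output is "results.txt"; args.paths is ["a.py", "b.py"]
```

With this setup, running the tool with `-o` and no filename falls back to the `const` default, and passing several files populates `paths` as a list.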

Provide links to issues that were filed on your repo, and what they were about
Information in README.md
This issue is about the empty README file. It currently contains no information or usage instructions.
Missing error message for the case of no argument input
This issue notes that there is no error handling for the case where no arguments are given.
Repeated directory when printing structure tree
This issue found that the first directory was being printed twice.
Missing summary statistics
This issue was filed because there was no summary total of files and lines being output.
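For the missing summary, something along these lines would cover the total of files and lines. This is only a sketch: the function name comes from the issue title "write_summary", and the real signature and output format may differ:

```python
from pathlib import Path

def write_summary(paths):
    """Print a summary total of files and lines checked.

    Hypothetical sketch based on the "write_summary" issue; returns
    the totals as well so callers can reuse them.
    """
    total_files = 0
    total_lines = 0
    for path in paths:
        text = Path(path).read_text(encoding="utf-8")
        total_files += 1
        total_lines += len(text.splitlines())
    print(f"Checked {total_files} file(s), {total_lines} line(s) total.")
    return total_files, total_lines
```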

Were you able to fix all your issues? What was that like?
I've begun work on the README file (overdue, though I wasn't expecting to need it ready yet), as well as on the summary output and an error message for missing arguments. Fixing issues gamifies the process and actually motivates me to get right to work, like earning an achievement in a game. It's very therapeutic to close out an issue for good.

What did you learn through the process of doing the testing and reviewing?
I learned how useful collaborating with others can be; we can't possibly see every bug or edge case on our own. I also researched what makes a good issue, to ensure I was giving my lab partner the most relevant information possible: the environment the code was tested and run on, and which version of the code was used, by providing the git hash (this was relevant, as I think some changes were pushed while I was testing an older version).
