I find that there is an accepted truth that when you ship things "to be fixed up/improved later", that "later" actually means "never".
I think it's a decent mindset to have. Don't expect to find the bandwidth to come back to this TODO later, but in my experience we do come back and fix these issues. It just doesn't happen right away. I think it's good to have the self-awareness that the team will probably move on to other things, but in terms of accepted truths, I find this one isn't always the case.
I've met plenty of folks who genuinely don't have refactoring and cleanup in their vocabulary, and that might be the root cause of this feeling of "later" meaning "never", but if you have the discipline to clean things up, things end up okay.
To me, this sounds more like a project management/culture issue than a development issue, due to any of the following:
Extremely difficult testing and approval of changes post-launch. AKA: the waterfall model.
The customer refuses to pay for maintenance or tweaks, only for new features, so management will never allocate time for it.
High company turnover (due to other reasons) and a lack of issue tracking, so the "todo" is forgotten.
All the reasons above have less to do with development and more to do with project management, which a disciplined team can work on.
Unfortunately, it's one that is all too common in many places, and it has somewhat rightfully earned its place as a "truth" or "rule of thumb". Something I hope will improve in time.
It's probably also driven by the young developer's desire to produce the best code ever, right now, instead of a solution for the user.
Self-discipline or good management.
That "high quality code" or "fully featured software" is gonna make the product successful / viral.
I strongly believe that poor software can be very successful if the team knows how to "sell" the product.
By "sell" I mean:
Marketing the product.
Convincing other people to use it.
Letting the community talk about it.
Success in the short term doesn't depend on the quality of your code.
In the long term, it's another story. If the application grows, the technical debt will increase, and in the end you will have a mess full of bugs which will be difficult to refactor.
Rewriting will be your only (costly) solution.
It depends on your business, what you build, whether you want a lot of features, and how long the product should work correctly...
I like this a lot, and it also tells the story of dev.to well. Early on I knew growing this thing was not about writing perfect code or even having the perfect product, it was about finding the market, having a process, and a plan.
These days, shipping code and squashing bugs is so much more important than it was before. We sort of "earned" the right to be able to focus on the code.
// , “It is not so important to be serious as it is to be serious about the important things. The monkey wears an expression of seriousness... but the monkey is serious because he itches."
// , Now here's an "accepted truth" that bears discussion, even from the storied heights of Thoughtworks.
Rewriting an application from scratch has a business purpose, and scaling is a feature, like any feature, subject to "feature creep".
martinfowler.com/bliki/Sacrificial...
"You're sitting in a meeting, contemplating the code that your team has been working on for the last couple of years. You've come to the decision that the best thing you can do now is to throw away all that code, and rebuild on a totally new architecture. How does that make you feel about that doomed code, about the time you spent working on it, about the decisions you made all that time ago?
For many people throwing away a code base is a sign of failure, perhaps understandable given the inherent exploratory nature of software development, but still failure.
But often the best code you can write now is code you'll discard in a couple of years time."
Truth; example: Wordpress.
Yesterday, I was looking for a clean way to test private methods in Java, and all I could find were comments about why you shouldn’t do that at all (see: StackOverflow). Instead, you should only test the public API. But, why?
I make almost all of my methods private, so I don’t have to worry about deprecating code in the future. That doesn’t mean I want to try to figure out all the possible combinations of inputs on the exposed methods. I’d rather do more granular testing on the private methods. Yeah, it’s likely the tests will go to waste in the future, but it’s much easier to prove that everything actually works as intended.
I've never seen any recommendation that you shouldn't define unit tests for your own private functions. As soon as you have something which might fail, a unit test may be useful. I even think my most useful unit tests are for very technical private functions at the core of my libs.
It's possible the recommendation you've read was more about exposing things just for the tests, or maybe related to functions whose signature or existence was unstable, or maybe a Java-specific problem? Care to share the source of the advice you got?
I disagree here. Your private methods are implementation details. You should only need to test the public interface of the API. If you find that your public methods depend on too many different private methods, it is a sign that those can be broken out into a class of their own.
Yes, but tests are also meant to check that the implementation works as intended.
It's more often the opposite: a core private function, accessed through several public functions, none of them covering the whole possibilities of the function doing the real job (and the one also which may fail). In such a case, testing all the public functions adds a lot of noise and reduces the stress put on the function which matters.
Actually, I first got this recommendation from StackOverflow. Then I did some digging, and this article is at the top of Google for the key phrase “should I test private methods.”
To your point about it being a Java problem, it sort of is. You can’t test private methods—at least not directly—which makes it painful to actually go about writing any tests.
Also, for the record, I totally agree with you. I find it odd, however, that there seems to be some “accepted truth” at least on the internet about it.
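For what it's worth, reflection is the usual escape hatch when you do decide a private method is worth hitting directly in Java. A minimal sketch, with an invented `PriceCalculator` class standing in for real code:

```java
import java.lang.reflect.Method;

// Hypothetical class with a private helper we want to exercise directly.
class PriceCalculator {
    public double total(double net) {
        return net + tax(net);
    }

    // Private implementation detail.
    private double tax(double net) {
        return net * 0.2;
    }
}

public class PrivateMethodReflectionDemo {
    public static void main(String[] args) throws Exception {
        PriceCalculator calc = new PriceCalculator();

        // Look up the private method by name and parameter types...
        Method tax = PriceCalculator.class.getDeclaredMethod("tax", double.class);
        // ...and lift the access check so the test can call it.
        tax.setAccessible(true);

        double result = (double) tax.invoke(calc, 100.0);
        System.out.println(result); // 20.0
    }
}
```

The common compromise is to make the method package-private instead, so tests living in the same package can call it without any reflection.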
I get what they mean but I disagree.
But I might have a different approach to tests: It is my opinion that, just like it's useless to add a "return the thing" comment in front of every getter, it's useless to add mundane tests which can't fail. A test is a piece of code, which you must understand, maintain, and which must have a reason. A test is needed when you may imagine the tested function might fail (even if you can't imagine how).
The main reason to test a function isn't because it's public, but because it might fail.
Most public APIs reduce the possibilities of the underlying core, for various reasons. But as the implementer of this core, you may still want to ensure the core does what it's supposed to do.
And a public API most often offers many ways to call the same underlying implementation. Refusing to test the implementation would involve duplicating the tests for all facades, which means bloating your tests.
Now this is an argument I can get behind.
There is some amount of white box testing that I think is important. How do you know a public API works if you don’t test the limits of the internals? Do you just blindly throw inputs at it and hope for the best?
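To make the facade point above concrete, here's an invented sketch (`Slugifier` and its methods are hypothetical): two public overloads funnel into one package-private core, and a single direct test of the core covers what would otherwise be duplicated across both facades. Package-private rather than private also sidesteps the Java access problem, since a test in the same package can call it.

```java
// Hypothetical example: several public entry points, one core doing the real work.
class Slugifier {
    public String slugify(String title) {
        return core(title, '-');
    }

    public String slugify(String title, char sep) {
        return core(title, sep);
    }

    // Package-private (not private) so a test in the same package can hit it
    // directly, without reflection.
    String core(String text, char sep) {
        StringBuilder out = new StringBuilder();
        for (char c : text.toLowerCase().toCharArray()) {
            if (Character.isLetterOrDigit(c)) {
                out.append(c);
            } else if (out.length() > 0 && out.charAt(out.length() - 1) != sep) {
                out.append(sep); // collapse runs of punctuation/spaces into one separator
            }
        }
        // Trim a trailing separator left by punctuation at the end.
        if (out.length() > 0 && out.charAt(out.length() - 1) == sep) {
            out.setLength(out.length() - 1);
        }
        return out.toString();
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        Slugifier s = new Slugifier();
        // One direct test of the core covers both public facades.
        System.out.println(s.core("Hello, World!", '-')); // hello-world
        System.out.println(s.slugify("Hello, World!"));   // hello-world
    }
}
```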
That everyone should do TDD or something similar.
Well, you need to have a way to ensure code maintainability, and TDD is a very efficient and easy way to do it.
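As a tiny illustration of the loop (the leap-year example is invented), the checks in `main` are written first and the implementation is grown until they pass; plain asserts stand in for a real test framework:

```java
public class TddSketch {
    // Step 2: implementation written *after* the checks below, grown until green.
    static boolean isLeapYear(int y) {
        return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
    }

    public static void main(String[] args) {
        // Step 1: these checks existed first and drove the implementation above.
        check(isLeapYear(2024));
        check(!isLeapYear(1900)); // century rule
        check(isLeapYear(2000));  // 400-year rule
        check(!isLeapYear(2023));
        System.out.println("all green");
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("red: a check failed");
    }
}
```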
Management works.
This isn't so much of a "truth" as it is an oddity.
DRYness is good. Decoupling is good. But the only way to increase DRYness is to increase coupling.
Recently I've tested making "util" classes/files (depending on language), where I put functions that have to be used from multiple classes.
These I try to keep as small and generic as possible, so that coupling errors don't happen (and so far they haven't).
Haha! 40 Classes to do one thing!
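A sketch of that "util" style (names invented): the holder is stateless, and its functions depend only on their arguments, so calling it from many classes shares code without coupling those classes to each other.

```java
// Hypothetical util holder: small, stateless, generic functions only.
final class TextUtil {
    private TextUtil() {} // no instances; it's just a namespace for functions

    // Depends only on its arguments, never on the callers' types, so using it
    // from many classes adds no real coupling between them.
    static String truncate(String s, int max) {
        if (s == null || s.length() <= max) return s;
        return s.substring(0, Math.max(0, max - 3)) + "...";
    }
}

public class UtilDemo {
    public static void main(String[] args) {
        System.out.println(TextUtil.truncate("a short title", 50));
        System.out.println(TextUtil.truncate("a very long headline indeed", 10));
    }
}
```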
That knowing about various types of tests (unit, functional, regression, end-to-end) is relevant. Tests are just tests, I don't care what types they are as long as they are automated.
That nitpicking style during code review is normal (use an automatic formatting tool instead)
That having Jenkins/CircleCI/Travis/whatever doing automated things on your build means you do Continuous Integration. No, CI means that every push goes to master and is therefore integrated immediately.
That 'insert framework name here' will save me time developing.
Not adding prototypes to base objects (Array, Number, ...) in JavaScript, but creating class methods or functions instead.
It's just so that you don't break the web like MooTools did.
That a Pascal variable of type Boolean is defined as False from inception while all other types are "undefined"... not a biggie, I just don't want inconsistencies in programming languages...
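For contrast, Java is at least consistent on this point: every field type has a defined default (false, 0, null), and local variables of any type must be assigned before use or the code won't compile. A quick sketch:

```java
public class DefaultsDemo {
    static boolean flag;  // defaults to false
    static int count;     // defaults to 0
    static String name;   // defaults to null

    public static void main(String[] args) {
        System.out.println(flag + " " + count + " " + name); // false 0 null
        // int local;
        // System.out.println(local); // would not compile: local not initialized
    }
}
```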