
Erik Dietrich

Originally published at blog.ndepend.com

Is Your Team Wrong About Your Codebase? Prove It. Visually.

I originally posted this on the NDepend blog a little under two years ago. NDepend is a static analysis tool that plugs right into Visual Studio.

I don't think I'll shock anyone by pointing out that you can find plenty of disagreements among software developers.

You can see this dynamic writ large across the internet.

But you can also see it writ small among teammates in software groups.  You've seen this before.

Individuals or small camps form around certain competing ideas, like how best to lay out the unit test suite or whether or not to use a certain design pattern.

In healthy groups, these disagreements take the form of friendly banter or good-natured ribbing.  In less healthy groups, they create an us vs. them kind of dynamic and actual resentment.

I've experienced both flavors of this dynamic in my career.  Having to make concessions about how you do things is never fun, but group work requires it.

And so you live with the give-and-take of this in healthy groups.  But in an unhealthy group, frustration mounts with no benefit of positive collaboration to mitigate it.

This holds doubly true when one of the two sides has the decision-making authority or perhaps just writes a lot of the code and claims a form of squatter's rights.

Status Quo Preservation

Division into camps can, of course, take many forms.  But I think the one you see most commonly happens when you have a group of developers or architects who have laid the ground rules for the codebase and then a disparate group of relative newcomers that want to change the status quo.

I once coined a term for a certain archetype in the world of software development: the expert beginner.

Expert beginners wind up in decision-making positions by default and then refuse to be swayed in the face of mounting evidence, third party opinions, or, well, really anything.  They dig in and convince themselves that they're right about all matters relating to the codebase, and they express no interest in hearing dissenting opinions.

This commonly creates the toxic, adversarial dynamic I described above, and it leaves the rest of the group feeling helpless and frustrated.

Of course, this cuts the other way as well.  Sometimes the longest tenured decision makers of the group earned their position for good reason and acquit themselves well in defense of their positions.

Perhaps you shouldn't adopt every passing fad and trend that comes along.  And these folks might find it tiresome to relitigate past architectural decisions ad nauseam every time a new developer hires on.

It probably doesn't help when newbies throw around pejorative terms like "legacy code" and "the old way," either.

Anatomy of a Fruitless Argument Campaign

So let's set aside notions of who, exactly, deserves blame.  In the general case, it's both unknowable and unimportant.

Instead, let's just say that you're part of the minority faction, if you will.  You believe something about the codebase needs to change.

Others don't agree with your take, and so you're having trouble winning hearts and minds to your cause.  Believe me, I empathize.

It's at this point that building one's case can go off the rails and that bitterness can ensue.  And I think this tends to happen because so many people go about building their case in all the wrong ways.

Let's consider some.

First, you'll start off poorly if you resort to hostility or ridicule.  Second, just repeating an argument over and over, increasingly loudly, does not help.

Even if you're right, it won't matter.  You need to persuade others to see things your way.  You need to prove it to them.

But attempts at proof fall victim to logical fallacy pretty easily -- appealing to authority, for instance.

Making weak or incoherent arguments hurts your position.

Prove Things with Data

What you need to do instead involves gathering data and building an actual case for the changes that you want.  If you want to adopt BDD, for instance, don't simply assert its superiority or point out that someone impressive uses it.

See if you can find some data on success that people have had with it.

Do BDD projects enjoy some measurably higher form of customer satisfaction?  Or try an experiment in your own world.  Adopt BDD for a feature and measure certain outcomes to see if they improve.
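To make that concrete, here's a minimal sketch of what "adopt BDD for a feature" might look like in .NET using SpecFlow, a common BDD framework.  The feature, step names, and discount logic are all hypothetical stand-ins, so treat this as an illustration rather than a prescription.

    // Checkout.feature (Gherkin) might read:
    //   Scenario: Customer applies a discount code
    //     Given a cart totaling 100 dollars
    //     When the customer applies the code "SAVE10"
    //     Then the total should be 90 dollars

    using TechTalk.SpecFlow;
    using Xunit;

    [Binding]
    public class CheckoutSteps
    {
        private decimal _total;

        [Given(@"a cart totaling (\d+) dollars")]
        public void GivenACartTotaling(decimal total) => _total = total;

        [When(@"the customer applies the code ""(.*)""")]
        public void WhenTheCustomerApplies(string code) => _total *= 0.9m; // discount logic stubbed

        [Then(@"the total should be (\d+) dollars")]
        public void ThenTheTotalShouldBe(decimal expected) => Assert.Equal(expected, _total);
    }

Build one feature this way, then compare the outcomes you care about -- escaped defects, rework, time to stakeholder sign-off -- against comparable features built without BDD.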

Once you start getting into the realm of building empirical cases, your arguments become harder to ignore.  And that's because they go from "just another opinion" to something more compelling.

Of course, if we're talking about code, you need to gather data differently, often by treating code as data.  You can rely on static analyzers like NDepend to offer feedback and guidance by flagging issues.  But you can also use NDepend to gather data.

Maybe you make a case against singletons in your codebase by pointing out that they incur disproportionately high type rank (NDepend's measure of how central a type is to the rest of the code) while resisting unit test coverage.  Back that case up with data.
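To ground that, here's a hedged sketch of the kind of query you could run in CQLinq, NDepend's C#/LINQ-based code query language.  The thresholds are arbitrary, and the metric names (TypeRank, PercentageCoverage) are NDepend's API as I recall it, so verify them against your version before relying on this:

    // Types that much of the codebase depends on (high TypeRank)
    // but that the unit test suite barely touches.
    from t in Application.Types
    where t.TypeRank > 10 && t.PercentageCoverage < 20
    orderby t.TypeRank descending
    select new { t, t.TypeRank, t.PercentageCoverage }

Filter the results down to your singletons, and you have a repeatable measurement instead of a bare assertion.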

You may not always win the day, but I promise you that you'll make a lot more progress this way than by tossing around questionable logic and strident opinions.

Make a Business Case

But you can actually take all of this one step further.  If you're having arguments about BDD and singletons, that means you're having arguments that interest only your fellow techies.

If management understood all of the ramifications, they might care.  But as it stands, they'd probably rather let you duke it out because it's not in their wheelhouse.

That is unless, of course, you put it into their wheelhouse.  And you can do that by turning your technical argument into a business case.

The business thinks in terms of cost, timelines, risk, and budget, among other things.  If you can gather data and build a technical case for the superiority of an approach, you can gain support among peers; if you can translate it into terms the business understands, you can outflank opponents by winning broader support.

Let's go back to the example of you lobbying against singletons.  You could do some research beyond the code.

Try seeing what percentage of defects originate in singletons and the classes that use them.

Is it higher than for the rest of the codebase?

Or perhaps you find a high degree of correlation between commits to those sorts of classes and features that wind up behind schedule.
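If you want a feel for the mechanics of that first measurement, here's a small, self-contained C# sketch.  The record type, class names, and sample data are all hypothetical stand-ins for what you'd actually pull from your defect tracker and version control history.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // A defect-fixing commit and the classes it touched.
    record DefectCommit(string Id, List<string> TouchedClasses);

    class DefectShare
    {
        static void Main()
        {
            var singletons = new HashSet<string> { "ConfigManager", "ConnectionHolder" };

            var defectCommits = new List<DefectCommit>
            {
                new("a1f3", new() { "ConfigManager", "OrderService" }),
                new("b7c9", new() { "InvoiceFormatter" }),
                new("c2d4", new() { "ConnectionHolder" }),
            };

            // What share of defect fixes touched singleton code?
            var share = defectCommits.Count(c => c.TouchedClasses.Any(singletons.Contains))
                        / (double)defectCommits.Count;

            Console.WriteLine($"{share:P0} of defect fixes touch singleton code.");
        }
    }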

When you start making business cases based on data, people will pay close attention to what you have to say.

Make a Visual Case

I'll close by offering perhaps my most powerful piece of advice.  If you make a good data-based argument, you may sway your teammates.

If you also build a business case, you may sway the business.  But if you make your case visual, you'll have much better odds of doing both.

Consider an (anonymized) dependency graph that I once generated for a client with NDepend.  I did it to prove the point that the codebase had so much coupling that evolving it to modularity would take more time and cost more than starting from scratch.

I built that case with hard data, such as the ratio of edges to nodes, among other things.  But the picture was, as they say, worth a thousand words.

The folks I was speaking to made up their minds before I even got to the data.
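For the curious, the edges-to-nodes figure behind that case is cheap to compute once you have a dependency graph.  Here's a hedged, self-contained C# sketch over a hypothetical adjacency list (not NDepend's actual object model); a strict tree stays just under one edge per node, so a ratio several times that signals a densely tangled graph.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class CouplingRatio
    {
        static void Main()
        {
            // Each type maps to the types it depends on.
            var graph = new Dictionary<string, string[]>
            {
                ["OrderService"]     = new[] { "ConfigManager", "ConnectionHolder", "Logger" },
                ["ConfigManager"]    = new[] { "Logger" },
                ["ConnectionHolder"] = new[] { "ConfigManager", "Logger" },
                ["Logger"]           = Array.Empty<string>(),
            };

            int nodes = graph.Count;
            int edges = graph.Values.Sum(deps => deps.Length);

            Console.WriteLine($"Edges per node: {(double)edges / nodes:F2}");
        }
    }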

Arguments in the programming world aren't going away anytime soon -- nor should they.  But implementing things you don't agree with will always be frustrating, and you should aim to do so as little as possible.

You think others are wrong?

Prove it.  Build a data-based argument, relate it to the business, and use something like NDepend to create a compelling visual for you.

If you want to take a Freakonomics-style look at your codebase by turning code into data, you can take NDepend for a 14-day spin.

Comments

Kelly Stannard

I would also point out that it is impossible as a salaried employee to convince most managers of anything without a personal relationship. They tend to view any sort of course change as a personal failure that will reflect badly on them, so you are going to need that trust and political capital beforehand. This is especially true if you are new to the company. No one wants the new engineer to come in and quickly point out where things are going wrong.

Aschwin Wesselius

A company should want the new engineer to come in and quickly point out where things are "not that obvious" to him/her.

The point is, and this is the hard part and a mostly underdeveloped area: use soft skills to put the point across. Be humble, ask questions, play "dumb" (uninformed). Repeat the available information, rephrase it, ask a question, and more often than not the silliness, the absurdity of the reasoning behind decisions becomes apparent. And all this just by stating facts, not mumbling opinion. It's a technique you can apply everywhere, not only in the programming world.

At the current company I work for, the state of the codebase was already obvious. People acknowledge it and own it. Good. But even though I only started in March 2022, this year, I am able to make suggestions, propose proofs of concept for improvement, or at least ponder approaches that might be useful, now or later on. Presenting myself as someone who wants to bring value and improvement has been highly welcomed. Just don't be cocky, arrogant, rude, aggressive, or mean. Nobody wants such a (new) member on the team.

Packaging the bad news is a highly valued skill to possess.

Sandor Dargo

Thanks for the article; it's really interesting. I fear that sometimes even data is not enough. After all, it's just numbers; you can treat it like statistics that you bend the way you want. We are humans, emotional creatures... For the human part, have you read Driving Technical Change by Terrence Ryan?

Erik Dietrich

That's definitely true, though I always found that data (with visuals) was a great way to influence the emotional aspect in folks. When consulting, I'd always create a whitepaper and an "executive deck" for presenting to leadership/CEO/the board/etc. The whitepaper would be dry findings for reference, and the deck would be more oriented around "here's why you should care."

Corey McCarty

Great article, thanks!

On my team, we have 16 applications that we manage, and NOBODY fully understands even half of them. We spend a great deal of time reading into a given codebase to find root causes. There is a three-project-deep dependency graph of things that we own, and every 'edge' project depends on three dependency jars, where two of those dependencies also depend on the other. It's nightmarish.

When I first joined, I would find myself suggesting massive changes to make things more readable and manageable, but it wasn't long before I realized that there is a need for uniformity and that it takes quite a long time to apply one of these large-scale changes across all 16 applications. (We had similar issues with how standups were handled in the beginning.) Over time we have all come to understand the underlying problems that we all want to solve; however, the solutions are long-term changes, and we still have business changes coming in as we work through our current sets of changes (including the evolution from WebLogic to standalone Spring Boot apps).

The understanding among team members has been paramount. I find that as we move forward, discussion of the problems is far more helpful than presenting a solution and trying to get everyone on board. More often than not, the most manageable solutions are ones with several small parts offered by different people instead of a single massive fix.

To your point, though, it is the problems whose business impact we can communicate that we're able to give some focus to.

Erik Dietrich

I'm glad you liked the post!

For what it's worth, that sounds like a familiar situation to me. For ~4 years, I was an indie IT management consultant specializing in using static analysis to help IT leadership make data-driven decisions about codebases. So I went to a lot of shops and sized up a lot of codebases very quickly.

And in my travels, the situation you're describing was really common, at least in the sense of sweeping dependency snarl creating a lot of inertia around a codebase or codebases. The situations where things were most tenable were ones like yours -- people with a realistic understanding of the issues, mutual patience/minimal judgement, and a pragmatic remediation plan.

The worst situations were the ones where there was deep disagreement over next steps, or even over whether the situation was actually bad. A surprising number of shops just view things like pervasive global state, massive dependency snarl, and miscellaneous crippling tech debt as an inevitable property of writing software.

Corey McCarty

What would be your take on moving such a set of applications to a mono-repo with a build process for the different deployment applications? I've been discussing this possibility with one of my leads as a solution to the difficulty of debugging and of having to build dependencies, upload them to an artifact repo, and pull them into the dependent project just to test a fix. It would be a large undertaking, but it would result in something that I expect would be much easier to manage, and the dependency jars wouldn't need to be stored anywhere externally.

Erik Dietrich

Just so I understand in broad strokes: are you talking about having a mega-codebase to rule them all, and then individual builds that pluck only the dependencies they need as they advance through promotion to production?

If so, (1) I'm kinda fascinated and (2) the first thing I'd wonder about would be the impact on the build and unit test suite. I'm inferring from "jars" that this is a Java codebase, and you'd have partial compiling as an option.

Corey McCarty

Your understanding is correct. It was an idea I'd read about before that seemed pretty interesting, but I've not seen it mentioned anywhere else. I'll have to do some more research, and I may have to run some build experiments to make sure that I can actually pull off what I'm considering.

Erik Dietrich

Hmm... it's an interesting proposition, I'd say. Putting my code/data consultant hat back on, I think the questions that would pop into my head for evaluation would be:

  • What is the feedback loop impact of developer time for compiling/running the app/running the test suite?
  • To what extent does the approach facilitate unnecessary coupling?
  • What is the counter-balancing efficiency/time savings in orchestration around a lot of little codebases, maintenance/management-wise?
  • Would it create a spike in merge conflicts?

Then I'd try to figure out ways to quantify projections about those things. (This was basically the essence of my niche practice -- define the important questions, quantify, and make a case.)