
My current codebase!

As I primarily work with databases, we're still dealing with a huge relational monolith that was created around 10 years ago, perhaps more.

  • Lots of production objects are dead (not used, or wouldn't compile anymore)
  • Objects that were supposed to go to production but never did are there
  • Databases calling objects from other databases in a tangled, multidirectional way
  • Mixed up naming conventions
  • Lots of dynamic SQL
  • The business layer, and sometimes even the presentation layer, has been pushed into the database, primarily as stored procedures, which means changes often have to be made and tracked in many places
  • Lots of quick fixes and hard-coded conditionals

So yep, most of this means that sometimes we're making changes to old objects that have no impact at all. There's a huge risk of breaking things, and a risk that someone will forget to make a change in one place because of the hard-coding.

 

Oh man, are you sure we don't work at the same place? ;-)

I work with most of the above issues (10+ year old monolith), along with hard-coded values that get forgotten in some old dusty corner of the codebase and cause bugs without generating any actual errors!

In our case, management are all non-technical and don't really understand the difference or the business value in good code vs. bad code, or tech debt or any of it, really. And in many ways, that old code worked and got them to this point...but at some point you do start to pay the price.

The monolith lives on.

 

I guess I'm not the only one crying about issues we have!

Technical debt is just like a loan: you have to make a payment on it every month. At the end of the month you haven't really earned anything, but you're in much better shape.

During these few years I've realized that it's difficult to sell this kind of work to management, but it's possible; you just have to learn to speak their language:

  • Shorter development cycles
  • Fewer bugs
  • Faster deployment times
  • Possible infrastructure savings
  • Ability to ship new features faster
  • etc.
 

What's primarily preventing folks from removing dead code at the moment?

 

tl;dr: People don't know how to.

Proper version:

I'd say the root cause is the way our teams have been structured.

Physically, our department is just the database department, and we're supposed to deal with SQL. We're all co-located.

We've also got web development teams in other physical locations.

Now the issue is that the SQL team doesn't know much about the web codebase or the technologies the product has been built with, but knows the business domain very well, along with the underlying data structures (or at least is supposed to).

The web team knows the codebase, but knows very little of the SQL and domain logic.

And when you have to remove some dead code, it's quite a painful process, because the scrum teams are busy with sprint work and debt stories usually sit somewhere at the bottom of the backlog, left behind for higher-priority work.

Of course, this does sound bad, but some of our managers (not all) are technical people and see value in removing technical debt. Things are moving, but slowly.

Also, teams are now becoming more collaborative and trying to broaden their skill sets (not I-shaped people, but T-shaped, E-shaped, all these buzzwords), which will help them understand not just one side of the code, but the whole picture.

 

For my first largish project as a paid developer I was tasked with putting together an admin dashboard. Not too complicated, mostly just CRUD operations. However, I committed a number of dev sins in the process.

  • I reused very little (i.e. I rolled most of the components myself). I thought I could handle it, but in retrospect I should have started with a React component set like semantic-ui or material-ui.

  • Unpaginated requests to the backend (fetching all 1,000 or so resources on page load)

  • Lousy use of constants (magic numbers and magic strings)

  • Lots of monkeypatching to fix bugs. The PMs pushed very hard for fewer bugs, putting pressure on me to fix them quickly, and my lack of experience led me to fix them with even more hard-coding.
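
On the pagination point, the usual fix is to fetch one page at a time and stop on a short page, instead of pulling everything on load. A minimal sketch, where `get_page(offset, limit)` stands in for a hypothetical backend endpoint:

```python
def fetch_all(get_page, per_page=100):
    """Fetch every resource page by page instead of in one giant request.

    get_page(offset, limit) is a stand-in for a hypothetical API call
    that returns a list of at most `limit` items starting at `offset`.
    """
    items, offset = [], 0
    while True:
        batch = get_page(offset, per_page)
        items.extend(batch)
        if len(batch) < per_page:  # a short page means no more data
            return items
        offset += per_page
```

A real frontend would usually render each page as it arrives rather than accumulating them all, but the stopping condition is the same.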

At this point it's annoying, because I've had to live with my sins or slowly resolve them. If tasked with doing it again, I think I could save a boatload of time by having good test coverage.

Though I'm slowly improving it now, it has persisted for so long because at least it worked.

 

"because at least it worked" is the worst kind of tech debt! no one wants to let it go...

 

At work there is an application under active development (new features, not just bug fixes) that is written in ASP classic.

Every once in a while some manager will start making the rounds at cubicles and asking if anyone knows PERL or COBOL....

 

I code Perl daily. It can be written well.

The view many have of Perl is of regular expressions. They are powerful, but they are hard to read. With regexes in Perl, you can use whitespace and comments (the /x modifier) to make readable expressions, but with the Perl-Compatible Regular Expressions (PCRE) you see pulled into other languages, you often don't.
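
That readable style isn't unique to Perl, either. As a sketch, Python's `re.VERBOSE` flag is the direct analogue of Perl's /x:

```python
import re

# With re.VERBOSE (the analogue of Perl's /x), whitespace inside the
# pattern is ignored and '#' starts a comment, so the regex can be
# laid out and documented like ordinary code.
date_re = re.compile(r"""
    (?P<year>  \d{4} ) -  # four-digit year
    (?P<month> \d{2} ) -  # two-digit month
    (?P<day>   \d{2} )    # two-digit day
""", re.VERBOSE)

m = date_re.match("2019-08-24")
print(m.group("year"), m.group("month"), m.group("day"))  # 2019 08 24
```

The condensed one-line equivalent matches exactly the same strings; only the readability differs.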

Another source of bad views of Perl is Matt's Script Archive, which is what people had before Stack Overflow. Except Matt's Script Archive was horrible Perl even in the context of 1990s Perl, and with Perl::Critic, Perl Best Practices, Modern Perl and others, Perl today is far better.

But because of backward compatibility, that crap Perl still works.

I wouldn't necessarily advocate you learn it if it doesn't make sense in your environment, but I don't think it's a bad language.

 

Perl isn't terrible. COBOL, on the other hand....

 

Early 2000s: a program is written as a TCP-based server for a new-ish application-level protocol. It's a C++ program written as a prototype, runs on someone's desktop computer, and has a GUI (this is one program, not a GUI client for a non-GUI server).

2009: I'm hired to add some filtering features to this program. It's a mess. Within any function, there are no meaningful variable names. Everything's just n1, n2, s1, etc. Sometimes a single variable is reused for multiple purposes within a function. I later wonder if the research group that wrote it has been passing us an obfuscated version of the code. The filtering project goes fine, though there's not enough time to refactor the program as a whole. The research group (out of state) that wrote it is still trying to keep control of the codebase, and we have other projects, so there are only so many changes we can make.

It still has a GUI, but my team runs it on a server in an always-open user session.

2014-ish: Windows 2003 is going EOL, and support for the always-on user session for the GUI doesn't exist in Windows 2008. Because my team has a lot of other projects, the research team is tasked with the upgrade. Besides splitting the GUI and server portions of the program, they also undertake a major rewrite of the codebase using modern C++ practices (such as naming your variables), Boost, etc. They lose funding and are laid off halfway through, and we inherit the code.

We test the new version in our staging environment and put it in production. The new version has significant problems with crashing and data staleness in production, but we can't revert because Windows 2003 has gone away. I'm assigned back to the project to help fix it. The codebase is a mess of the old code (basically all of it) plus the new code (as a bridge between the GUI and server portions of the old code). I have an idea for a fix. Basically I make some things asynchronous that were synchronous (which adds complexity, so I don't envy whoever inherited it after that). I think it will take a day. It takes a month. The fix seems to help. I later learn there's still crashing, but I hadn't been hearing about it, because direct communication between the dev and ops teams is no longer encouraged. (I would later have an epiphany: "They just made the queues too big!" But by this point I no longer work there.)

Meanwhile, someone on the local architecture team is allowed to start rewriting the server portion of the program (in C#), and it looks really promising. But then we get new security requirements, so all devs are tasked with remediating the flood of scan findings for about 9 months, and the rewrite is put on the back burner. Shortly after the security work dries up, I leave (as did a lot of devs, after being given 6+ months of mostly sysadmin work), so I don't know if the rewrite was ever allowed to proceed.

 

We have an old 2008 server whose sole purpose is to serve one site written in ASP classic that relies on machine-installed, proprietary iSeries ODBC drivers. It is only used by employees, and as a small company, that site's priority was always at the bottom of the list. Instead, the focus shifted to better public-facing sites.

Every year, we have to make sure its odd balance of PHP 5.3.x, ODBC drivers (from a specific IBM software package) and its 3rd-party PHP license (to bridge some of the things ODBC doesn't cover) is all finely polished and working. Every year, I have to make sure our bill from the foreign 3rd-party licensor isn't lost in spam and gets paid by our accounting department. Every year, I have to dig up documentation from the previous year of exactly what commands to call on the iSeries for the yearly license code to be renewed. Every year, someone calls me about some random part of that website that is "no longer working" and I have to figure out what in the hell was coded 20 years ago.

There's nothing I'd like more than to see it dead. I've gotten rid of the need for the 3rd-party license, the ODBC drivers and the outdated PHP; I just need to finish testing and release its new version. We're going to hold a party as we shut off that server!

 

We're going to hold a party as we shut off that server!

Have fun!

 

Maybe this counts, but it is my current front-burner.

I work in a research lab in a university, and if it works, we keep it until it doesn't. We had an instrument, and we had a spec'd-out workstation for it, which was impressive by 2004 standards. But we stopped using the instrument.

We started to have a need for a labelmaker, so we installed its software on that workstation.

So we have an XP machine with hardware RAID set up to print labels.

And when we lost power on Saturday, that machine lost the ability to boot.

I have a machine that will boot, but it has no CD drive, so I cannot (re)install the labelmaker software.

And if this sounds less Dev and more Admin or Helpdesk, welcome to my stand of many many hats.

 

I'm not sure if it's the worst, but it's something I came across very recently.

I had the urge to check our unit tests a bit. I found that many of our components have unit tests, but they are not even compiled, because they are not declared in the XML file our build tool reads.

No surprise: most of them would not even compile, let alone pass the tests. But I found a component for which I had written a lot of tests a couple of months ago. I felt shame and doubt at the same time. Was I really such a moron?

I learned that someone had said, sometime after that, that we didn't need unit tests in this component, and he removed them from that configuration file. A non-dev gave the final approval in the code review. To me, that's real horror!

 

I'll generalize: the worst technical debt stories are the ones that start with "I will refactor after the release...". Again and again, I prove to myself that it is thousands of times harder to do it then than to do it immediately, during the development and QA phase. On the business side, no manager wants to allocate the time to refactor something that just got released and is fine and working (at least on the surface).

 

Some years ago, I had to do a hack.

Once a day, at night of course, a communication log database had to be processed: files retrieved from different servers, each 1-4 GB in size.

Back in 2007, this was a big deal. One does not simply download 4 GB files from 5-6 different locations worldwide just like that, but since it was video conferencing stuff, we had the lines to do it, at night at least.

The log files were processed and numbers accumulated to have nice and pretty statistics with drill-down in graphs. You know, the kind that managers and sellers like. Pie charts and graphs.

All of this was written into a database, 150,000-200,000 rows from each server. Now, the connector could only handle 65,534 bytes before it broke down and confessed, losing all the data in its buffer.

Since this was a POC, we spent way too much time making it look nice for management, so little time was spent on the back-end. Also, testing never went over 10,000 rows from each site, so there was no problem before it went into production.

In production: big problem. Very big, since the CEO had gone out to all our partners saying that we "now have this fantastic statistics tool".
One emergency fix later:
we count the number of rows written, and every 50,000 we empty the buffer, disconnect and reconnect.
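
That emergency fix (count rows, recycle the connection every 50,000) can be sketched roughly like this; the `connect()` factory and its `write`/`flush`/`close` methods are hypothetical stand-ins for the real connector:

```python
FLUSH_EVERY = 50_000  # recycle well before the connector's buffer gives up

def write_rows(connect, rows):
    """Write rows through a fragile connector, reconnecting every
    FLUSH_EVERY rows so its internal buffer never overflows.

    connect() is a hypothetical factory returning an object with
    write(row), flush() and close() methods.
    """
    conn, written = connect(), 0
    for row in rows:
        conn.write(row)
        written += 1
        if written % FLUSH_EVERY == 0:
            conn.flush()
            conn.close()
            conn = connect()  # fresh connection, empty buffer
    conn.flush()
    conn.close()
```

Ugly, but it keeps each connection's buffer far below the 65,534-byte limit, which is exactly the point of the hack.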

The reason it is my worst technical debt is that, just before writing this, I checked whether they are still using the same code, and, yes, you guessed it: they do.
