Abstraction vs. Experience/Habit


This question is geared more towards people that have been doing IT "for a long time":

Do you find that, as technologies abstract and simplify things, said "simplification" negatively impacts your ability to adopt them?

Because I've been working in IT since long before GUIs (among other things) became common, it feels like whenever someone tells me "try this IDE" or "try this (simple) language/framework", etc., my brain trips over that simplification. It's like it's incomprehensible that I don't have to write all sorts of plumbing to get a task done - that I can just include/import/etc. a library or module and then write one further line of actual code rather than dozens.

DISCUSS (3)
 

I started coding in 1997 and got my first professional job coding in 1999. I wouldn't say that 20 years is "a long time," but I wanted to share my thoughts nonetheless.

To answer your question:

Do you find that, as technologies abstract and simplify things, said "simplification" negatively impacts your ability to adopt them?

Not at all, as the nature of a tool and my desire to adopt it are completely independent.

I draw a sharp line of separation between the tools I use, and the goals I want to achieve. I view myself as someone whose job is to build things as quickly as possible without sacrificing quality or compromising on vision. This approach allows me to reach for whatever tool helps me best maintain my standards. Sometimes that means I use a tool at a very high level of abstraction; sometimes a tool at the lowest level of abstraction. The tool itself is meaningless to me, only the end result matters.

There was a time, around 6 years ago, when there was a hot debate over whether people should use Angular or React. Angular was the clear favorite, thanks to the prevailing community consensus that "If Google does it, it must be good." By then, however, I had been burned by GWT and knew that wasn't true at all. I was working for a large enterprise and was constantly being asked my thoughts on the Angular vs. React debate, so I made an internal blog post stating my personal opinion. My post was short, but amounted to, "Angular doesn't solve problems I need solving, and React does." This apparently shocked people, as my opinion was my opinion, and I wasn't parroting back what the all-knowing authoritarian council of "they" said I should think. It created a bit of friction between me and the people who were preaching the church of Google internally, but I truly did not care: I assess a tool on its merits, not on its popularity.

Today I use React (and have for 5 years), and it turns out that a lot of people do as well. I have no idea what their motivation for using it is, but I doubt it's because they thoroughly understand the pros and cons of React. Today I also use CoffeeScript, which is a best-in-class multi-paradigm dynamically typed language, and Swiflin (what I call Swift and Kotlin because they are so similar - a bit like C# and Java many years ago...Cava if you will), which is a best-in-class multi-paradigm statically typed language. Neither of these languages was mainstream when I adopted them; I did so not because they were popular, but because they were best-in-class tools. Today Kotlin and Swift are mainstream, and CoffeeScript is not. That's irrelevant to me, as "they" tend to give advice based on the philosophy that newness matters more than whether a tool is best-in-class.

 

I guess my problem is that, having been burned by so many things making decisions on my behalf, my level of trust in "magic" isn't especially high. =)

Usually, said burning has come from other people's problems: "Tell me what's broken and how you think it ought to be functioning, so I have a better chance of figuring out why you're experiencing a difference." That sort of thing tends to force a "close-to-the-metal" habit-set on you.

 

I find higher abstractions can sometimes be tougher to use in the long term, because the abstraction always breaks down at some level. In the end it is a trade-off that you have to decide for yourself.

For example, I use a garbage-collected language (F#). If I run into performance problems because of GC pauses, then I have to start using tactics to avoid allocations, like reusable pools of objects. This is basically managing my own memory, just indirectly, since the language doesn't provide explicit memory control. Despite this, I deem using a GC a worthwhile trade-off for the kind of applications I normally write.
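
To illustrate what I mean by a reusable pool, here's a minimal sketch in F# of the kind of thing I'm describing (the `ObjectPool` type, its `create`/`reset` parameters, and the 64 KiB buffer size are just illustrative, not taken from any particular library):

```fsharp
open System.Collections.Concurrent

/// A minimal, illustrative object pool: rent an instance instead of allocating
/// a new one, and return it when finished, so the GC has less garbage to collect.
type ObjectPool<'T>(create: unit -> 'T, reset: 'T -> unit) =
    let items = ConcurrentBag<'T>()

    /// Take an existing instance from the pool, or create one if the pool is empty.
    member _.Rent() =
        match items.TryTake() with
        | true, item -> item
        | _ -> create ()

    /// Reset the instance and put it back into the pool for reuse.
    member _.Return(item: 'T) =
        reset item
        items.Add item

// Hypothetical usage: reuse 64 KiB byte buffers instead of allocating one per request.
let bufferPool = ObjectPool<byte[]>((fun () -> Array.zeroCreate 65536), ignore)

let buffer = bufferPool.Rent()
// ... fill and process the buffer ...
bufferPool.Return buffer
```

None of that is explicit memory management in the C sense; it's just arranging the code so the GC rarely has anything new to collect.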

On the other hand, I avoid using frameworks of most kinds, because long term I find them to cost more. All abstractions are leaky, and the bigger the abstraction, the leakier it is. I eventually have to spend time learning how the framework functions under the hood, and then puzzling out how to invoke it in a way that accomplishes what I want - all of that instead of just solving my problem more directly without a framework. So I much prefer to create a solution by weaving together simpler libraries, even though a framework would be faster to get started with. My perspective is shaded by the fact that I still have software in production that has evolved over the course of a dozen years.

Thomas H Jones II
Been using UNIX since the late 80s; Linux since the mid-90s; virtualization since the early 2000s and spent the past few years working in the cloud space.