TLDR: AI is an abstraction for coding. And like all abstractions, it never replaces the need for the underlying knowledge.
The Theory
There's an ongoing theory that AI will take most jobs, and definitely all software development jobs. I think this is highly exaggerated for a few reasons. In this post I want to look at one of the most basic ideas in software development that contradicts this.
The Law of Leaky Abstractions
This is a concept coined by Joel Spolsky, who explained it in this article. It states:
All non-trivial abstractions, to some degree, are leaky.
I would go a step further and say that "all abstractions are leaky"; the trivial ones are just naturally absorbed.
This means that all abstractions attempt to hide some process from the person using them, and fail. To be honest, the failures tend to show up in edge cases, but it's usually just a matter of time (or scale) until you bump into an edge.
Some of the classical examples in software development of this are:
ORM (Object-Relational Mapping): Still need to know a lot about SQL.
SQL: Still need to know about DB indexes, B-trees and how the DB implementation interprets the SQL statements.
Programming languages: Still need to worry about memory usage, CPU cycles, and code injection.
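As a minimal sketch of the SQL leak (the table, schema, and connection details here are invented for illustration): the query below is fully declarative, yet whether it runs in milliseconds or minutes depends on an index the SQL syntax never mentions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class IndexLeak {
    public static void main(String[] args) throws Exception {
        // Hypothetical local Postgres database; credentials are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/shop", "user", "secret");
             PreparedStatement ps = conn.prepareStatement(
                "SELECT id, email FROM customers WHERE email = ?")) {

            // SQL's promise: describe WHAT you want, not HOW to fetch it.
            ps.setString(1, "ada@example.com");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getString("email"));
                }
            }
            // The leak: without `CREATE INDEX ON customers (email)` this is a
            // full table scan. To use SQL well, you end up learning about
            // indexes and B-trees anyway.
        }
    }
}
```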
This is even applicable to abstractions outside software development, like driving cars or washing machines.
The article mentioned above explains some of these examples in more detail.
Mo Abstractions, Mo Problems
What a big famous philosopher said about money can also be applied to abstractions: the more complex the abstraction, the more it tends to leak.
Think about the earlier examples of SQL and ORMs. If you build your SQL statements by hand like the good old Vikings, you need to know about indexes and the underlying implementation of your database of choice (Postgres, obviously).
But if you value suffering and decide to use an ORM (let's say Hibernate), you now need to know not only all of that, but waaaay more the moment you call getResultList():
"Why isn't this working? What SQL statement does this result in?"
"What annotation should I use on this @Entity marked class field to make it unique?"
"What the hell does this @Strategy mean?"
And you still need to know SQL. That's not me saying it; it's the Hibernate documentation:
Throughout this document, we’ll assume you know SQL and the relational model, at least at a basic level. HQL and JPQL are loosely based on SQL and are easy to learn for anyone familiar with SQL.
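To make that concrete, here's a minimal JPA/Hibernate-style sketch. The Customer entity and query are invented for illustration, but the annotations and the getResultList() call are the standard JPA API, and every comment marks a spot where SQL knowledge leaks back in:

```java
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.EntityManager;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import java.util.List;

@Entity
public class Customer {
    // "What does this strategy mean?" -- it leaks how the database hands out
    // primary keys (identity columns vs. sequences).
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // "How do I make this unique?" -- with a unique constraint, which only
    // makes sense if you already know what that is in SQL.
    @Column(unique = true)
    private String email;
}

class CustomerQueries {
    // "What SQL statement does this result in?" -- HQL/JPQL is "loosely based
    // on SQL", so debugging it means reading the SQL Hibernate generates.
    static List<Customer> findByDomain(EntityManager em, String domain) {
        return em.createQuery(
                    "select c from Customer c where c.email like :pattern",
                    Customer.class)
                .setParameter("pattern", "%@" + domain)
                .getResultList();
    }
}
```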
You may argue this is an anecdotal fallacy, but give some honest thought to all the SDKs, JavaScript frameworks, and low-code platforms out there. While some scream "SKILL ISSUES", I calmly say "leaky abstraction".
So it seems that the further away we are from the most basic operations, the more we need to know about the abstracted "world". In other words, the more complex an abstraction is, the leakier it tends to be.
Is this an issue?
Should we abolish all abstractions, then, since they are cursed, radioactive close talkers that spit when they talk?
Well... no. Abstractions are not the issue. Abstracting something isn't intrinsically bad: it can hide some of the complexity from you for a long time and let you move faster. The issue is concluding that you no longer need the abstracted knowledge. An abstraction should be seen as something that speeds you up, not something that replaces understanding.
The original article mentions the TCP abstraction: most people can do a lot without understanding the "flakiness" of IP. Many even rely on HTTP/HTTPS without understanding TCP or TLS/SSL.
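A small sketch of that leak using Java's built-in HttpClient (the URL is a placeholder): you never touch a socket, yet you still have to decide on timeouts, because the network underneath is allowed to fail.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class HttpLeak {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2)) // why? TCP handshakes can hang
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com"))
                .timeout(Duration.ofSeconds(5)) // why? IP packets get lost
                .build();

        // HTTP hides sockets and retransmissions, but the moment you ask
        // "what timeout should I use?" the flakiness of IP has leaked through.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```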
Most developers working in garbage-collected languages don't understand heap memory, pointers, or CPU cycles (they should). Anyone doing simple enough tasks in these languages probably doesn't think about them much.
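For anyone who wants to see that leak, here's a classic sketch in Java: the garbage collector only frees what becomes unreachable, so one long-lived collection quietly defeats it.

```java
import java.util.ArrayList;
import java.util.List;

public class GcLeak {
    // The GC abstraction: "you never free memory". The leak: the GC only frees
    // what is unreachable, and this static list keeps everything reachable.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void main(String[] args) {
        while (true) {
            CACHE.add(new byte[1_000_000]); // ~1 MB per iteration, never released
            // Eventually: java.lang.OutOfMemoryError: Java heap space.
            // Fixing it requires exactly the heap/reachability knowledge
            // the abstraction promised you could skip.
        }
    }
}
```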
Also, developers hardly ever need to think at the level of the chip's structure or operation when programming: transistor materials, voltages, or temperatures. Maybe L2/L3 cache sizes and the number of cores.
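Even the cache detail is easy to surface. In this sketch (timings are illustrative, not a rigorous benchmark), the same summation runs noticeably slower with the loops swapped, purely because of how the data moves through the cache:

```java
public class CacheLeak {
    public static void main(String[] args) {
        int n = 4096;
        int[][] matrix = new int[n][n];
        long sum = 0;

        long start = System.nanoTime();
        // Row-major traversal: walks memory sequentially, cache-friendly.
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                sum += matrix[i][j];
        System.out.println("row-major:    " + (System.nanoTime() - start) / 1_000_000 + " ms");

        start = System.nanoTime();
        // Column-major traversal: same arithmetic, but it jumps between row
        // arrays on every access, missing the cache far more often.
        for (int j = 0; j < n; j++)
            for (int i = 0; i < n; i++)
                sum += matrix[i][j];
        System.out.println("column-major: " + (System.nanoTime() - start) / 1_000_000 + " ms");

        System.out.println(sum); // keep the JIT from eliminating the loops
    }
}
```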
But none of this "completely" removes the need to understand the underlying, abstracted process. If you've been living like this so far, either your task is simple enough (in your context) or the quality bar for your delivery is low enough.
The AI Case
So what does AI have to do with this? I argue that AI is just an abstraction for coding. Many tasks actually, but let's focus on coding.
AI operates like any other program. It takes an input and provides an output. If you are using it to code, it also works like any other programming language. You pass instructions that are "parsed" into instructions a computer can understand and execute.
Since the parsed instructions in this case are "code", it's fair to say that AI, in this context, is just abstracting the "code" away from you.
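A deliberately hypothetical sketch of that framing; CodeAssistant is invented here for illustration and is not a real library or API:

```java
// Treat "AI" as one more input -> output program in the toolchain.
interface CodeAssistant {
    String generate(String prompt); // natural language in, source code out
}

public class AiAsAbstraction {
    public static void main(String[] args) {
        // Stand-in for a model: returns plausible-looking code for the request.
        CodeAssistant assistant = prompt ->
                "SELECT * FROM customers WHERE email = 'ada@example.com'";

        String generated = assistant.generate(
                "write a query that finds the customer with email ada@example.com");

        // The leak: the abstraction hands you code, but judging it -- SELECT *,
        // no parameter binding, no thought about indexes -- still requires the
        // abstracted knowledge.
        System.out.println(generated);
    }
}
```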
As with other abstractions, it helps you get further without the underlying knowledge, but it doesn't "remove" it from the equation.
The next logical argument would be: "Ok, but if it takes you far enough, it might get you where you need to go."
This is true, and in some cases it already does. Using AI to implement personalized software or automated processes for a single user is a reality. The truth is that you can't go much further than that right now unless you understand the abstraction underneath.
Many developers using AI right now see its capabilities in the hands of someone experienced (themselves) and assume it will be the same for an inexperienced person. It won't.
You might be thinking that it will keep improving and that the current model "is the worst it will ever be". This is true, but it doesn't really mean much; that's a topic for another chapter.
In order to truly replace all developers, AI would have to break the "Leaky Abstraction Law".
I don't think it will.
That doesn't mean it won't change the industry. It has already, but many things have before. Your day-to-day tasks might not be the same, but the underlying knowledge you accumulated over the years is still important, and no abstraction will ever "remove" the need for that.
Keep coding. Until next time.