DISCLAIMER: I know I have a reputation for my strong dislike of Javascript, but I assure you, this post has nothing to do with that opinion. If you can code responsibly in JS, and it seems to you the best tool for the job, by all means use it. This post should not be taken to imply that "real programmers use X". Real programmers can use any language well, including JS.
I've been either an observer or a participant in many conversations regarding Javascript, and I've noticed a common theme among the talking points of many of the language's fans: Javascript is malleable.
They often go on to point out all the "magical" things you can do in the language; how JS shape-shifts to fit whatever you're doing. For example, its (in)famously weak typing system is a favorite feature of its followers, while being a chief bone of contention among some of its detractors.
In pondering the praises sung of JS by bright-eyed young developers, many of the same who decry many other languages as "clunky," "too hard," and "sharp-edged," I've come to formulate an alarming theory:
Javascript's popularity may be, in large part, a function of endemic mental laziness, rather than of the language's efficacy and suitability.
Look at the other trends that have emerged in our global culture in the last five years:
Shortened attention spans have given rise to "TL;DR" ("too lazy; didn't read") whenever it takes more than five minutes to read an article.

Frameworks fall in and out of favor at a historic rate, at the same pace as teen fashion trends.
Basic human language skills, like spelling and grammar, have been eschewed by many who scoff at the necessity of communicating clearly to others (and then often get angry when people don't understand them).
An alarming number of young developers dismiss the relevance of learning algorithms altogether, as "the older generation of math nerds did that for us, so we don't have to." (I wish I weren't quoting an actual person.)
I've known college students (some of them my own classmates at the time) who would whine about having to spend 30 minutes on their homework. I knew one student who argued he should be allowed to copy answers out of the back of his algebra book because, and I quote, "the professor shouldn't expect me to take the time to learn this stuff."
I've interviewed candidates who bragged about their code only just being good enough to pass a college assignment, or who laughed off their failure to read the specification for a coding challenge as unimportant and irrelevant.
Understand, I'm only documenting the trends I've observed; I have no intent of sticking labels on any individual. Whether these statements apply to you personally, you'll have to decide for yourself.
In short, there is a growing cultural trend towards a wholesale avoidance of anything that requires concerted, prolonged thought.
When viewing some of Javascript's purported chief selling points through this lens, it is easy to see that its popularity may in fact be because it spares the developer from feeling the immediate consequences of not thinking things through.
Mind you, those consequences still manifest themselves as logic bugs and performance problems, and no one can deny that modern apps and websites are rife with both problems. One might even argue that the quality of shipped software is at an all-time low.
I've noticed that a number of these developers who use JS in this manner rarely have more than a passing familiarity with other languages. They dismiss C++ as "too hard" and "legacy". They often have to seek out regular help debugging Python code they wrote with the same slipshod "anything goes" mentality, while dismissing the admonishments of experts to change their approach.
Contrary to the wild stories novices tell about the language, C++ developers often produce code that is stable and performant, because the language's own touchy nature requires the programmer to exercise care and concerted thought. Yet we seldom feel we're wasting our time, because we know that the extra effort to produce working code also minimizes our debugging effort later. We're probably not going to be awake at 2 AM because "prod crashed".
P.S. This isn't to say C++ is without its flaws, and many languages exist that effectively resolve some of these issues (such as D and Rust).
And while Python appears to be "magical" in its own right, experienced Python developers are among the strictest sticklers for pedantic technical accuracy alive, always pursuing the best possible solution, rather than the quickest.
Once again, none of this should be taken to imply that Real Programmers Use Butterflies. The problem isn't Javascript. It's the trend towards apathy that has bled into the world of programming, brought in largely by individuals who embraced cultural laziness, and now wish to use it to shortcut their way to a high-paying job.
These same individuals, I would argue, often wind up embracing Javascript not because it is suitable or well-designed, but because it panders to their resistance to putting forth meaningful mental effort.
Denizens of this crowd exist among the users of every language. They're usually the ones to avoid -Werror in C/C++, because "they can't get their code to compile." They're the ones who complain about linters and other static analyzers, because they "just fill the code with a bunch of red squiggly lines." They're the ones who use double any time their value has a decimal point, instead of considering float, because they don't want to think about the actual needs of their data and calculations. I could go on.
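The float-versus-double point is easy to make concrete. Here's a minimal, illustrative Python sketch using the standard library's array module (the temperature data is invented for illustration); the same reasoning applies to choosing float over double in C, C++, or Java:

```python
from array import array

# Daily high temperatures to one decimal place: scientifically
# approximate data that doesn't need double precision.
temps = [round(20.0 + 0.1 * (i % 50), 1) for i in range(365)]

singles = array('f', temps)  # 32-bit IEEE floats, ~7 significant digits
doubles = array('d', temps)  # 64-bit IEEE floats, twice the storage

# Each 'f' element takes 4 bytes; each 'd' element takes 8.
print(singles.itemsize * len(singles))  # 1460 bytes
print(doubles.itemsize * len(doubles))  # 2920 bytes
```

A year of readings at single precision costs half the memory, with no loss that matters for data this coarse. The point isn't the savings itself; it's having thought about whether the precision is needed.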
If you're reading this and seeing a reflection of yourself in my warnings, I hope all this comes as a cold splash of reality for you. You can become a good programmer, but the road to get there is not by any means easy or level.
Bad code is bad code, whether it compiles or not.
You must learn algorithms, paradigms, and design patterns, no matter how "magical" your favorite language seems.
Just because high-level languages and tools abstract away the underlying computer logic, that does not excuse you from understanding said underlying logic.
A good programmer can write excellent code in any language, including Javascript, because he or she puts in the time, effort, and careful thought necessary. There is no shortcut.
If you're using Javascript (or even Python) just because of how easy the "magic" makes everything, maybe it's time you studied under a harsher master, like C, Haskell, or Rust. If that sounds like a waste of your time, then this field is not a good fit for you.
NOTE: It's unfortunate that some have chosen to misinterpret my words as "gatekeeping", either because they didn't fully understand what I said, or because they chose to willfully make a controversy out of this.
A good programmer can come from anywhere, just as a good football player or violinist can come from anywhere. But, as I've already said in the article and repeatedly in the comments, one must consider two things:
Programming isn't right for everyone. Not everyone will be a good programmer, just as not everyone will be a good football player or violinist, because we're all different. Our skills, abilities, talents, and interests are varied.
If one wishes to be a good programmer, they must count the cost of what is involved in becoming a good programmer. A football player must practice the sport, build muscle strength, and spend a lot of time on drills and fitness. A violinist must put in massive time for practice, learn music theory, play both technical exercises and songs over and over again, and take proper care of their hands and their instrument. Count the cost. There will be one; only you can decide if it's right for you.
To pretend that programming requires nothing of the programmer, that it is a free-for-all wherein anyone can build a successful career with nothing more than casual play, does no one any service.
As I've said repeatedly, only you can decide this for yourself. You're welcome to come through the gate; no one is stopping you. But you must decide for yourself if you really want what's on the other side.
Latest comments (47)
Great article and very interesting topic!
Regarding this:
"They're the ones who complain about linters and other static analyzers, because they 'just fill the code with a bunch of red sqiggly lines'."
I couldn't agree more!
Good developers (a.k.a. seniors) CARE.
They care about code readability, reliability and performance.
And they DO CARE about the issues flagged by a linter.
This video showcases a simple and clear example of that --> youtu.be/iQC-9qxhfOk
For whatever reason, I was immediately thinking of...
Honestly, I think that most people are missing the real issue, which is financial. I apologize for the outburst, but this topic really frustrates me, and unfortunately very few are willing to acknowledge the issue.
In general, people, and therefore developers as well, tend to put in the minimum effort required to achieve a goal, without much consideration for the future. Think about it. Where the industry imposes severe restrictions and quality standards, as with buildings for example, the requirements by themselves elevate the good engineers. The requirements are also what keep people who are not relevant to the job out of the industry or out of decision-making paths. You can't have a linguist (no pun intended) who doesn't understand physics drive the project management of building a bridge, for the same reason that you can't have a kindergarten teacher who doesn't like children, or a nuclear plant engineer who doesn't understand nuclear physics. They are unsuitable and even dangerous.
But with software, and especially software that is not life- or mission-critical, we do. We have so many people who lack the necessary background driving decisions, usually sacrificing quality for cost, even when the sacrifice means a shoddy output. We value and expect numbers over quality. Although DevOps is upon us, in many places we still have the business driving features with a complete lack of understanding of the technical challenge, and often they don't even care. I'm not saying we shouldn't do difficult stuff, but the impact needs to be understood from all aspects, including whether the right people are there. When decisions are driven in the dark, there is a problem. When qualified people find themselves surrounded by irrelevance, they get worse, because there is nothing to challenge them. And unfortunately, we are doing this more and more often. To be clear, the issue is not with training inexperienced people, but with the lack of potential and mindset compatibility, which plagues the industry.
I've got multiple examples, and I think the situation is getting worse. With the tools improving, there is a general consensus that less skill is required to be a software engineer, which is one of the best-paying jobs. Anyone would "smell" the gold and want to join the party. And why not? Nobody is telling them no! Instead, we are even punishing the skilled and qualified ones by adding an extra burden without a return on the investment, so long as some managers can argue that they reduced the cost. Cost is important, but would you remove the foundations of a building to cut cost? Would you live in that house? No, but in software we still do these things. Another example is the hiring process, where your skill and talent are impossible to convey because the managers and recruiters involved are unable to evaluate your skill; they don't understand it. Anyone who wants to game the system can get in, and then we wonder where the problem is.
And yes, universities and schools adjust to this mindset. They get paid, after all, to produce numbers without true quality gateways. People who would normally look down on developers are going after this industry, and why wouldn't they? They get paid well to do something that they don't really understand, nor can relate to. But they still get paid much better than in other jobs.
And as a small reminder, similar things happened in the past as well. Is anyone old enough to remember why and how Visual Basic became so successful? Is anyone old enough to remember how people were hired by the banks in the dot-com era? I remember people getting hired at banks with really high salaries just because they could do an alert("Hello") in VBS, because back then just mentioning the word "code" was frowned upon. Former colleagues of mine were biologists who got into software engineering because they had a university degree. They turned out fine, but I'm sure others didn't, and are probably still in the industry. The immaterial nature of software makes it challenging, but also a place where irrelevance is dominant. This is THE elephant in the room; if you don't have one, you are really lucky, and I hope you never get one.
I think I agree with what I'm seeing of your point here.
Of course, the "financial"/traditional business sector has had relatively little to no bearing on FOSS over the past twenty+ years, so if it were primarily financial, these problems wouldn't be creeping into little indie dev-run projects too.
I know I've recommended it elsewhere in the thread, but check out "Dreaming in Code" by Scott Rosenberg. It speaks a lot to the hiccups in software development over the years, both in FOSS and proprietary sectors.
I assume this is the book. Looks interesting, but since I've read both The DevOps Handbook and The Phoenix Project, I would like to ask whether you still recommend it, and how it differs from those books. That is, of course, if you've read either of them. If you haven't, they are a must.
I have read neither, but at a glance (yay Amazon) I can tell you that this isn't even remotely similar. Dreaming in Code isn't a technical book; it is a first-hand account of an actual FOSS project, from its inception to its first stable release, and all of the ups and downs along the way. Rosenberg also takes time to explain the history behind the various project management phenomenons the project experienced. In many ways, it reads more like a story than anything.
In short, I definitely still recommend it.
Thanks
I agree that you have to dig deep to be successful, but I see the point in others' argument that Javascript is not necessarily more magical than other languages.
For example, underneath JS there's ultimately the JIT engine (like V8). Once I had to debug into the V8 internals to see why some code took the slow path. It's magical to the extent that it is rarely encountered.
Similarly, while Haskell on the surface is a smooth language, the compiler internals and the runtime of GHC are pretty complicated (rewrite rules, blackholing...). These are magical in the same way, in that they are rarely dealt with directly.
The common thing is, once you hit the "magic barrier", unless you have the willpower to dig deep, you are stuck and have to rely on others to fix your problem. Which is fine sometimes. Other times it's too costly.
Does that sound good?
I think it sounds reasonable. Again, I'm not actually picking on Javascript in and of itself. I've simply noticed that an apparently significant reason it's "trendy" is because it abstracts away so much at the elementary level, perhaps more than most languages.
The problem isn't that some JS devs are apathetic because JS, but that many apathetic devs use JS because apathy.
Javascript can be used responsibly, and I certainly don't know much about its deeper principles. Thanks for the insight from the other side of the "magic barrier" on that language.
I think your point about the magic barrier is pretty much what I'm saying.
Thank you for this article, you're absolutely right. While the whole world runs on C, young programmers are confused into lazy thinking which is a dead end.
The shade, the shade of it all! 😂
I agree, to know programming as a side-skill and to be a programmer are two different things. I certainly don't mean this for anyone who simply keeps coding in their back pocket merely as an auxiliary skill.
Great tie-back to chess, by the way. Very true.
Know thyself. You can stroll on through the gate if you like, oh greenhorn, but be forewarned, you may find nothing you desire on the other side. Count the cost of this road.
Surely you see that much as gatekeeping?
Like, I understand you say you're not doing it… but if someone is stabbing another person while saying they're not murdering the other, the words don't really mean anything. 🤷🏾♂️
I see gatekeeping as telling someone "you can't come in," when I have demonstrably stated that "if X, then you may not find this field to be a good fit for you." To read everything herein and still label it gatekeeping requires deliberate misinterpretation. In any case, it's willfully misrepresenting my point as being exclusionary and discriminatory, when I have said absolutely nothing to that effect.
Not every person alive will be a good programmer, just like not everyone will be a good football player or a good violinist, because it's not right for everyone. But if someone wants to be a programmer, they must know what's involved, and consider the cost for themselves, the same as a football player or violinist must consider the effort necessary to achieve mastery.
But as I've said again and again, it is up to the reader to determine if this is true of them or not; I have never, and will never, make that judgement call myself.
Gatekeeping is explicitly stating they shouldn't come in, but it also refers to creating an unwelcoming environment—regardless of intent.
Stating another person will not enjoy something they've yet to experience is part of that. Maybe they'll grow into learning the fundamentals as it becomes more practical. Maybe not.
If people are playing videogames and a girl tries to join, it would be gatekeeping to say "this is only for boys". Same with generalizing it to any sort of person.
I argue it is best to let the new person join, show them the best practices, and let them invest however much they want.
Additionally, when I play basketball, I'm not trying to put in the effort to become the next Michael Jordan. But it's still fun. I'm sure there's plenty of things you approach in the same way.
You would say this activity is not right for me?
Just because we do not agree with your takeaway does not mean we are actively misinterpreting you.
Your comment is not the first to show that someone misinterpreted the article.
If the article is read soberly, you can clearly see that gatekeeping and other things were neither meant nor mentioned.
To have a discussion on such a topic is in principle productive and desirable. There are, however, rules for discussion which must be clearly adhered to and which distinguish a discussion from a dispute. Verbal direct and indirect attacks of another person, clear insinuations in offensive wording and snappish answers are not desirable.
On behalf of the community moderators I ask you to discuss this matter objectively and fairly. The discussion should not be stopped or censored, but it is everyone's right to be treated with fairness and respect!
We want to keep up to these standards in this community.
In what way am I verbally attacking said person, either directly or indirectly? I presented questions and scenarios that highlight my concerns. No insults or anything of the sort.
Furthermore, I was specifically referring to the comment I responded to. It seemed to be joking about literal gatekeeping.
I feel the same; it's basically disenchantment all over again.
I like Nim for that reason.
On a constructive note (after a long discussion with the author and realising the author is an adequate person):
I have trouble with the categorical tone of articles which say there is only one right way (that's how I read this article, though I may be misreading it), because the definition of "right" is constantly changing in our field. There are pros and cons to everything. I mean everything.
There are even pros to bad code. Seriously, there are cases where bad code is better. You will have fewer regrets about throwing away clearly bad code, and clearly bad code is easier to replace. I'd argue you can spend less time (or money) writing bad code than good, which means there can be economic benefits to it, even if you have to pay two to three times more later (so-called tech debt).
It depends. Eventually you will learn all of this, but the phrase reads to me as saying you should jump into learning all of it at once. I have strong arguments against learning "design patterns" (if we're talking about the Gang of Four book): people later try to use them everywhere, even where they're not appropriate. You need a certain amount of programming practice to understand when to use them and when not to.
Why not? A lot of people can write business logic without deep understanding and get deeper into the field before it becomes a problem. I see how a lack of knowledge of the underlying technology may hold you back from professional growth at some point, but plenty of people will be just fine without it.
There are millions of shortcuts. Take all of them. Learn what you need right now for your job or life.
There are different opinions on all subjects. These are just some counterpoints to the article, so that readers can decide for themselves.
Four counter-counter-points:
Bad code is unmaintainable and bug-prone, which costs exponentially more in hours, money, downtime, and lost productivity. There are entire studies documenting just the quantifiable cost of this. Read "Dreaming in Code" by Scott Rosenberg for a thorough overview. I can't do it justice in a comment.
There's a profound difference between deferring and avoiding. You don't need to learn everything now, but you should learn these things eventually. I explicitly addressed avoidance.
Abstractions often break, conceal inefficiencies, and/or unexpectedly mismatch logical intent. When that happens, only those who understand said abstractions can fix their code. Every programmer will slam into this painful reality sooner or later (or else live in denial of it and ship unusable code.) Again, see the difference between deferring and avoiding.
There are no shortcuts to becoming a good programmer. You have to put in the time and effort. Those who skip all the hard work eventually find their lack of drive has closed doors. Someone who has been ostensibly coding for eight years, yet lacks much of the general knowledge of their peers, will be regarded as a burden by any development team that has to carry them. That is no way to build a rewarding career.
If you're two years into coding, I don't expect mastery of any of the above by any means! Simply moving towards it is all that can be expected. But if you're six years into coding, and in the same place you were when you started, there's a profound problem. The entire point of the self-examination I called for is to prevent that stagnation.
There are different opinions on all subjects, but these points are beyond opinion. I'm simply restating truths that have been proven time and again. For their efficacy, I appeal to the entire history of software development.
P.S. For decades, many have appeared claiming that "what's right in programming" is somehow changing. They've invariably been proven wrong every time. Abstraction doesn't change the underlying logic. The car doesn't replace the engine.
I mean, Turing was right, no argument there, and the deep nature of computation doesn't change. But "best" practices, languages, and paradigms change a lot. At one point people were saying SQL is dead because it doesn't scale and NoSQL is our savior, and then Google made Spanner (which is possible only due to huge progress in our field; they needed custom clocks to make it work).
Those are still only abstractions around the same logic, and many of the arguments for or against any of those practices or technologies are rooted in an understanding of said logic.
No matter what costume you put the duck in, it's still a duck. We should not pretend it is now a moose.
This is an oversimplification. If we follow this logic, we could all just write programs in assembly (or Brainfuck, which is Turing-complete). Why even bother with higher-level languages and abstractions?
...and we literally could. Albeit, it wouldn't be a good use of our time.
Because they allow us to save repeated implementation effort, not because they prevent us from thinking about the underlying concepts. For example...
When I use the sorted() function in Python, I know I'm using Timsort, which has a worst-case algorithmic efficiency of O(n log n). Whether that has an implication on what I'm doing depends on the situation, but I'm not in the dark about it in any case. (The abstraction means I don't have to reimplement Timsort myself.)

If I'm storing the high and low temperature of the day to a single decimal place, and I'm working in C++, C, or Java, I will use float instead of double. I know I don't need double precision for a number which is merely meant to be scientifically approximate, so I don't want to lazily waste the extra memory or processing time. (The abstraction means I don't have to shove the bytes into memory myself.)

If I'm writing a function that takes a list as an argument, I need to be intentional about either my use or my avoidance of side effects, and that requires an understanding of how the list is being passed: Copy? Reference? Assignment? (The abstraction means I don't have to fiddle with registers.)
When I am deciding between using a loop, recursion, or a generator expression (in, say, Python), I need to understand the pros and cons of each. The wrong decision here can have significant impacts on the performance of the code. (The abstraction means I don't have to mess with assembly jump instructions and/or manual loop unrolling.)
If I'm storing a collection of data, I need to understand how it needs to be accessed. Do I need random access? Am I only adding and accessing values from the front or the back? Does it matter what order it's stored in? Those are just some of the fundamental differences between how a list, a stack, a queue, and an array are stored and accessed in memory. The wrong data structure will at best waste resources, and at worst introduce significant bugs. (The abstraction means I don't have to reimplement these data structures.)
In all these cases, I'm using abstractions, but I'm understanding what the implications of those abstractions are. One can afford to "wave off" a few of these things now and then, but if one habitually ignores them altogether, their code is invariably more prone to bugs, errors, inefficiencies, and maintainability problems. Wanton carelessness in these areas is why we have (for example) web pages that take up more memory than entire operating systems of yesteryear.
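The data-structure point above can be sketched in a few lines of Python (standard library only; the queue contents are invented for illustration):

```python
from collections import deque

# A Python list is a dynamic array: appending and popping at the back
# are O(1), but popping from the front shifts every remaining element,
# costing O(n). A deque pops from either end in O(1), so it is the
# right fit for a queue.
queue = deque(["first", "second", "third"])
queue.append("fourth")    # enqueue at the back, O(1)
front = queue.popleft()   # dequeue from the front, O(1)
print(front)              # first

# The same program written with a list still "works" -- lst.pop(0)
# returns the same value -- but every dequeue silently pays the O(n)
# shifting cost, which compounds as the queue grows.
lst = ["first", "second", "third", "fourth"]
assert lst.pop(0) == front
```

Both versions produce identical output; only the one written with an understanding of the abstraction scales gracefully.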
It's very hard to talk to you; I'm not sure whether you are being serious or trolling.
Following this logic, we would also need to understand electronics; otherwise we are using abstractions of hardware without realising the implications (rowhammer attacks).
Turing tarpit it is.
I'm quite serious, and in fact, there is a degree to which programmers do need to understand some important principles of computer engineering. (Which principles depends on which abstractions you're using, and thus, unpacking.)
As you follow each abstraction down as you encounter it, you learn more and more about the underlying principles of computer programming, and yes, sometimes even hardware. These have profoundly positive influences on your programming skills.
I think a lot of people feel defensive about this because it seems intimidating. They think "I'm not a real programmer because I have no idea what a register is!" To that, I'd say no, you ARE a real programmer. Every day will present a new opportunity to learn. You don't have to try to learn everything, nor do you need to learn it all right now.
The important point, the universal distinguishing characteristic of a good programmer, is simply the willingness to keep learning. When you realize you must make a decision about an abstraction, when you encounter a new tool, when you have a more senior developer point out a pitfall you overlooked, you take the plunge down the rabbit hole and fill in the gaps.
Moment by moment, day by day, fragment by fragment, you uncover the deeper truths underneath the "magic", and you become a better programmer for it.
Example: if you followed just one of those rabbit holes — data structures — all the way down to the silicon, you're really only going to encounter basic memory addressing and CPU caching (and maybe a bit of binary jazz). Neither is as scary or complex as they sound, and both are incredibly enlightening. Once understood, they become as elementary as multiplication.
Yet in that same scenario, understanding (say) how the power supply unit provides voltage to the RAM is utterly irrelevant; it has no meaningful effect on the fundamental concept.
Leaving aside the whole gatekeeping tone of the article, because I see others have covered it already, I do agree with your point that there are some, normally young and inexperienced, developers who tend to go for the shortcuts, and yes, the JS community has done an incredible job of helping there.
I say incredible because, for experienced developers, who already know those basic concepts, higher-level tools are heaven-sent. The problem is that those new developers need advice, need mentoring, and need good role models. Through mentoring, you can show them they're missing out on a lot of things. Through properly highlighting relevant information using the different mediums at our disposal, we can show them what they should be learning.
But we should never, at least in my opinion, blame a language (and I know you're not directly blaming JS here, but you're cutting it very close) for the shortcomings of a small group of its users.
Maybe what we're seeing here, playing devil's advocate, is the start of a new trend. How long have we been coding using the same concepts? We might see some old trend become new again, such as with React, but we're not making any real advances (at the fundamental language level, of course). What if this is an indication that we need a higher-level platform to develop on? What if we need to stop coding the way we do right now, follow the example of these "shortened attention-span" developers, as you called them, and create something that lets them (and us) build software faster, quicker, and without having to worry about lower-level concepts as we do now?
Such has been called for hundreds of times since the genesis of programming. The ball always rolls a little ways, maybe produces a new language, only to discover that we can't get away from the fundamental concepts we're building on.
Read "Dreaming in Code" by Scott Rosenberg.
The "order of blame" is kinda easy to get flipped. Said group isn't apathetic because they use Javascript. They use Javascript (arguably improperly) because of their apathy.
Interesting, I'll look it up. Thanks!