I have this fear sometimes. I am not a real expert in any technology, but I have a decent grasp of many. Over time, I have developed more of an expertise around browser networking, if I were to pick anything, but I'm still no expert there.

Over time, I think the goal should be to develop a "T-shaped expertise": that is, you have a broad array of skills, but one particular area has become your greatest expertise.

I personally am more than happy to hire "generalists" as long as I think they are good learners and could develop an expertise if the situation calls for it. Anyone who is too strictly focused on one specific language or environment might be too rigid to shift when the shift needs to happen.

 

I have recruited mostly generalists in the past, but with a programming twist: they are capable of taking a more holistic approach to a problem, instead of just "I have to write a piece of code that compiles and passes unit tests."

 

The underlying question is: what is the basis people bring with them, and are you - as an employer - able to work with it?

The notion of "generalist" or "specialist" is bias-laden in one way or the other. And given the dimensions of our industry, we are all at some point "specialists": tied to only a subsection of our field.

What is missing in the figure is the area of requirements, which largely overlaps both the generalist and the specialist.

The gaps in knowledge are uninteresting. What is interesting is: are they willing to fill the gaps?

 

There is always a need for generalists. People who would otherwise identify as strictly focused on one specific thing are also really generalists, because they have had to adopt other skills to drive their specialization. Whether your workplace can subsist solely on generalists depends on how important and technical your work is.

You can see this in a team where the core of the team, even just one key person, lives in a very narrow band of language|environment|technology. Not because they are too rigid to shift, but because they work at the very edge of what technology can provide, whether that is performance or reliability.

Most languages|environments|technologies can do neither. If you aren't very concerned about getting the most, while taking the least, out of the hardware you are running on, your work isn't that technical. If you don't have to assure bulletproof reliability because there is a reasonable belief that downtime wouldn't put anyone's life and limb in jeopardy, it probably isn't that important.

But if there are people already doing both, why aren't we building on that expertise instead of enervating the industry with rubbish fad tech?

 

The "Jack of all trades but a master of none" phrase is a misnomer. It's bullshit.

Originally "Jack" or "Jack of all trades" was used as a saying for someone who was a generalist. Someone who didn't have expert-level knowledge of things, but lots of general knowledge of multiple topics. The "master of none" part was added later just to turn it into an insult. So the first thing here is to drop the "master of none", as it serves no purpose but to be insulting to yourself.

Having said that, here's the rub: a "Jack of all trades" doesn't specify a depth of knowledge for each "trade" it references. Software development is incredibly complex and almost no one has what I would consider "full depth" on a topic. There are just too many abstractions. So what you may consider shallow enough to be a generalist, someone else may consider deep enough to be a specialist in a specific topic. It's a vague phrase attempting to be applied to an area of immense complexity.

It just doesn't work.

Something else to consider: a "Jack of all trades" likely knows enough to be a master or expert at picking the correct tool, or at integration, whereas someone who is, say, an expert in JavaScript may want to use JavaScript for every single job, even when it's inappropriate or not a good fit.

 

There's also something to be said for new perspectives - every language/technology I've picked up has taught me things that apply to all of them. The quirks of each can shift your perspective enough to make something clear that was previously foggy. Even after I abandon something that's no longer relevant (cough...actionscript) - I know I got something out of learning it.

 

You can pick at the author's choice of phrasing, but there is a real danger to anyone's career that he's speaking to.

I did this for most of my early career - focusing on one shiny thing after another, never really developing any depth in any one topic. This way lies madness, because you'll look up after a great many years and think "What have I actually accomplished?"

 

Depends on how one perceives things. For some people, working across a large number of technologies might be a win-win situation, while others may think of it as holding back their careers.

 

It's entirely possible. Sometimes I don't even feel like a Jack of all trades. I feel like because I've worked with so many different languages and only created 1 or 2 projects in them, I barely know anything about programming at all.

I call myself a Java developer, but I don't remember the last time I wrote anything in Java.

I do however think it's possible to become a mixture of both. You can know one or two languages very well but also be flexible to create projects in languages you don't use often.

 

This is a question I have struggled with in the early part of my career. Coincidentally, this is the exact question I have been discussing in a blog series. I'm linking to these posts at the end. Read them (and watch the videos) for details. Here is the summary:

  • In the early part of your career, do as many things as possible. This way you will know what you like and what comes naturally to you. This includes languages like Go, Swift and Node, but also other business functions like marketing and business analysis.
  • Once you nail down a few areas, go deep into them. Build your expertise.
  • Know that you don't have to know everything, and certainly not all at once. We take about 5-6 years to master a skill, so you can build your career in a layered fashion.

If you follow this pattern, then you become a T-shaped expert as Ben says. The top guys in the military are known as "generals".

Like all advice, this advice is contextual.

If you need more details, read these posts:
Should you specialize or generalize? What Jack Ma & Derek Sivers say about this: jjude.com/specialize-or-generalize/
The curse of everything and now: jjude.com/all-and-now/
You got 11 lives. Live to master 11 areas: jjude.com/11-lives/

 

I like most of these points; however, for accuracy, "marketing" and "business analysis" are not developer roles.

 

Right, Andrew. But as you come up in your career (even as a developer), these skills (marketing, especially) help you accelerate it.

 

I find that generalist and specialist are the wrong axes to measure. Each is problematic. When it comes to developers, I like to compare those with a deep understanding of computer science, who can pick up any language as they see fit, vs. those who learn just enough coding, algorithms, and data structures to pass a job interview. The latter can be effective when the waters are calm, but when the storm hits (and it always does), I'll take the former every time. Scientists vs. opportunists.

 

I chanced to read this thread and it resonated deeply. This has been an issue that has dogged my entire technical life (well over a decade) in the software industry.

When I first started in the space, I definitely felt that depth was better than breadth. Better to be really really good at one thing than to know a lot of things. So I got a PhD (in distributed systems & reliability engineering) and my first job in that domain. I worked in it for all of 10 months - then got an opportunity to work on a new project in mobile. I knew nothing about anything in that space, but I was younger and more fearless I guess because I jumped in without worrying about it.

It changed my life and I never worked in the area of my PhD expertise again.

Instead I have since worked on multiple projects (large and small), in different domains, with different stacks, languages and devices. And had to pick up new ideas, tools and technologies along the way constantly. And my takeaway was this. It was not the topic of my PhD that helped me -- it was the capacity it gave me to look at a problem, explore it from different angles and try to innovate solutions by using a wide variety of tools from analogies to trial-and-error to multi-disciplinary collaborations.

So my answer to you is: It depends on YOU. On how you approach the whole idea of learning and applying what you learn. And what your GOAL is when you pick up a new language to learn.

Don't learn for the sake of learning. And don't simply chase after every cool language or technology. Definitely spend an hour reading and absorbing the details, but after that, ask yourself what your goal is. Then set aside the right amount of time and plan the effort out so that the task itself goes from "just something I did for fun" to "this is what I learnt from doing X, and now I can apply it to doing Y later".

In my case, I try to do one of three things for every new language/technology I learn:

  1. I spend just an hour or two, understand it, then think about where I would apply it. Then I file that away in my head and move on. It comes in handy when having discussions with others or in seeing patterns in other contexts.
  2. I spend a few weeks on it. I build something concrete with it (a side project or hack) and see if it lives up to my expectations. If it does, then I try to do a talk on it. (I am trying to start blogging regularly, but I do love interactive discussion better.) At this point my goal is to share knowledge and learn something new in the process. It also solidifies my basic understanding and gets me out of "beginner" mode.
  3. I spend a few months on it. I build something for a client, or I build layers onto a side project. And I get a real sense for how it could fit into some bigger project or platform. I look to collaborate with others who might need that piece, or might be able to fill other pieces that I need to make that project come to life. And at that point, it has become a new tool in my coding tool-belt that I can pull out and use for future needs.

Hope that helped :-)

 
 

I think you're building up a skill in addition to these languages - the skill it takes to fearlessly pick up a new language! It can feel like you're not mastering any one particular thing, but I find that's hard to do until you have a real reason to solve a specific problem with a specific language. If you've got basic programming knowledge in a variety of languages, you've probably got a good grasp of the abstract concepts - possibly better than if you'd stayed with just one.

Still, it's a good idea to keep an eye out for situations where you can deepen your knowledge and mastery in an area you're interested in. Try solving new problems, or implementing design patterns, rather than solving the same problems in a new language.

 

There is a great book called "The Pragmatic Programmer" by Andrew Hunt and David Thomas. In it, they say we should learn at least one new language every year. Learning new programming languages helps us broaden our thinking. In general, learning is not a bad thing; it's a good thing.

 

I thought of this question when I came across this concept: en.wikipedia.org/wiki/Lazy_learning

If you applied it to humans, I'd say the moment you become a "jack of all trades" is when you've consistently lazy-learned, picking up just enough to solve each problem, until the accumulated knowledge begins to overlap and form a sort of generalization.
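For the machine-learning sense of the term, a lazy learner stores its training examples verbatim and defers all generalization to query time; k-nearest-neighbours is the textbook case. A minimal Python sketch, with names and data invented purely for illustration:

```python
from collections import Counter
import math

# A lazy learner memorizes the training data as-is; the "model" is
# built only when a query actually arrives.
class KNNClassifier:
    def __init__(self, k=3):
        self.k = k
        self.examples = []  # (point, label) pairs, stored verbatim

    def fit(self, points, labels):
        # "Training" is just memorization -- no generalization up front.
        self.examples = list(zip(points, labels))

    def predict(self, query):
        # Generalize only now: vote among the k closest stored examples.
        nearest = sorted(self.examples,
                         key=lambda e: math.dist(e[0], query))[: self.k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

clf = KNNClassifier(k=3)
clf.fit([(0, 0), (0, 1), (5, 5), (6, 5)], ["a", "a", "b", "b"])
print(clf.predict((5, 6)))  # → b
```

The analogy to the comment above: `fit` is the "learn just enough" step, and useful answers only emerge once enough stored experience overlaps around the query.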

 

Misnomers aside, sometimes you just have to be a Jack of all trades, but it doesn't really mean that you become master of none.

First things first: I'm very far from being the greatest developer, or sysadmin, or data analyst.

For several reasons I ended up doing bioinformatics (mostly because I really like doing bioinformatics), but academically I started being a chemist.

What changed?

For starters I always thought that everything could be explained, modelled and even proved theoretically, thus I had to learn computer science the hard way (by myself).

When I realised I could do my own algorithms and my own programs, it was time to share those tools with the scientific community, that is, porting my tools from command-line interfaces to more human-friendly interfaces like GUIs or web-tools. Thus I had to learn even another set of languages.

At some point it was really hard for me to say that I was a chemist, since I never really was a chemist. I did a masters in chemical-biological sciences majoring in microbiology, and then my PhD in genetics, all of them using computer science.

Jack of all trades, sure, master of none, most likely.

However it hit me some time ago:

Being a Jack of all trades does not mean that I ought to be master of none.

If you ask me what I am, I'd say without even thinking: "bioinformatician".

And that's my speciality, and that's what I do, and that's what I'm master of, although it requires me being a sysadmin, a backend developer and a data analyst.

What no one tells you, is that some jobs are for "those with a deep understanding of computer science, who can pick up any language as they see fit" either during calm times or when the storm hits. It's hard, but in the end it pays real well (sadly, I'm not speaking in terms of money).

 

Everything is relative. I believe that starting on a new language is the most difficult part. It's really great if you've already gotten past that part; with the basics, you can continue down any path.

Complex programs require base knowledge, and complex algorithms are made of multiple basic algorithms.

I like to learn new things, but I'm stuck on PHP because of my work. Here you sometimes find experts that don't know how to make a simple HTML form, and newbies that can do almost anything with a good guide.

Keep it simple.

 

It's great that you're asking yourself that question. From where I sit the answer is easy: Are you attaining mastery in any of them?

Or to be more specific, have you reached the point in any of them where coding in them becomes almost like second nature? Where the syntax just flows from your keyboard?

If not, consider doing what I did and putting a moratorium on learning new languages for a while. Doesn't have to be forever, I took a couple of years off and really dove into learning Python. Best career choice I ever made for myself.

 

Hi, Hannah! My opinion is that it is worth becoming a master of at least one (probably, ideally, two), and that these phases of learning lots of new things and learning a lot about a specific area should come in alternating waves.

It is worth mastering at least one because it will give you a very in-depth competence that you can rely on when you want to play with an idea. You can rely on your knowledge of that language to remove the language as a hurdle when you want to try building an HTTP server or a tool to get something done that you're interested in / find useful. One thing to keep in mind is that when you're learning something new, you're paying a lot to get past the learning hump. For example, I've found that really new things require somewhere between 5 and 25 failures (like real solid attempts and just thoroughly failing) before I can use them to accomplish a real goal.

It is worth alternating these phases because it will open your eyes to new possibilities. Consider the 80/20 "rule", which says 20% of your time is spent getting 80% of the value, and 80% of your time is spent getting the next 20% of the value. So if you only spend your time learning new things, you're constantly in that high-effort for low-payoff area. But if you only spend your time in an area you're already competent in, then you might have mastered the tool in your toolbox, but that tool is a hammer and is a very poor fit when you need a screw or a saw. So by alternating, you allow yourself to build a competence (that honestly translates generally), and also a breadth of familiarity.

Since you're asking about languages, I'll recommend you try to get the depth of familiarity in something to address the browser, the server, and something lower-level like a systems language. Paradigm-wise, I'll advocate you use one "object-oriented" language and one "functional" language. Sort of trying to pick a set of languages that is fairly small, but gives you an ability to address a lot of domains and experience a lot of ideas. I could talk a bit about what probably fits well here, but I don't know your goals and experience well enough.

I'll also point out that you can gain a massive amount of knowledge in a short time by implementing your own mediocre versions of tools you use. In this case, that means writing a language. Based on what you're saying, I'll recommend "The Elements of Computing Systems" aka "NAND to Tetris", which will have you write something like 5 languages, each building atop the one before it. The only one that is really difficult to build was the very last one (because that language was sophisticated enough to require nontrivial parsing).

 

Oh, just looked at your profile and I see it says "A die hard software developer. I do Ruby, Javascript, DevOps and currently learning Golang". That seems like a really good set of core languages to me. JavaScript gives you access to a ton of domains; it's the most popular language right now and will only continue to grow. Ruby is a wonderful language; it's the best scripting language that exists, IMO, and has a nice ability to be refactored towards the level of abstraction that you need (i.e. it scales from "script" to "program" very nicely). Go gives you systems access, it's super fast, it does concurrency better than either of the other two, and it exposes you to a type system. So investing further in each of these languages will set you up very well!

 

A computer language is not like a human language. Only very few human languages share something in common, so speaking a lot of them could make you a jack of all trades but a master of none.

Computer languages, by contrast, are not only strict in their regularity; they also share things in common according to their purpose and supported paradigms.

There are, for example, C-like languages such as Java, C++ and C#. C and C++ appear more robust and powerful - and you can do a lot wrong with them. But these languages share rules with each other, which makes it easy to learn across them.

There are also scripting languages like JavaScript, Ruby or Python. Each of them has its own feature set, but they also share things in common.

You'll learn to master all the languages you learn if you understand the principles behind them. That is:

  • Which paradigms do they support?
  • How do they deal with memory?
  • How do they deal with data types?

You'll learn to cluster languages along these three axes, and understanding how each language supports its paradigms will help you find similarities between languages. That makes it easier to learn a new one of a similar kind and, eventually, to master them all.

 

If you have a deep understanding of a lower-level language (like C) and a higher-level one (like Java), plus maybe a functional language, then adding more languages should not make you a jack of all trades.
Instead, you pick up the patterns of a new language faster, and you should have a gut feeling for what is going on when you use certain features of those languages.

I started (in that order) with C, Assembler, C++, Delphi, Java (a functional language is still missing). Knowing how to connect C functions with Assembler routines is not something I need for my day-to-day work, but it helped me understand the concepts of by-reference and by-value parameters, stacks, scopes, etc.
Knowing how to allocate and free memory by hand (and how to avoid leaks using certain patterns) also helps me, in languages with a garbage collector, to avoid leaking resources and to avoid wasting CPU time by producing a lot of short-lived objects, for instance.
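Those habits do carry over even when a garbage collector does the bookkeeping. A small, hypothetical Python sketch of the two patterns mentioned above: deterministic cleanup of external resources, and avoiding short-lived object churn (all names here are invented for illustration):

```python
import io

# 1. Deterministic cleanup: a `with` block is the GC-era analogue of a
#    paired malloc/free -- the handle is released even if an error occurs,
#    instead of lingering until the collector gets around to it.
def count_lines(f):
    with f:
        return sum(1 for _ in f)

# 2. Avoid per-iteration allocation: write into a caller-provided buffer
#    instead of building a fresh list on every call, sparing the garbage
#    collector many short-lived objects in a hot loop.
def scale_into(values, factor, out):
    for i, v in enumerate(values):
        out[i] = v * factor
    return out

print(count_lines(io.StringIO("a\nb\nc\n")))  # → 3
buf = [0.0] * 4                               # allocated once, reused
print(scale_into([1, 2, 3, 4], 2.0, buf))     # → [2.0, 4.0, 6.0, 8.0]
```

Neither pattern is mandatory in Python, but both come naturally to anyone who has had to match every allocation with a free by hand.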

Conclusion: The point is to have a deep understanding of maybe two languages. Otherwise you would be a master of none, and not even a Jack of all trades.

 

Maybe this is not an entirely relevant quote, but my mentor said: "Knowing how, and especially where, to search for something is knowledge too." Of course this doesn't cover all bases, but his intention was to point out that I already knew the language, just not the framework they were using that well. I really like that quote because it's true; however, one should always show the will to learn more. That is how I got my first job in IT, anyway.

So, generalisation is indeed the way to go, but specialise in at least one thing, just to show your own value.

 

I think there are two different things here. If all you do when you code is learn how to create a hello world in each language, then you are screwed. But if you as a developer put some effort into becoming a better programmer, and think about coding more clearly and better, then those concepts will follow you no matter what paradigm or language you use.

I am a tech lead in Node.js, and I can't remember whether the length of a string is a method or a property. Stack Overflow is there for solving formal stuff. But when I do a code review, I analyze whether things are named correctly, whether the architectural patterns are followed, whether refactoring is needed, and such. You must distinguish which things in a language are just formal concepts and which concepts will make your code more readable and maintainable. After all, the most important role of a developer is to create a piece of software that can be understood, replaced or modified by anyone. Because requirements will change and programmers will change. Those are the only certainties.

 

This is an awesome question!

I agree with many of the points raised thus far in this discussion, which include prioritizing adaptability and flexibility over deep, intense, narrow knowledge. I've thought of myself as a "generalist" or "jack of all trades" (or for those Final Fantasy players among us, a "red mage") for most of my web development career. Overall, my broader understanding of web development and related disciplines has helped more than hindered.

One benefit to being a so-called generalist is knowing who to call upon when deeper knowledge and insight is necessary. Connecting colleagues and team members with the subject matter experts who can power through specific code challenges does wonders for building your professional/personal network and pushing you into other business-related skillsets that can be a big boon as you advance in your career.

Plus, if this means you get to partner with someone who's a sheer dynamo in their language/profession, you can learn an immense amount from them, and can potentially call upon them as a mentor for whichever path you take in the future.

 

Here's an old, dusty post I wrote on my oft-neglected blog about being a "red mage" generalist: blog.bright-matrix.net/2010/03/31/.... I hope it's helpful in some way to the folks who read this thread.

 

pbs.twimg.com/media/DAkJJG9XoAAak3... This sums it all up, in my view. I think they are two sides of the same coin. Each has its advantages and disadvantages.

 
 
 

This is a false dichotomy. Knowing several things is never a problem. The problem is if all you have is surface level understanding.

 

The more languages you can code in, the more you can shit on other people's languages, and the cooler you will be.

 