We all know that impostor syndrome is really common among developers. We've all been there. But what if the data don't show us the same result...
You caught me. I think part of the problem is that we compare ourselves against the limited scope of our experience. I think, "Well, I'm good at that," or, "Well, X people come to me for help," so therefore I'm a little above the average of the people I talk to day to day. In reality, there are lots of people we don't interact with, so we don't see the actual average. It becomes relative, based on time in the chair or on titles that had more to do with HR than with us.
I had some similar thoughts and questions today too...
Am I (not) a mid-level dev yet?
Austin Standing
Well, that's really interesting: how do we define "the average" anyway? I think it's better to keep growing personally and professionally without paying too much attention to what everyone else is doing.
Definitely.
Funny, this was something that caught my eye as well. I believe our industry is in an interesting spot right now: the work is in high demand, which leaves a lot of us feeling very valuable, with higher-than-average salaries in an industry where you don't exactly need a formal education to succeed.
Maybe the demand is so high that if they don't value your work at Company X, you know there's some Company Y that will definitely hire you. Given that, it's easy to feel proud of yourself and the work you do (and sometimes that's good too!).
That's not exactly true. If 70% of people were above the median, the median would have to go up, because by definition 50% of people are above the median and 50% below it. But it's entirely possible that only 10% of people are below the mean competency, if those people are really, really bad coders.
Now that raises a more relevant question, in my opinion: how the hell do you determine mean competency and median competency? And what does that even mean?
Hi Nico! Of course it's entirely possible; it's not a statistical impossibility. However, as I quoted above,
The 70% who considered themselves to be above the average would have to be really close to it, whereas the 10% below would have to be extremely far below it. That's why I disregarded that option.
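To make the skew argument concrete, here is a tiny sketch with completely made-up competency scores (hypothetical numbers, not from any survey): a few extreme outliers pull the mean down, so most of the group ends up above the mean without anyone overestimating themselves.

```python
# Hypothetical competency scores on a 0-100 scale (made-up numbers).
# Three very weak outliers drag the mean down, so 70% of the group
# sits above the mean even with honest self-assessments.
scores = [5, 10, 15] + [70] * 7

mean = sum(scores) / len(scores)          # (5 + 10 + 15 + 7 * 70) / 10 = 52.0
above = sum(score > mean for score in scores)

print(f"mean = {mean}")                                 # mean = 52.0
print(f"above the mean: {above}/{len(scores)}")         # above the mean: 7/10
```

Whether real competency is distributed like that is another question, but it shows the 70% figure isn't a statistical impossibility on its own.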
I totally agree with you that you cannot determine competence just like that. And I don't think it would mean anything, in any case.
However, I just wanted to point out the bias and contrast these results with the well-known impostor syndrome.
Since skills are so widely spread in tech, I don't think an overall average makes any sense. How would you compare, say, a dev at a start-up with broad but shallower skills (colloquially "full-stack", don't @ me) with a dev at a larger company who is an expert in one field? Both may deliver similar value and earn the same.
Even within a single technology: do you count knowledge of the syntax, or the produced result?
Do you look at the artist's neat brushwork, or their cultural impact?
In the end it's a gut feeling. At least we know when we really suck at something, and if we usually help people out with things, we might be a bit above average.
My wild guess is that I'm well above average at CSS and average, maybe slightly above on JS and HTML. Definitely far below average in Elixir. And the worst person on earth when it comes to any testing.
Nice analysis! It really is hard to measure competence (and competence in what, exactly?).
I think the most interesting part of it is the cognitive bias we all experience, regardless of how we measure competence.
Stack Overflow surveys are the worst indicator of any genuine data.
Could you please elaborate on that? I read your post about how you don't think Stack Overflow surveys really represent technological choices.
But how is that related to this?