The notion of the “10x Developer” stems from an infamous study that measured and compared the productivity of programmers. The study found that although the programmers had roughly the same amount of experience (seven years), some were far more productive than others.
In fact, the best programmers averaged more than 10 times the productivity of the worst. Hence, 10x. (Note: I said average; the ratio of debugging times was 20:1!)
These radical results have been a source of debate ever since.
I’m very interested in assessing and improving my own skills, so I took a deeper look to see what exactly is so hot about this topic.
It boils down to these four things:
Note: links below!
The original study tracked several measurable abilities, things like program size and time-to-debug. These things are easy to quantify, but what about the things that are harder to graph?
Here are a couple of questions that the study didn’t address:
- How maintainable was the code written?
- How elegant were the bug fixes?
- How well did the programmers deal with murky requirements or requirements gathering?
- Were the solutions user friendly and accessible?
Do you consider these abilities to be part of what it means to be a programmer?
The original study only looked at twelve programmers, over the course of a handful of hours. That is a tiny sample, and the short duration tells you a lot about the kinds of problems they could have been given. How might the results have changed if the test had looked at larger tasks or tougher bugs? Would the top performers have burnt out at their accelerated pace?
Despite the small sample size, similar studies have corroborated the findings. Moreover, studies in other fields have reported similar gaps in ability. Lots of studies. (Scroll down for links!)
So sure, the sample size is terrible… but does that mean the result is wrong?
Are there really programmers who can out-produce 10 other developers? Granted, a single person may be able to out-maneuver a team because of communication overhead – but this study measured at the individual level. Can one professional programmer, with the same seven years of experience, really outperform the independently accumulated efforts of TEN?
Of course not… right? Well… the study didn’t report that the 10x developers did 10 times better than the average programmer. It measured 10 times better than the worst. It doesn’t take much to imagine a scenario where that could be the case, and besides… wouldn’t one of the follow-up studies have debunked it by now?
A lot has changed since 1968, particularly in the realm of programming. Programming techniques, tooling, and languages have matured enormously since then. C hadn’t even been invented yet; would the results have been different if the programmers had been using a more modern toolkit?
Well, as noted above, similar studies have been done since then, and this study was intended to measure the programmer’s productivity – not the program’s. The question isn’t how productive the technology stack is; it’s how productive the programmer is.
Not satisfied with my summary? Well, go read the study and see what you think!
Sackman, H., W.J. Erikson, and E. E. Grant. 1968. “Exploratory Experimental Studies Comparing Online and Offline Programming Performance.” Communications of the ACM 11, no. 1 (January): 3-11.
- Origins of 10X – How Valid is the Underlying Research? (If you only read one, then read this one)
- Great discussion on StackOverflow
- The 10x Developer is Not a Myth
Those similar studies that I mentioned…
- Boehm, Barry W., and Philip N. Papaccio. 1988. “Understanding and Controlling Software Costs.” IEEE Transactions on Software Engineering SE-14, no. 10 (October): 1462-77.
- Boehm, Barry. 1981. Software Engineering Economics. Boston, Mass.: Addison-Wesley.
- Boehm, Barry, et al. 2000. Software Cost Estimation with Cocomo II. Boston, Mass.: Addison-Wesley.
- Boehm, Barry W., T. E. Gray, and T. Seewaldt. 1984. “Prototyping Versus Specifying: A Multiproject Experiment.” IEEE Transactions on Software Engineering SE-10, no. 3 (May): 290-303. Also in Jones 1986b.
- Card, David N. 1987. “A Software Technology Evaluation Program.” Information and Software Technology 29, no. 6 (July/August): 291-300.
- Curtis, Bill. 1981. “Substantiating Programmer Variability.” Proceedings of the IEEE 69, no. 7: 846.
- Curtis, Bill, et al. 1986. “Software Psychology: The Need for an Interdisciplinary Program.” Proceedings of the IEEE 74, no. 8: 1092-1106.
- DeMarco, Tom, and Timothy Lister. 1985. “Programmer Performance and the Effects of the Workplace.” Proceedings of the 8th International Conference on Software Engineering. Washington, D.C.: IEEE Computer Society Press, 268-72.
- DeMarco, Tom, and Timothy Lister. 1999. Peopleware: Productive Projects and Teams, 2d ed. New York: Dorset House.
- Mills, Harlan D. 1983. Software Productivity. Boston, Mass.: Little, Brown.
- Sheil, B. A. 1981. “The Psychological Study of Programming,” Computing Surveys, Vol. 13. No. 1, March 1981.
- Valett, J., and F. E. McGarry. 1989. “A Summary of Software Measurement Experiences in the Software Engineering Laboratory.” Journal of Systems and Software 9, no. 2 (February): 137-48.