Ambrose Little

Technical Interviewing is Broken, But We Can Fix It

[Image: Happy Holidays by the Hearth]

Don't worry. I know I'm not the first person to point this out (ex: here and here and here and here--geesh, 4.3K claps! AND HERE and here and here and here and here and here). This post comes both from my personal experience as an interviewee over the years and from my more than fifteen years of experience hiring other technical people.

First, allow me to attempt to establish at least some credibility. While I am no "famous" developer, I have had my fair share of accomplishments over my eighteen-plus-year career. Not everyone will be impressed, and I'm totally fine with that. There are absolutely many far, far, far more impressive individuals in this industry, but I do think it's helpful to realize where I'm coming from in this critique and why I think I have at least some chops to make it.

  • I've worked as a dev, architect, technical product manager, developer interaction designer, team lead, and director.
  • I've worked in seven industries (and depending on how you count it, more), at companies ranging from tiny shops to ones as large as Verizon.
  • I've worked as a consultant, individual contractor with my own business, and employee, both on site and remote.
  • I've managed and hired developers/engineers, architects, graphic/visual designers, interaction/UX designers, dev advocates/evangelists, and dev tools product managers. As part of this, I've designed numerous hiring processes from scratch, iterated on them, and done my best to learn from others in the process.
  • I've been responsible for multimillion dollar budgets and large globally distributed teams.
  • I have co-authored four developer books, more dev articles and blogs than I can remember, and spoken at many developer events.
  • I have been awarded three patents, received the Microsoft MVP award eight times, and was inducted into the INETA speakers bureau.

Believe it or not, I am not a raging egotist, as I hope most of my colleagues would attest. I suffer from imposter syndrome at regular intervals as much as anyone. I only lay these qualifications out for context. With all the many things I do know about making software, I know there are many, many more that I do not. I mainly write the above to remind myself that, yes, maybe I do have something meaningful to offer in this ongoing conversation. More than once I decided to not write or publish this, but I am doing it!

I have a degree in European History with a minor in Humanities. I have taken a few CS courses, but the vast majority of what I know comes from on-the-job learning and extracurricular self-teaching. I don't, in general, devalue CS (or related) degrees, nor would I suggest that not getting one is a better way to go if you want to get into software development. I have tried to make up for the gaps in my formal training, and I dare say that I've been relatively successful in doing so, but it's just plain harder without a CS degree in this industry.

So What is Broken?

I already touched on this and offered some suggestions for a better way here. But I'll just lay this out there. Despite the above, I have been told by interviewers in the last year that I am "not technically strong enough." Say what?

Please keep in mind that I am talking about more or less average full stack and/or front end jobs, within the scope of the tech stacks I am familiar with. I know my limits. I'm not going around applying for embedded development or OS development or development in pretty much any low-level language/domain. I am not applying for highly specialized domains like data visualization, machine learning, etc. I am not applying to jobs with stacks that I have no experience with. No, these are run-of-the-mill jobs for which I have oodles of evidence that I am more than capable of doing effectively. After all, why would I seek out a job where I know I'm going to be immediately found out as an imposter?

And just how have these folks determined this definitive fact that I am not technically strong enough? Care to guess? If you said, "a live coding test with a random brain-teaser-type problem you'd find on codewars.com," you'd be right. Now don't get me wrong, I have also passed interviews with these sorts of tests. Hmm... isn't that odd?

And you know what else is odd? I have never, not once, not ever, been let go due to technical incompetence. I know for a fact that code I wrote over fifteen years ago is still alive and kicking in production. I've written code that is in daily use by at least hundreds of thousands of people. I have never, not once, not ever been told by a manager that I really need to shape up on my technical strength.

What I have been told by managers and colleagues, however, is that I write clear code. I have been complimented on my thoughtful, usable architecture, my attention to documenting my code where necessary, and my care to leave the campground cleaner than how I found it. I have been told that my code has fewer defects than some of my colleagues, both in testing and in production. I have been told by fellow senior devs, senior architects, CEOs, CTOs, and senior VPs that my architectural and technical contributions and advice are highly valued.

And yet this silly little coding challenge, which (I suppose) they think I didn't solve well enough or quickly enough or whatever, is definitive evidence against all of the above that I am not technically strong enough to do essentially the same job I have successfully done for well over ten years? I say something stinks, and it's not my cat's litter box.

And importantly, keep in mind that I am but one example of this problem. As noted above in the article with 4.3K claps, I'm definitely not the only one! I just want to add my story to the mix. And if you want examples of folks more noteworthy than I, check out rejected.us.

The Stifling, Strangling Elitism

As I was pondering all this, it occurred to me to ask my mother (purely coincidentally, as we were driving from the airport) how they do interviews in her line of work. She has a BS, MSW, and MPH and serves veterans in the VA but has also worked in other contexts. She started explaining performance-based interviewing (PBI) to me, and I was like, "Yeah! We do that, too, but we also have these little arbitrary and largely irrelevant technical challenges..."

You know what she asked me? She asked if this was a recent problem in our industry. She asked if something had changed to cause this dysfunction. I actually don't know how this started, but I have some speculations. And I think it started largely from the same source that our massive problem with the gender gap has come from. To put it bluntly, it comes from a bunch of nerds who get their self worth from a sense of intellectual superiority. To be fair, I count myself in that lot, but I have tried to mindfully get better as I have become more aware of this shortcoming.

Somewhere along the line we started this mythos of the computer geek, and it is basically a story of a not-fully-socially-able male who counterbalances that deficiency with his remarkable genius. I hardly think this stereotype needs me to provide evidence for it. Just go hang out on HN or Reddit, or, well, pretty much anywhere that there is a preponderance of wannabe alpha male geek types. And somewhere along the line, this (almost-completely) boys club decided that it wanted to stay exclusive. It started early. If you read the book Hackers by Steven Levy, you'll find its germinating seeds.

Thankfully, our industry has been waking up more and more to this problem, but our practices are still catching up, and I suggest that this interviewing tactic is a manifestation of the problem. In no meaningful way is it a predictor of job performance, but it is a great way to 1) make the interviewer feel smart about himself and 2) weed out the "riff raff" to keep the club exclusive. After all, the only way to be a technically strong, productive member of the team is if you can "whiz through the easy stuff at 100 m.p.h."

You might be wondering where that last quote is from. It comes from what I'd say has likely given this form of interviewing its greatest gust of hot air: none other than Stack Overflow co-founder Joel Spolsky and his "Guerrilla Guide to Interviewing." That guide has been promoted in the dev community through his (otherwise largely good) "Joel Test," which companies rate themselves on for Stack Overflow job postings. Most of the twelve questions are good indicators, but #11 is the one I'm calling out here: "Do new candidates write code during their interview?"

And of course if you follow the breadcrumb, you will arrive at his Guide. As with the Test, his guide has good points, but in the section on "writing code," he not only illustrates massive elitism but even advocates for the much-maligned whiteboard coding technique. Perhaps the outmoded notion of whiteboard coding could be forgiven based on when the guide was authored, although there's no good reason he couldn't have walked a candidate over to a desktop even in those earlier days.

But I digress. I made the assertion of elitism, so I should support that. Note his story about how rare "good" programmers are, how so few make it through pointers and recursion in classes, how just learning Java is so terrible. Let's look.

"If the basic concepts aren’t so easy that you don’t even have to think about them, you’re not going to get the big concepts."

The problem with this is the ambiguous notion of what "basic concepts" are. I mean, if they really are basic concepts, then the statement is a truism. The problem tends to be that every dev, every interviewer, seems to have their own notion of what "basic" means. It was recently asserted to me that using a bitwise XOR to find the number repeated an odd number of times in an array is "simple" (i.e., basic).
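
To make that concrete, here is a quick sketch (my own illustration in JS, not code from the interview; the findOddByCounting name is just mine) of the trick in question next to the boring counting approach most working devs would reach for first:

  // The "clever" version: XOR-ing every element cancels out pairs,
  // leaving only the value that appears an odd number of times.
  const findOdd = (xs) => xs.reduce((a, b) => a ^ b);

  // The plodding-but-readable version: count occurrences, then return the odd one.
  const findOddByCounting = (xs) => {
    const counts = new Map();
    for (const x of xs) {
      counts.set(x, (counts.get(x) || 0) + 1);
    }
    for (const [value, count] of counts) {
      if (count % 2 !== 0) return value;
    }
  };

  findOdd([7, 2, 2, 7, 7]);           // 7
  findOddByCounting([7, 2, 2, 7, 7]); // 7

Both work. The point is that being able to rattle off the first one from memory under pressure tells you nothing about whether someone can ship maintainable software.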

Joel continues, mocking those he says can't cut it in CS degrees: "...They just don’t understand anything any more. 90% of the class goes off and becomes Political Science majors, then they tell their friends that there weren’t enough good looking members of the appropriate sex in their CompSci classes, that’s why they switched. For some reason most people seem to be born without the part of the brain that understands pointers." Do you hear the condescension dripping? Only the special anointed few have this brain part. That there is a neurological mechanism that only some people have is quite an extraordinary claim, when you think about it, and essentially no evidence is offered to support it other than his personal anecdote.

He then derisively uses the term “script jocks” and says, "all good programmers should be able to handle recursion and pointers, and that this is an excellent way to tell if someone is a good programmer." He even says, "I want programmers to know programming down to the CPU level." I seriously wonder how that is relevant in these days of serverless, containers, and immense virtualization. But even in 2006, it was hardly relevant to 90% of software application development (the kind of thing his company did). I'd even suggest it was in some ways less relevant then, but that's a topic for another post.

It's not entirely clear if he means to say that recursion, pointers, and CPU-level knowledge are basics or just what makes a "good programmer." It is clear that even if he does not think these are "basic" that he considers them important and shibboleths for determining good vs. bad programmers.

Now I know there is gonna be a subset of folks in this industry who will eagerly nod their heads at what Joel writes here. They are gonna look askance at me, agreeing that knowing const circleArea = r => Math.PI*Math.pow(r, 2) is something you just can't get by without in this industry. They believe that const startsWithCap = s => /^[A-Z]/.test(s) is just critical to know by heart, as is const sumArray = arr => arr.reduce((acc, curr) => acc + curr, 0).

(For the record, I had to remind myself that Math.PI exists in JS rather than hard-coding 3.14159 as an approximation, and I had to look up the pow function--I was thinking maybe there was an exponent operator like ^, but I remember now that's for bitwise XOR. I guess that tells you how often I've had to use those things despite being a productive full-time JS developer for years.)

I honestly don't disagree that these problems seem basic to me, but the problems themselves tell me almost nothing meaningful about a candidate's general software development ability. And I heartily disagree that recursion and pointers are the litmus test of a good developer. I think they are useful things to understand, but at best they are relatively mundane details. Many devs are productive and get by with a passing understanding. Traversing a tree is something easily learned, and again, you really don't need to know it by rote for most app dev. I'm not really sure where grokking pointers has much practical import, but I'm all for having deeper-than-strictly-necessary understanding. I'm not saying that knowing these things is not helpful, but rather that they are hardly indicators of what makes a "good" programmer (whatever that means).
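
To illustrate just how small a detail it is, here's roughly what an in-order traversal looks like in JS (a minimal sketch of my own, assuming plain { value, left, right } nodes):

  // In-order traversal of a binary tree: recurse left, visit the node, recurse right.
  const inOrder = (node, visit) => {
    if (!node) return;
    inOrder(node.left, visit);
    visit(node.value);
    inOrder(node.right, visit);
  };

  const tree = {
    value: 2,
    left: { value: 1, left: null, right: null },
    right: { value: 3, left: null, right: null },
  };

  inOrder(tree, (v) => console.log(v)); // logs 1, 2, 3

A handful of lines once you've seen the pattern, which is exactly the point: it's learnable in an afternoon, not a marker of innate ability.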

Frankly, if all a company did was ask me Joel's example problems or even quiz me on recursion and pointers, I'd find that to be a relief! But in my experience, the current state of tech interviewing in 2018 tends more towards the clever brain teaser side of things. For example, in one case I was given the problem of rehydrating a binary tree from two arrays. In another, I was asked to find the coordinates and bounds of N rectangles in a two-dimensional array (where 0 is background and 1 is part of a rectangle). One company's interview protocol even suggested that one ought to prepare by going to codewars.com and practicing. I did that, and that's where I ran into the aforementioned solution, const findOdd = (xs) => xs.reduce((a, b) => a ^ b), which was highly voted as a "best practice," if you can believe it.

The point is, these kinds of problems are about as irrelevant to what makes a good developer as is how they might choose to make their coffee or tie their shoes, i.e., not at all. Being able to quickly solve these kinds of problems is a discrete skill that can be learned, as Laszlo Bock notes in his great book, Work Rules! (which I highly recommend!). These brain teasers don't predict future performance; they just tell you the person is good at brain teasers (i.e., they've practiced them).

I also once had a colleague (who had interviewed and passed me) tell me that he just wants to see evidence that a person can code. His particular tool of choice was the relatively simple challenge of reversing a string. Okay, fine, you want to see that someone can actually code, but really? Is that really a real concern? I would like to know if anyone has ever hired someone who claims actual experience making software (or formal training) who literally cannot code. Seriously! If so, that's an easy termination-of-employment conversation.
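
For what it's worth, in JS that challenge boils down to something like this (my own one-liner, not my colleague's exact prompt):

  // Reverse a string: spread into characters, reverse the array, join back together.
  const reverseString = (s) => [...s].reverse().join('');

  reverseString('interview'); // 'weivretni'

If a candidate with years of shipped software behind them truly can't produce something like that, something went very wrong well before the interview.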

Give me 10 minutes with a candidate, and I can ask a handful of structured interview questions that give me more insight into their abilities as a developer than a hundred of these ridiculous tests would. AND I will be able to confidently tell you if the person can write code, without ever having seen them do it. I once had a manager give me a hard time because he asked me to "tech" a candidate, and I spent the time asking PBI questions. I told him that it was supremely clear to me, based on the candidate's responses, whether or not they could actually do the job. And I was right. This particular dev was highly competent, if a bit too clever in some of his solutions (in my opinion).

I have digressed a bit here, but only by way of illustration. The typical "watch someone code" interview problems are no more than a way to screen out folks who don't perform well in on-the-fly, artificial, stressful situations, on isolated problems that have little to no relevance to the business of making software.

So if these tests have (and have been shown to have) such little predictive power, why are they still, after all these years, the go-to screening tool? I doubt there is one answer. I think laziness is part of it, or, more generously, priorities. In smaller shops, devs (and barely trained managers) are expected to run technical interviews, which makes it quite the ad hoc affair. They likely remember how they were drilled, do a quick internet search for tech interview questions, grab a few, and go to town. I've seen this happen more than once!

But even software companies that have fully staffed recruiting organizations use these. There is even a cottage industry of folks who claim to "solve" this problem for companies by offering the vetting service for them! The one I mentioned above that was encouraging people to practice on codewars was such a firm--one of many. People who have the budget and staff to do better are just not doing better.

Naturally, we nerdy engineering types like to try to solve things with "data." We think most human problems can be better handled through some kind of software intervention. So we frame the problem in terms of questions like, "how can we remove bias?" (which is definitely valuable), but we never seem to zoom out that extra level and question whether these kinds of evaluations are actually meaningful and predictive. It's the equivalent of trying to figure out how to make a drill a better hammer.

I think anyone would be hard pressed to show a real correlation between performance on these kinds of questions and actual competence on the job. And that's in large part because measurement of competence is hardly an objective activity in itself. Much like processes, every dev shop has its own ideas about which metrics are important to measure; each company values different things, and advancing those values is what constitutes success and competence there. As with most complex human endeavors, once you isolate things down enough to be able to somewhat objectively measure them, they are too narrow and too distanced from the whole to really be meaningful.

And yet we cling to these seemingly more objective tools to evaluate candidates. We pat ourselves on the back for using "data" and leveraging our special engineering skills to solve the very difficult human-centric problem of predicting success in a role on a team delivering a product or service for other humans. And when the data don't align, we will of course do the human thing and ignore them, or reinterpret them, or question their validity for this, that, or the other--because our nature is ever to justify what we already believe rather than face up to evidence that contradicts it and demands that we alter our beliefs.

A Persistent, Dangerous Mythology

So it's a persistent mythology, at best. But it is a dangerous mythology. Because unless you are one of the big, top tech companies with oodles and oodles of highly qualified applicants beating down your doors, the chances of passing over great candidates who happen to just not do well with these kinds of tests are very high. While the top tech companies can afford, due to sheer volume of applicants, many false negatives, most others in the industry really cannot. The demand for talent far outstrips supply, as is well known, and companies that create even more hurdles (particularly hurdles that don't really tell you anything meaningful) suffer as a result.

There is also the likelihood of perpetuating the diversity gap, because implicit in these kinds of tests is a "let's see if you are like me" mentality. In this case, the "me" is the special snowflake genius developer who can easily solve such problems on the fly. Women already suffer in our industry because the false generalization that they are less capable of this kind of work is still popular enough. We saw this on ever-so-public display thanks to James Damore's "manifesto" and the folks defending it. Whatever socio-cultural and biological influences may contribute to averaged differences in interest, the more problematic belief is the perception that such gendered interests indicate capability in individuals. In other words, these stereotypes result in women who are actually capable of and interested in STEM being institutionally and pervasively dissuaded from pursuing it.

As this article notes, "It is, for example, a hallowed tradition that in job interviews, engineers are expected to stand up and code on whiteboards, a high-pressure situation that works to the disadvantage of those who feel out of place." In other words, these kinds of interview techniques disproportionately disadvantage those who already have a sense that they do not belong (i.e., are in the tiny, looked-down-upon minority of ~18%). Add that to the relatively high likelihood that the (most likely male) alpha geek doing the interviewing harbors bias (unconscious or not), and it further skews the results of such evaluations towards the negative.

On top of that, as this article by Triplebyte notes, there are other admitted and stated biases against perceived "types" of developers. So-named "enterprise" and "academic" developers suffer from a strong negative bias at these companies, and I've personally seen that to be true in the elitism against C# (which I happen to have a strong background in) and .NET in general. For many, the fact that I was awarded MVP by Microsoft is bizarrely an anti-indicator of my competence. Thankfully, I get some creds for my deep JS experience (which is fairly ironic to me, considering how long JS has been pooh-poohed by "real" developers). In any case, it's clear that if you're not hip to the "cool" tech and backgrounds, such bias will skew interviews towards the negative.

So-called enterprise devs are also more likely to not have formal CS backgrounds. They are more likely to focus on solving actual business problems with useful and capable abstractions (like .NET or Java or whatevs) than tinkering with lower level algorithmic problems or, e.g., writing their own compilers. Thus they are less likely to perform well on these kinds of tests. AND YET, they are absolutely productive, capable members of teams creating profitable software for businesses.

To restate, the danger for companies is missing out on otherwise well-qualified individuals due to largely arbitrary, biased, and non-predictive technical evaluations. They also tend to reinforce tribal, alpha geek monoculture that excludes perspectives and experiences from those with more diverse backgrounds, talents, and aptitudes.

We're All Special Snowflake Unicorns

I think the popularity of these kinds of evaluations is also driven by an over-inflated sense of the difficulty of what we do. Every company likes to imagine that it's special and has unique needs that can only be filled by "the best." I've never seen a company that doesn't claim in some way that it "only hires the best." Of course, we all know that can't even remotely be feasible, but we like to say it and like to believe it.

This belief in our own tribal superiority leads us to overestimate the importance of our interview evaluations. And our industry exacerbates the problem by focusing more on things that ultimately matter less to our team success. These tech interviews illustrate that. When we "screen" people, we screen based on things like lists of technologies and/or how well candidates "tech out."

Contrariwise, we often neglect or only pay lip service to the more important aspects like "plays well with others," "is hard working," "readily helps others," and "isn't a prima donna." We like to throw around "passionate," but why isn't this part of the initial screening? Team collaboration and people/soft skills are every bit as important to a company's success as particular technical competence. I would argue they are much more so, on average. As I said here, "working together is about relationships as much as it is about skillsets."

Further, tech is always changing and evolving. The way we build apps today is significantly different from the way we did it 10 years ago. The tech stacks we use have evolved greatly in the last five years, both on the front and back end. This churn shows no signs of slowing. Unless you're just hiring a short-term contractor, it's as important, if not more so, that someone has evidence of effectively learning new tech and new patterns. Why aren't we screening for that up front? (And why aren't companies more serious about ongoing learning? Why aren't more companies investing in training up people who don't even have experience?)

The reality is that with few exceptions in specialized domains, most of what we do is not that special and not that different from the next employer (or your candidate's prior/current employer). Do we really think that what we do requires so much more vetting than the last place (or 10 places) the candidate has worked for? Do we really think that our vetting processes are so damn special that we're gonna magically get better results than the rest of the industry? (Hint: The right answer should be a resounding "No." If you doubt me, go browse this site for a bit.)

So How Can We Fix It?

First off, if you have done these kinds of interviews, you shouldn't feel stupid or bad. They are still basically the "state of the art," and despite the enormous evidence that they are not predictive, most folks just don't know that because they're too busy to find out. A big part of why I wrote this is to help raise awareness. But now you know. Now you can do better.

Unless you're hiring a person with little to no experience, chances are high that they've already successfully created software at other places. Your goal then is not to determine basic programming competence. I'd say your goal is primarily to get to know them, and let them get to know your company and/or team. Talk openly and honestly about your environment, your practices, your current initiatives and future aspirations. Hopefully they'll have questions for you. Use Performance-Based Interviewing (look it up) to drill into how they've done things in the past. Read up on techniques to eliminate bias in doing these structured interviews. It will be a handful of hours well spent, I assure you.

One option is to ditch technical interviewing altogether. The author makes some good points. Ultimately, it is about where the rubber meets the road. This company even offers a "$7,500 quitting bonus" within your first 90 days, which I think is a little brilliant because it makes things safer in the off chance it isn't working out--neither you nor they need to "feel bad" about the separation. Something like this can also offset recruiter fees and bonuses, which typically get paid out after 60-90 days. These approaches do require not being lackadaisical about those first days; you probably want to be ready with a way to determine whether everyone is happy with the fit.

But I think we can still do some degree of interviewing--as long as it's to give the applicant a chance to show their strengths. And that is where the PBI approach helps, by focusing on what they've actually done rather than quizzing them on hypotheticals or arbitrary evaluations. I've found these to be great tools to get to know folks better and get a very good sense of their strengths and weaknesses. Drilling into what people have actually done, why they did it, and asking deeply technical questions around it gives a ton more insight--the single biggest predictor of future performance is past performance.

But if you really, really must see someone code, at least make it have some semblance of what they'd be expected to do on a daily basis. And if you're open to giving them room to pick up "your way" of doing things, as I hope you are, then maybe instead of asking for a task based on what you do, ask for one based on what they say they've done. You are in a better position to map what they've done to your needs than for them to try to guess/learn your expectations on the fly.

I previously advocated (and used) the take-home coding task approach. That is still much better than typical in-person coding challenges, but it is onerous and raises the bar a lot for applicants in terms of personal investment, which isn't good in a competitive market like we have. It also disadvantages those in life situations that don't allow them to invest much time outside of their primary jobs (e.g., single parents or parents of larger families, people with second jobs, etc.). Even if you offer compensation, it is still a big ask. What I found in using that approach was that I got much more value from bringing someone on site to review their solution. We learned a lot more from them during the design and code review than from the task implementation itself. Which leads me to...

A better option, I think, is how my current employer does it. They ask folks to bring in a project that shows their work, walk the team through the solution, and review it together. One of the better indicators of future performance is a relevant work sample, and nothing is more relevant than real work they've actually done. We like to encourage folks to bring something they're proud of, but personally, I would be just as happy going over something someone wasn't proud of and having them explain why and how they'd do it differently next time. In addition to being a much, much, much better indicator than random coding challenges, it is also 1) far less artificially stressful and 2) close to what you might actually do in real life during a peer code review.

If someone doesn't have a project they can readily review with you, you can still offer a task as an option--even collaboratively create it.

Obviously, there's a lot more to it, but I hope, at the very least, this critique prompts those who use the stereotypical coding challenge approach to reconsider. I hope that enough of us continuing to share why these challenges are so problematic will eventually turn the tide in our industry towards more effective approaches being the norm rather than the exception.

Here's to a wonderful new year!

P.S. There is another option for those of us on the interviewee side--to boycott these kinds of interviews. I admit I am very close to thinking that is a good thing to do. To be honest, one reason I personally choose not to is as a personal challenge--I am egotistical enough to want to see if I can pass despite knowing how meaningless the tests are. Still, unless you're just really hard up for a job or really want a particular opportunity, it might be a good option to gracefully bow out. You can point them here or to one of the many other things online by way of explanation. :)

UPDATE (1 Jan 2019): Someone pointed me to this post about how (at least parts of) Microsoft has also recognized the deficiencies of the old interviewing approach and has worked to change it. Now that's a pretty in-depth process, but the benefits are many. Well done!

Top comments (9)

Kyle Stephens

Amazing piece. Should be required reading for all interviewers.

I think it was DHH who wrote that he considers himself to be a "software writer". I.e., it is his job to write his intentions, through code, clearly and unambiguously to be understood by others.

If we're honest with ourselves, unless we're writing seriously low-level code, this is what most of us do. Those who believe otherwise should come off of their high-horses and learn that people and not code is the most important part of their work. Their interviewing approaches should reflect this.

Keith Barrows

I agree. I've done well on many of the "coding challenges" and failed on others. Another interview point that really gets to me is being asked about patterns and practices, diving deep into specific ones like SOLID or Clean Code, passing those and getting hired - only to find the company does NOT use those patterns or practices! I've had several jobs where the interview did not at all match the expected job execution.

johndbro1

I like a lot of your thoughts, and as a 25-year veteran with a CS degree and a master's, who did have to learn how pointers worked and write a simple interpreter, I probably would still fail some of those codewars tests.

Two things stand out as areas of lukewarm disagreement:

1) I find that having some decent familiarity with the way the low level architecture works (assembly, pointers, call stacks, etc) can be beneficial in solving some otherwise incredibly weird technical problems in business applications. And yes, even in the world of virtual machines. This is one of the areas where my education has resulted in savings of tens of thousands of dollars of debugging effort.

2) There are some "developers" who know how to google and fix syntax errors, and that's it. They appear to be developers, but are unable to produce almost anything non-trivial without a template. Those people are not common, but they exist, and I would prefer not to have to clean up after them. Having said that, esoteric coding tests are not the sensible fix.

Thanks again!

Dimitar

I agree with you to some extent, but in today's world knowing the basics is no longer enough. Knowing CS theory and a low-level programming language well won't get you an interview; you have to know a myriad of frameworks and libraries as well, and those change every 5 years or so.

If you are hired to code vanilla C++ or Python, then yes, what you are saying is true, but most of us need to know a whole bunch of extras on top of that. It's almost impossible not to open a Google search, and you can't say that I'm not a true dev because I am not 100% familiar with the inner workings of an obscure framework that came out 5 years ago.

Ambrose Little

I agree with you. The CS things I've learned do come in handy on occasion!

marshallyount

This is a very thought provoking article.

I'm having a little bit of trouble imagining how to apply the PBI format to programming. Would you be interested in writing a followup post that gives a few example questions along with good vs. bad answers?

Alex Larsen

I enjoyed this article. Not to be nitpicky but "qualified" is mis-spelled as "qualifieid" which I don't believe was intentional in the sentence "To restate, the danger for companies is missing out on otherwise well-qualifieid individuals"

Ambrose Little

Thanks. Sometimes I sure wish I had a proofreader. :)

Michael Larson

Can't agree more!