About two years ago, a significant event happened at Stack Overflow: a new system, named Providence, was released. Providence would allow us to t...
Great post. I have a general question: how much value does your team put into the interpretability of the model? I'd guess the highest priority is making good matches and getting users to click, but at what point are you willing to sacrifice slightly better performance for the ability to explain what's going on to someone without statistical expertise?
So far, we've focused exclusively on optimizing for job applications. We've had plans to make the inner workings of the algorithm more transparent, and this blog post is a step in that direction. I'm not sure we'll compromise the relevance of results for the sake of making them easier to explain, but explaining on the site itself how we sort search results is something we'd like to do.
+1 Good Question
Thank you for sharing! One of my favorite things about Stack Overflow is that they don't stop when something is done, or even when it's close to great — they keep going until it's the best. :)
Matching algorithms are no small feat, and yet they're popping up all over (dating sites, adoption and pet-adoption sites). Good work and brilliant insight.
This is fascinating. It's clear Stack Overflow's mindset is that they can build a moat out of their data and deliver the ideal user experience with a really solid product. It would be easier to fall back on the mindset of "we have the biggest community, let's just throw up a job board and it will be useful to some people."