After my first role in Digital Product Management, I was doing an informational interview with an old colleague of mine over Zoom. I asked the exceptionally broad question: "What is a commonality among the most successful ecommerce sites, and to what degree is that a product of picking the right tools?" What he said really stuck with me: "The tools of the business alone are not the drivers of success, although they certainly can limit your success. Where I've noticed success is in brands that blend content and commerce and leverage data to help make decisions." This simple, synthesized idea led me to look back on some technical system implementations (good and bad) and consider what made them succeed or fail.
For example: an adept merchandiser will out-merchandise an AI algorithm (at least in today's world). When we were running a legacy search and merchandising platform, our director asked the tool's customer success manager whether they could accurately ascribe the conversion boost their tool provided. How do we measure success? To this day, that was the worst meeting I have ever been in. The manager explained our own conversion rates back to us and credited their platform with getting us there. Of course, you can't attribute your customer's entire conversion rate to your product, because that doesn't identify the incremental lift in conversion gained with the platform.

After some tough questions, we decided to bake off our current tool against two competitors in a multivariate test. We got the competitors to agree to the test before we would consider switching, and began our month-long experiment. When the results came out the other side, we were somewhat surprised: our current platform was performing better than both competitors! We identified the reason in the post-mortem: the tools we had trialed did not have the merchandiser optimization our site-wide platform had accumulated. Those who know our product best will obviously have a better idea than the AI (especially one that hasn't had the cumulative learning advantage it needed).

The other data we had, however, suggested we did need to make a change. Customer feedback in the call center was exceedingly positive, and the search results were more accurate and targeted (i.e., fewer searches until purchase and fewer null search results), among other good signals. We eventually decided that an evaluation of a replacement platform would need to rely strictly on the data about the strength of its individual pieces, and assume it would become more successful once merchandisers actively took over the product.
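Search-quality metrics like the ones mentioned above can be computed directly from raw search logs. Here is a minimal sketch, assuming hypothetical log records of the form (session ID, query, result count, purchased); the record shape and field names are illustrative, not from any particular platform:

```python
from collections import defaultdict

# Hypothetical search-log records: (session_id, query, result_count, purchased)
search_log = [
    ("s1", "red shoes", 42, False),
    ("s1", "red running shoes", 18, True),
    ("s2", "blu widgt", 0, False),
    ("s2", "blue widget", 7, True),
    ("s3", "gift card", 3, False),
]

def null_search_rate(log):
    """Share of searches that returned zero results."""
    return sum(1 for _, _, n, _ in log if n == 0) / len(log)

def searches_per_purchase(log):
    """Average number of searches in sessions that ended in a purchase."""
    sessions = defaultdict(lambda: [0, False])
    for session_id, _, _, purchased in log:
        sessions[session_id][0] += 1          # count searches per session
        sessions[session_id][1] |= purchased  # did this session convert?
    counts = [n for n, converted in sessions.values() if converted]
    return sum(counts) / len(counts) if counts else float("nan")
```

Tracking both numbers over time is what lets you say "search got better" with more than anecdotes: a falling null-search rate and fewer searches per purchase are exactly the targeted-results signal described above.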
After we picked our winner and went live with it, we saw significant improvements in key KPIs from making the switch.
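The "incremental lift" question from that vendor meeting has a concrete answer once you run a real experiment. A minimal sketch of how you might compute relative conversion lift and its significance from test results, using a standard two-proportion z-test (the function name and inputs here are illustrative assumptions, not the actual analysis we ran):

```python
import math

def conversion_lift(control_sessions, control_orders,
                    variant_sessions, variant_orders):
    """Relative conversion lift of a variant over control, plus a
    two-tailed p-value from a two-proportion z-test."""
    p_control = control_orders / control_sessions
    p_variant = variant_orders / variant_sessions
    lift = (p_variant - p_control) / p_control  # relative lift, e.g. 0.10 = +10%

    # Pooled standard error under the null hypothesis of no difference
    p_pooled = (control_orders + variant_orders) / (control_sessions + variant_sessions)
    se = math.sqrt(p_pooled * (1 - p_pooled)
                   * (1 / control_sessions + 1 / variant_sessions))
    z = (p_variant - p_control) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return lift, p_value
```

This is the difference between "your conversion rate is X, thank us" and "the variant converted Y% better than control, and here's how sure we are": the lift is measured against a concurrent baseline, not claimed from the whole number.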
On another note, tools can hold an organization back, but too much change too quickly can hurt an organization more than it helps. I have been through numerous prioritization plans where too much was put in the pipeline at one time based on high-level strategic initiatives. When this happens, people get burnt out, and eventually something breaks and no one knows what went wrong or how to fix it. If too much is changing at one time, the chances are higher that something critical will fail, and that the fix will take longer to identify.
A good example of this is changing both your front-end and your back-end architecture at the same time. When we tried to change the look and feel of our front-end and speed up site performance by moving to a popular JavaScript framework, while also integrating a new order management system, we had too many moving parts. We mitigated the issue by being more conservative in our sprint plans, slowing some projects so we could complete the JavaScript framework implementation first.