This was my response to a question in one of the Slack Forums I’m a part of. I wanted to share it here because I think it would help PMs faced with a similar (seemingly trivial) situation.
In your experience, how valuable are in-product surveys (like asking “How did you like your experience?” during the user flow, at first conversion, etc.)?
It depends. In my experience, this has not been valuable. I’m in the camp of asking for feedback just once per transaction (assuming yours is an e-com product), and post-order-fulfillment NPS is a well-understood standard for this. I do understand the temptation for an individual team to do this as a more direct way to measure their part of the experience. I’ll tell you the history of how this evolved in my org and why we didn’t do this.
I was responsible for NPS at my last org, an online travel aggregator and a Naspers portfolio company. The NPS metric would be reported all the way up to Bob (CEO of Naspers). The leadership team at Naspers would compare NPS across their portfolio of companies and ask founders/CEOs about trends. Given its importance, an external audit team (the likes of KPMG) would come in every year and check whether we were counting repeat users and repeat clicks, whether we included cancelled orders and payment failures, how many times we asked users, and on what channels – pretty elaborate. They would give us a list of changes to make this a fair comparison across the Naspers portfolio.
Around 2018, management started linking NPS targets to variable payouts – in a bid to improve NPS. This set off a race within teams to address questions about the quality of the part of the experience each of them owned. For example, the transactions team started doing what you mentioned: asking for simple feedback after a conversion. Since they were a strong PM/tech team, they pushed this task into one of their sprints and even started reporting on it. Their argument against NPS was that it was a motherhood metric: good as an output metric but not useful as an input metric if you wanted to focus on improving the experience. This was a fair argument. Multiple other teams wanted to do something similar. However, we started fearing for the end user’s experience if they received multiple feedback prompts – one for booking, one for order fulfillment, and more for other touchpoints. As you can see, the conversation quickly devolved into an undesirable dynamic. We either had to arbitrate which teams had the special right to send a feedback prompt or dismiss the validity of their arguments. Both seemed like bad ideas.
My side of things – this development made my job harder because I was being asked questions on NPS, but each of the teams I depended on wanted to use a proxy. Sure, there would be some correlation between the metrics. But I would have to track each one, ensure the same rules applied in collecting it, and manage the narrative on how they correlated. My team was too small to do all of this; we also had other metrics to focus on. I was trapped in a problem that a lot of horizontal product teams face.
Luckily for me, two things happened. First, the response rate of the post-conversion feedback prompt was low. We learnt that after a conversion event users tend to be distracted: they are busy double-checking whether they ordered the right thing, the right quantity, and other details. They were also checking the communication from their payment provider to confirm the right amount was deducted. Second, management insisted that they were interested in NPS and not an intermediate metric. Teams could use separate metrics, but team leaders were measured by NPS and nothing else. So if they were to use a separate metric, they had to answer how it correlated with NPS. Also, management delegated the concern about the negative experience of multiple prompts to the relevant product folks (me) to decide on. This clarity greatly helped me. We eventually settled on using only the NPS. Each category/subcategory within NPS would be used as an input metric for individual teams. We avoided using a post-conversion feedback prompt because the incentives were set that way.
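For anyone less familiar with the mechanics: NPS is the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6) on a 0–10 scale. Here is a minimal sketch of that arithmetic, including the kind of per-category breakdown we used as team-level input metrics. The response data and category names are entirely made up for illustration.

```python
from collections import defaultdict

def nps(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a 0-10 scale."""
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical (score, category) responses; categories are illustrative only.
responses = [
    (10, "booking"), (9, "booking"), (6, "payments"),
    (8, "payments"), (3, "fulfilment"), (10, "fulfilment"),
]

# Overall NPS: one number reported upward.
overall = nps([score for score, _ in responses])

# Per-category NPS: the input metric each team can act on.
by_category = defaultdict(list)
for score, category in responses:
    by_category[category].append(score)

breakdown = {cat: nps(scores) for cat, scores in by_category.items()}
```

The point of the breakdown is that a single team never needs its own feedback prompt: slicing one shared survey by category gives each team an actionable view while users see only one prompt.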
I later learnt that NPS was intentionally kept as every team’s shared metric. This reflected the fact that the quality of the user experience is, at the end of the day, every team’s concern – even if the dots connecting it to individual incentives weren’t obvious.
All of this took 6 months to play out. And yes, we did improve the NPS in the final quarter that year.
I’m not sure if this directly answers your question. Hopefully it gives you a broader perspective for making a decision in your context.