When Good is Perfect

Analytic MVP

A business case for adopting the minimum viable product approach to analytics. 


We all want to deliver our best, right?  But does delivering our best mean producing the most technically impressive product we are capable of? Or does it mean delivering what is best for the business?  The answer is usually the latter.

Often, as analysts and data scientists we can lose sight of this.  We want to try new techniques and show off our magical skills.  Sometimes, though, it’s important to step back and remember that it’s better to be practical than to be perfect.


A phone company has a new customer retention package that they would like to offer to customers at high risk of churning.  They need to identify users who are likely to churn, so they can reach out, offer the package, and hopefully retain those users.

Weighing the B Grade vs. the A+ Grade Solution

Within a relatively short time period, we can produce a workable classification model that predicts user churn with 80% accuracy.  Awesome!  We roll it out to a limited set of users, see how it performs, and tweak it from there.  We might tune the audience, the offer, or the model itself.  Either way, we have delivered our minimum viable solution.

But if we're honest with ourselves, an 80% model can feel unsatisfying to us and look unappealing to our stakeholders.  We know we could do better if we just put more time into it.  We want to optimize our models, spend more time massaging the data set, and implement any other technique that could give us an additional boost in performance.

Reality check

The above steps sound all well and good, but the problem is that perfection takes time.  In reality, we need to think very carefully about the cost of taking extra time to solve a problem to perfection.  Without a solution, the problem continues to grow, and the cost repercussions continue to mount.  Consider the Netflix Prize: it took a crowdsourced competition and roughly three years to improve Netflix’s rating-prediction accuracy by 10%.  In our scenario, the phone company continues to lose customers every month, and therefore continues to lose money.

Additionally, having our analytics team pursue perfection on one problem means they are not able to tackle the next one.  In the time it takes to knock out the A+ solution, they may have been able to deliver two B solutions that achieve equivalent or greater success.

Finally, we need to question the net gain of seeking perfection on the first try.  If we are lucky, we might discover a silver-bullet insight, such as a data transformation the model needs or a specific algorithm that is an excellent fit for the problem.

Conversely, by testing the MVP in the real world, we will likely learn more powerful insights about the problem.  Typically, we find that we had made some incorrect assumptions about our data set, our problem definition, or even our success metrics.  This is not a failure; it is a success, because we have learned new insights that we can build back into our solution for higher performance.

By the numbers

Let’s compare rolling out the 80% model within the first month and waiting to roll out the 95% accurate model in the second month. 

The quick, 80% accurate model

Let’s assume the company has 10 million customers and an industry-average monthly churn rate of 2%.

This means that 200,000 customers churn monthly (10 million customers * 0.02 churn).

The prediction model has an accuracy of 80%.  Using this model, we can correctly identify roughly 80% of the churning customers, giving us 160,000 predicted churners this month (200,000 customers * 0.8 prediction success rate).

Let’s assume we target all of the predicted churners and that, on average, 15% of them accept the package and remain customers.  This means we keep 24,000 customers that first month (160,000 customers * 0.15 package acceptance).

With the industry-average wireless phone bill at $73 per month, the company saves $1,752,000 in the first month alone (24,000 customers * $73 average monthly spend).

This $1,752,000 is also monthly recurring revenue: the money spent by the retained customers continues to roll in every month until they eventually churn.

Considering the natural 2% monthly churn rate, that first month savings of $1,752,000 amounts to a total savings of $18,858,815 over the course of 12 months.
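The arithmetic above can be sketched in a few lines of Python.  The figures (10 million customers, 2% monthly churn, 15% package acceptance, $73 average bill) are the assumptions stated above; the variable names are mine:

```python
# Savings from the 80%-accurate model, using the assumed figures above.
CUSTOMERS = 10_000_000
MONTHLY_CHURN = 0.02   # industry-average monthly churn rate
ACCURACY = 0.80        # share of churners the model correctly flags
ACCEPT_RATE = 0.15     # share of targeted churners who accept the offer
AVG_BILL = 73          # industry-average monthly wireless bill ($)

churners = CUSTOMERS * MONTHLY_CHURN    # 200,000 churning customers
predicted = churners * ACCURACY         # 160,000 correctly predicted
retained = predicted * ACCEPT_RATE      # 24,000 customers kept
monthly_savings = retained * AVG_BILL   # $1,752,000 in month 1

# Retained customers keep paying, but they too churn at 2% per month,
# so month 1's savings decay geometrically across the 12-month window.
twelve_month_savings = sum(
    monthly_savings * (1 - MONTHLY_CHURN) ** m for m in range(12)
)
print(f"Month 1 savings:  ${monthly_savings:,.0f}")
print(f"12-month savings: ${twelve_month_savings:,.0f}")
```

Running this reproduces the $1,752,000 and $18,858,815 figures above.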

The 95% accurate model, 1 month later

Using the same math as above, we calculate that a 95% accurate model generates $2,080,500 in savings in its first month.

This means that by waiting a month, our more perfect model delivers an extra $328,500 per month.  That sounds like a great result, but did waiting actually save the business money?

The Net

Releasing the B model a month early has very large benefits.  Its execution in month 1 alone compounds to a net gain of $18,858,815 over the course of 12 months.  BUT the A+ model provides a higher recurring monthly savings: starting with execution in month 2, it delivers a delta of $328,500 per month, which adds up to $3,272,987 over the remaining 11 months.

How does this net out?  By waiting to execute the more accurate model until month 2, we lose a net of $15,585,828 over the span of the 12-month period.
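The net comparison can be sketched the same way.  This assumes, as above, that the A+ model’s extra savings only start in month 2 and that each saved cohort decays at the 2% churn rate:

```python
# Net cost of waiting a month for the 95%-accurate model,
# using the assumed figures above.
CHURN = 0.02
MONTHLY_B = 200_000 * 0.80 * 0.15 * 73   # $1,752,000/month (80% model)
MONTHLY_A = 200_000 * 0.95 * 0.15 * 73   # $2,080,500/month (95% model)
delta = MONTHLY_A - MONTHLY_B            # $328,500/month advantage

# B model's month-1 savings, decaying at 2% churn over 12 months
b_gain = sum(MONTHLY_B * (1 - CHURN) ** m for m in range(12))
# A+ model's extra delta, starting in month 2, over the remaining 11 months
a_extra = sum(delta * (1 - CHURN) ** m for m in range(11))

net_cost_of_waiting = b_gain - a_extra
print(f"Cost of waiting: ${net_cost_of_waiting:,.0f}")
```

This lands within a dollar of the $15,585,828 quoted above; the tiny difference comes from rounding the intermediate figures.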

Knowing When to Cash Your Chips

I am all for the pursuit of excellence.  It is an admirable trait and can absolutely be appropriate in some scenarios.  For example, with very mature data models in the financial or insurance sector, every small increase in accuracy has a large impact. 

However, when you spend your time tweaking a model for higher accuracy, you really need to consider what that time is costing your business.  Tweaking models to perfection can be a time-consuming and costly game.  So keep striving for perfection; just define perfection based on what is right for the business.

Written by Laura Ellis