Spotting the right artificial intelligence use case for your product (POCs)

“With great power comes great responsibility”, and product executives are no exception. Product leaders are constantly on the alert for ways to enrich the company’s product, to strengthen its impact, and to increase the company’s revenue.

You’re aware of the huge buzz around AI and ML. You’ve heard that machine learning could significantly improve your technology through endless use cases such as fraud detection, demand forecasting, pricing optimization, customer behavior optimization, product personalization, and product recommendation. But with so many options, how do you choose the right use case for machine learning within your unique product?

It’s likely that some of the items in that list of commonly adopted machine learning uses for technology products captured your imagination, conjuring up visions of your tech on steroids, not to mention a few more zeros at the end of your quarterly revenue projections.

But it’s critical to choose the right use case for your MVP. The success of integrating machine learning with your product depends greatly on identifying where it can make the most impact and add the most value in the shortest amount of time. 

Here’s a handy step-by-step guide for determining the right use of machine learning for your product.

Step 1: What seems to be the problem?

First, consider the most pressing challenge around your product. Think about who might be suffering most from this issue, and how solving it would impact their work and wellbeing. 

Let’s take “churn”, for example. Is this a problem that is impacting your business and your team? What value would a solution to this problem bring to your team, and to the business as a whole? 

In one of my first product management roles, my team was building a trading platform, and we noticed concerning churn rates at one of the onboarding steps (this is easy to spot with analytics). Understanding who will churn, however, isn’t that easy. You can try grouping users by similar traits and building a rule-based system to detect who will churn, but this takes constant tuning and is never fully accurate. If we’d had a strong predictive analytics tool back then, we would have saved months of work, and the results would likely have been more accurate.
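To make the contrast concrete, here is a minimal, hypothetical sketch in Python: a hand-written rule flags users who stall at an onboarding step, while a simple learned model (scikit-learn’s logistic regression, used purely for illustration) infers the pattern from labeled usage data. Every column name, threshold, and value below is invented.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical per-user onboarding data; column names and values are invented.
users = pd.DataFrame({
    "minutes_on_verification_step": [2, 45, 38, 5, 60, 3, 50, 4],
    "deposits_made":                [1,  0,  0, 2,  0, 1,  0, 2],
    "support_tickets":              [0,  2,  1, 0,  3, 0,  2, 0],
    "churned":                      [0,  1,  1, 0,  1, 0,  1, 0],  # label from analytics
})

# Rule-based approach: hand-picked thresholds that need constant tuning.
rule_flag = (users["minutes_on_verification_step"] > 30) & (users["deposits_made"] == 0)

# Learned approach: a simple model infers the pattern from labeled data.
X, y = users.drop(columns="churned"), users["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
churn_probability = model.predict_proba(X_test)[:, 1]  # churn probability per test user
print(churn_probability)
```

The rule encodes one team’s guess; the model re-derives its own thresholds every time it is retrained on fresh data.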

Use case evaluation process

Step 2: Do you have the right data?

At times you might have a fantastic use case in mind while lacking the right data. For example, you wouldn’t try to solve a churn or LTV problem without product or service usage data. If you don’t know how a user is, or isn’t, using your product, you’ll have a hard time building the right dataset for churn prediction. Understanding the complexity of getting the data is also important, but I’ll cover that in detail below (MVP ROI comparison).
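For example, a churn-prediction dataset is typically built by aggregating raw usage events into one row per user and attaching a churn label. The event log, the aggregates, and the 14-day inactivity rule below are all made up; the point is simply that without usage data there is nothing to aggregate.

```python
import pandas as pd

# Hypothetical raw usage events pulled from product analytics.
events = pd.DataFrame({
    "user_id":   [1, 1, 2, 2, 2, 3],
    "event":     ["login", "trade", "login", "login", "support_ticket", "login"],
    "timestamp": pd.to_datetime(
        ["2024-01-02", "2024-01-05", "2024-01-03", "2024-01-20", "2024-01-21", "2024-01-04"]),
})

# One row per user: simple usage aggregates a churn model could learn from.
features = events.groupby("user_id").agg(
    total_events=("event", "count"),
    last_seen=("timestamp", "max"),
)
features["days_since_last_seen"] = (pd.Timestamp("2024-02-01") - features["last_seen"]).dt.days

# Label: churned if inactive for 14+ days (a made-up business rule for the example).
features["churned"] = (features["days_since_last_seen"] >= 14).astype(int)
print(features)
```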


What will be the ROI for your AI MVP?

In order to determine the ROI for adding machine learning to your product, you must perform both an impact analysis and a value estimation. This is the point in the use case consideration process in which you link the machine learning initiative to your product KPIs.

This step is very important. Done correctly, it will enable you to convince other organizational stakeholders to help you with the productization. Who wouldn’t want to take part in a new AI initiative that’s easy to understand? No one! They just need to understand the value that it will provide.

1. Perform an impact analysis

This is my sweet spot; it’s what I do all day, every day, at Firefly.ai. Regardless of whether your product is business-facing or consumer-facing, it represents one or more user business flows. For this step, start by choosing the user business flow that has the most users.

Let’s assume that you’re a B2C product manager and you’re concerned with the conversion funnel in your product or application. If your use case is at the top of the funnel, it’s going to impact more users. The deeper you go within your funnel, the fewer users you’ll impact.

On the other hand, if you’re a B2B product manager, you’ll start with the areas in the product or platform that your users use the most, or those at the top of their business flows. 
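One quick way to ground the impact analysis is to count how many users actually reach each stage of the flow you’re considering. The funnel numbers below are invented, but they show why a use case at the top of the funnel touches far more users than one further down.

```python
# Hypothetical monthly funnel counts for a B2C product.
funnel = {
    "visited_landing_page": 100_000,
    "signed_up":             18_000,
    "completed_onboarding":   9_000,
    "made_first_purchase":    3_500,
}

for stage, users in funnel.items():
    reach = users / funnel["visited_landing_page"]
    print(f"{stage:<24} {users:>8,} users  ({reach:.0%} of top of funnel)")
```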

In fact, this process should be used for every product feature that you promote. So if it sounds familiar, you must already be implementing it, which is great!

2. Estimate the added business value

Once you’ve completed your impact analysis, you should think about the value. For example: If I present a “special offer” to a user with high LTV prediction, or a user who’s about to churn, what value can I anticipate for the user? 

Value can be measured in actual dollars (e.g. user LTV is $23), or in other metrics such as customer stickiness (e.g. it will be X times harder for the user to abandon my product). True, this is a high-level estimation (some would call it “guesstimation”), but remember, our goal is to be able to compare the different options without getting caught in “analysis paralysis”.
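As a sketch of such a guesstimation, here is a back-of-the-envelope calculation for the churn-offer example: expected value is roughly users reached × predicted churn rate × retention uplift × LTV, minus the cost of the offers. Every number below is an illustrative assumption, not a benchmark.

```python
# All figures below are illustrative guesstimates, not real benchmarks.
users_reached = 9_000   # users at the funnel stage chosen in the impact analysis
churn_rate    = 0.20    # share of those users predicted to churn
uplift        = 0.15    # share of at-risk users the "special offer" retains
ltv           = 23.0    # lifetime value per retained user, in dollars
offer_cost    = 0.50    # cost of sending one offer

retained_users = users_reached * churn_rate * uplift
value          = retained_users * ltv
cost           = users_reached * churn_rate * offer_cost
print(f"Estimated added value: ${value - cost:,.0f} per month")
```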

3. High-level complexity estimation

Assuming that you’ve got the right data, understanding the complexity of using that data is very important for your initial success. Your first artificial intelligence initiative should be successful, so you can then gather momentum. Here are the important things to check for:

  1. The ability to pull the initial data. Without this there can be no MVP.
  2. The ability to continuously feed new data into the model. Without this there can be no long-term deployment.
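A lightweight way to sanity-check both points before committing to the MVP is to script the data pull once and ask whether the same script could run on a schedule. The in-memory database and query below are hypothetical stand-ins for your real analytics store.

```python
import sqlite3
import pandas as pd

# Toy in-memory warehouse standing in for your real analytics store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage_events (user_id INT, event TEXT, timestamp TEXT)")
conn.executemany(
    "INSERT INTO usage_events VALUES (?, ?, ?)",
    [(1, "login", "2024-01-02"), (2, "trade", "2024-01-05"), (3, "login", "2024-01-07")],
)

# Check 1: can we pull the initial dataset at all? Without this, no MVP.
snapshot = pd.read_sql_query(
    "SELECT user_id, event, timestamp FROM usage_events WHERE timestamp >= '2024-01-01'",
    conn,
)
print(f"Pulled {len(snapshot)} rows for the initial training set")

# Check 2: could the same query run on a schedule (daily or weekly) to feed new
# data into the model? If not, there is no long-term deployment, only a one-off experiment.
```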

Using these factors, you can start understanding the ROI for the AI MVP.

Next, weigh the impact and value against the complexity of building the AI MVP. Once you have this basis for decision making, compare a few candidate problems side by side, each with its own complexity, impact, and value estimates. This comparison is what you will use to recommend business decisions to your stakeholders.
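One way to keep that comparison honest is to put every candidate problem in the same small table and rank them with a single rough score, for example estimated value divided by estimated complexity. The candidates, values, and complexity grades below are made up for illustration.

```python
# Hypothetical candidate use cases with guesstimated monthly value ($) and
# complexity on a 1 (trivial) to 5 (hard) scale.
candidates = [
    {"use_case": "churn prediction",        "value": 5_500, "complexity": 2},
    {"use_case": "pricing optimization",    "value": 9_000, "complexity": 5},
    {"use_case": "product recommendations", "value": 4_000, "complexity": 3},
]

for c in candidates:
    c["score"] = c["value"] / c["complexity"]

for c in sorted(candidates, key=lambda c: c["score"], reverse=True):
    print(f"{c['use_case']:<24} value=${c['value']:>6,}  "
          f"complexity={c['complexity']}  score={c['score']:,.0f}")
```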

Experiment time!

After you have identified the problem that will yield the highest impact at the highest value, it’s time to plan the MVP. At first, you should regard planning your MVP as an experiment. A PM should approach an AI MVP initiative in the same way as they would build an A/B test plan.

How does planning your AI MVP initiative compare with planning your A/B tests?

When planning an A/B test to prove a specific hypothesis, you wouldn’t settle for only one test, right? (That could have some serious ramifications!).

With A/B testing, you’d typically plan a series of tests that would lead you to the winning option. The AI MVP isn’t so different: you have a hypothesis and need to test a few options, and the variations can range from the data itself to data-science “tricks” such as feature engineering. This is where a data scientist can really add value to your process, because they’ve spent their entire career working with data and ML models.
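Concretely, that series of tests can be written down as a small grid of variations, for example different feature subsets crossed with different model families, all scored with the same metric. The sketch below uses scikit-learn and a synthetic dataset purely for illustration; your own churn table and candidate features would take their place.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in dataset; in practice this is the churn table built in the earlier steps.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Each experiment is one variation: a feature subset plus a model family.
experiments = {
    "usage features + logistic regression": (slice(0, 4), LogisticRegression(max_iter=1000)),
    "all features + logistic regression":   (slice(0, 8), LogisticRegression(max_iter=1000)),
    "all features + random forest":         (slice(0, 8), RandomForestClassifier(random_state=0)),
}

for name, (cols, model) in experiments.items():
    score = cross_val_score(model, X[:, cols], y, cv=5, scoring="precision").mean()
    print(f"{name:<40} mean precision = {score:.2f}")
```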

Success Criteria

Specific metrics are used to measure ML models. These range from MAE and RMSE for regression problems to precision and macro-averaged recall for classification. Your data scientist knows these like the back of their hand, but as the PM it’s important for you to use your domain expertise to help shape the target metric so that it represents the value to the user as much as possible.

It is crucial to use the right metric for each model. Let’s consider the previous churn prediction example, and let’s assume that sending an offer to a client who isn’t going to churn is very costly (it may even push that client to churn, on top of the monetary cost). Choosing a target metric that reduces false positives will be critical to your initiative’s success. This is a great way for a product manager to align the company’s business goals with the AI initiative’s KPIs.
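As a sketch, here is what that choice looks like in code: when false positives (offers sent to users who were never going to churn) are expensive, you track precision alongside recall and can raise the decision threshold to trade some recall for fewer false positives. The labels and predicted probabilities below are invented.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Invented ground truth (1 = churned) and model churn probabilities.
y_true  = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
y_proba = np.array([0.9, 0.4, 0.2, 0.7, 0.6, 0.8, 0.1, 0.3, 0.55, 0.45])

for threshold in (0.5, 0.7):
    y_pred = (y_proba >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(
        f"threshold={threshold}: precision={precision_score(y_true, y_pred):.2f} "
        f"recall={recall_score(y_true, y_pred):.2f} false positives={fp}"
    )
```

At the higher threshold the model sends fewer offers and misses one at-risk user, but it stops bothering (and paying for) users who were never going to churn.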

Experiment list

Make sure to plan your experiments from beginning to end and leave room for tweaks should something new come to light. Establish a clear timeline and identify specific success criteria. This is definitely something you’ll want to include in your presentation to stakeholders.
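For example, the plan can be as lightweight as a shared table listing each experiment with its variation, time box, and success criterion; the entries below are placeholders.

```python
# A minimal, hypothetical experiment plan; a shared spreadsheet works just as well.
experiment_plan = [
    {"id": 1, "variation": "usage features only",  "time_box_days": 3,
     "success_criterion": "precision >= 0.70 on the holdout set"},
    {"id": 2, "variation": "usage + billing data",  "time_box_days": 5,
     "success_criterion": "precision >= 0.75 on the holdout set"},
]

for exp in experiment_plan:
    print(f"Experiment {exp['id']}: {exp['variation']} "
          f"({exp['time_box_days']} days, target: {exp['success_criterion']})")
```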

How can Firefly.ai’s automated machine learning (AutoML) platform help?

Personally, I love to test new ideas, but like most product managers, I can’t do this all day. The beauty of AutoML is that you don’t need to think so much about the model’s development time. Once you have the right dataset and you’ve defined your problem accurately (like the awesome product manager that you are!), you could run as many experiments as you want on your way to building the right model for your problem.  

A process that could have taken months (depending on the number of experiments) can now take mere days. You won’t need to compromise on the model’s accuracy by saying, “It’s just an MVP; it’ll be better once we go live”.

Think of it as a new, refreshing, faster way to test your assumptions and embed machine learning capabilities into your product.

Stay Tuned!

If you learned new things from this article, you’ll definitely want to keep your eyes peeled for Part II and Part III of this series, where I’ll discuss how to deliver an AI MVP and how to implement AI productization fast.

About Firefly.ai

Firefly.ai is an automated machine learning (AutoML) platform that empowers product leaders to spot the right use case, create a machine learning model, and quickly deploy it to their product.

Erez Shilon

Erez Shilon is the Head of Product Management at Firefly.ai. He likes solving problems that involve people and data. Previously, Erez managed both products and the people who manage them. He led the development of machine learning-driven products and data decision-driven teams. Erez holds a B.Sc in Industrial Engineering and Management from Ben Gurion University and an MA in Law from Bar Ilan University, and in his free time he enjoys free-diving and building things.

Glossary

AI: Artificial intelligence. It refers to anything done by a machine that we consider requires intelligence. Generally, AI should include learning, reasoning, and self-correction to count as true AI. There are many types of, and approaches to, AI.

ML: Machine learning, one of the many types of AI. Machine learning programs use algorithms to learn from their own data and then predict future patterns without being specifically programmed. ML plays an important role in many apps.

LTV: LifeTime Value. The way that SaaS and B2C companies measure the total net profit they can make from a single customer.

Churn: Churn, or churn rate, is a way of measuring how many customers leave your service each year. Businesses that provide subscription-based services use churn as part of their calculations for a customer’s LTV.

ROI: Return On Investment. A way of calculating the overall value of a tool, process, or investment to check whether or not it’s worth the cost.

MAE: Mean Absolute Error. A way to measure the difference between two variables; in data analysis and ML, MAE is a common way to calculate the gap between the prediction and the true value.

RMSE: Root Mean Squared Error. Data scientists use RMSE to measure the accuracy of their models by calculating the difference between the predicted values and the actual values. It’s often used interchangeably with MAE.