Don’t Make Perfect (Data) the Enemy of Good (A.I.)
By Barrett Thompson
May 09, 2019
There’s no better time than the present to address the mounting data obsession in organizations. Everywhere we go inside the world of business and technology, there’s talk about how messy or imperfect data is. Data seems to be debilitating when it should be anything but.
A common refrain we hear from companies goes something like this: “Look, I know there’s value in my data, there are insights that will help me be more efficient and responsive and profitable. But it’s too disorganized/incomplete/error-filled for me to tackle an AI project.”
Skip the “Data Deferral Loop”
This line of thinking – what we call the Data Deferral Loop – is consistently given credence by industry leaders as well. Take Accenture, who recently said the following to Wired UK, in an article authoritatively titled “No AI Until the Data is Fixed”: “As much as we can dream up different ways that AI can change the world or make a business hugely successful, (companies) need to look at their data first. Without that fixed, there’s no AI.”
I respectfully disagree with that last sentence. Let me explain.
Of course data cleanliness is important. I agree with Accenture’s approach in the article for the most part – let the use case drive AI investment, solve an existing problem rather than waiting around for the best problem, make sure your data team is heavily involved – but where we differ is on the minimum threshold of data quality needed to get started. The real question for us is whether the data is good enough to let Zilliant go hunting for specific pricing problems and answers. The criterion for action should be: can we do better than the current pricing practice using the data we have now?
In my 20+ years working with customers I have yet to meet a data set that couldn’t be used to improve on the current situation, and every one of those data sets had known problems. Given the volume of unstructured data coming in and how often it changes hands, the data will never be perfect. But the point is that there are certainly quick wins to be had without waiting for a costly and time-consuming data clean-up.
At last month’s MDM Pricing & Profitability Summit, one of the keynote speakers echoed this point:
A challenge for Profit Isle, said CEO John Wass, is working with C-suite executives who either aren’t ready to take action until they get ever more precise data, and are therefore immobilized, or who direct their full attention to their money-losing customers and declare, “We’re going to solve that problem!” instead of seeking early wins. “They’re not getting the value that they could have gotten if they focused on their strengths,” Wass said.
Exactly. What we’ve found is most companies overestimate their data issues and, in turn, underestimate their potential for rapid return on investment. The Data Deferral Loop prevents profitable action, but it shouldn’t. To illustrate further, we need to deconstruct perceptions of AI a bit.
Let’s deconstruct perceptions of AI
Those who think that perfect or near-perfect data is required are, I believe, unconsciously trapped in a paradigm where AI is supposed to come up with a result far beyond human comprehension, and doing so is assumed to require near-perfect data inputs. From our point of view, that is a misread. One of the most valuable promises of AI is that it can produce answers that are as good as the best humans, only much faster. While it might take an expert an hour to make 10 decisions on changes to their pricing sheet, an AI model can make a million decisions of the same quality in 60 seconds. The benefits come from the speed and scale of the decisions made, not from a radically different decision per se.
So if machine learning models just need to be as intelligent as humans, let’s consider how good people actually are at making decisions when the data they receive is ambiguous and fuzzy. We separate the wheat from the chaff all the time as a species. For context, here’s a workplace example and a social one:
A Zilliant customer recently reviewed one of their SKUs. It was an item they had sold tens of thousands of times, almost always at a price point between $30-50. But they noticed that a couple of times they had sold it for under a dollar. Did they pore over this for hours and launch an investigation to root out some excessively generous sales rep? No, they immediately identified the input as an obvious keying error, removed it from the data set and went on with their analysis.
At a crowded cocktail party, you notice a friend across the room asking your colleague where you are. Despite the stimuli and noisy conversations taking place all around you, you’re able to focus on what the friend is saying and understand immediately. Walk over there and hand them a drink.
It’s actually an evolutionary trait. The world is disorganized and constantly coming at us with unpredictable inputs. Without the ability to filter out the noise to make informed decisions, our ancestors wouldn’t have survived the saber-tooth.
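To make the SKU example above concrete, here is a minimal sketch of that kind of sanity check, assuming a pandas DataFrame of historical transactions. The column names (“sku”, “unit_price”) and the 10% threshold are illustrative assumptions, not Zilliant’s actual rules.

```python
# A minimal sketch of the sanity check described above, assuming a pandas
# DataFrame of historical transactions. The column names ("sku", "unit_price")
# and the 10% threshold are illustrative assumptions, not Zilliant's rules.
import pandas as pd

def drop_obvious_price_errors(transactions: pd.DataFrame) -> pd.DataFrame:
    """Remove records whose price sits absurdly far below the SKU's norm."""
    medians = (
        transactions.groupby("sku")["unit_price"]
        .median()
        .rename("sku_median")
        .reset_index()
    )
    merged = transactions.merge(medians, on="sku")
    # Treat anything below 10% of the SKU's median price as a likely keying
    # error (e.g., a $0.50 entry for an item that normally sells for $30-50).
    keep = merged["unit_price"] >= 0.10 * merged["sku_median"]
    return merged.loc[keep, transactions.columns.tolist()]

if __name__ == "__main__":
    df = pd.DataFrame({
        "sku": ["A123"] * 5,
        "unit_price": [34.00, 42.50, 38.75, 0.50, 45.00],  # one obvious typo
    })
    print(drop_obvious_price_errors(df))
```

In practice the thresholds would come from the business, but the point stands: an obviously bad record can be set aside in a line or two of analysis rather than a months-long clean-up project.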
Modern AI technology may soon be noise-immune
All this is to say that it’s not unreasonable to expect modern AI technology to separate the signal from the noise just like we do every day. In fact, many AI models are becoming self-correcting with respect to noise in the data. Soon they may be noise-immune.
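The article doesn’t describe how any particular model achieves this, but as a generic illustration of noise-tolerant estimation (not Zilliant’s actual approach), here is a sketch on synthetic data comparing an ordinary least-squares fit with a Huber-loss fit, which downweights keying-error style outliers.

```python
# A generic illustration of noise-tolerant estimation on synthetic data (not
# Zilliant's actual models): a Huber-loss regressor downweights keying-error
# style outliers that pull an ordinary least-squares fit off course.
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(0)
quantity = rng.uniform(1, 100, size=200).reshape(-1, 1)
# True relationship: price ~ 40 - 0.05 * quantity, plus a little noise.
price = 40.0 - 0.05 * quantity.ravel() + rng.normal(0, 1.0, size=200)
price[:3] = 0.50  # a few sub-dollar keying errors slip into the history

ols = LinearRegression().fit(quantity, price)
huber = HuberRegressor().fit(quantity, price)

print("OLS   slope / intercept:", ols.coef_[0], ols.intercept_)
print("Huber slope / intercept:", huber.coef_[0], huber.intercept_)
# The Huber fit stays close to the underlying slope (~ -0.05) and intercept
# (~ 40); the least-squares fit is pulled noticeably by the few bad rows.
```

The principle is the same one the customer applied by hand: an estimator that refuses to chase extreme records can tolerate imperfect inputs without a wholesale clean-up first.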
We’re all for cleaning up messy databases, but we encourage our customers to run data projects in parallel with our Price Optimization work. It’s not worth waiting to implement pricing changes because, chances are, you’re in a much better position than you think you are.
It takes fewer rows of data than most expect to identify patterns and areas for improvement that start delivering revenue, margin and lifetime customer value benefits right away.
Plus, a comprehensive and costly data project becomes a lot more palatable with proven, AI-driven dollars pouring in simultaneously.
Get in touch with us to start your intelligent pricing initiative today