
A Single Source Of Truth

| Member news | Company: Blue Yonder Nordic AB

Why Data Centralization And AI Predictions Should Represent One Business Strategy.

by Dr. Michael Feindt, Strategic Advisor, Blue Yonder

Business decision makers, and especially CIOs, are facing an inconvenient truth: to enact a successful artificial intelligence (AI) and machine learning (ML) project, there can only be one single source of truth when it comes to their data. Among larger companies, in particular, the data deluge has reached almost unmanageable levels. Big data as an all-encompassing concept is not a new challenge; however, repeat data is. This is data that exists in numerous places across an enterprise and isn't necessarily consistent.

In a retail context, you may have a data set — such as prices, a current stock gauge, a future demand prediction or seasonal stats — in a local store. You may have that same data at headquarters, on a different system, in a different country or being merged and transferred following an acquisition. Several computers, databases and data warehouses are not uncommon consequences of the big data surge. It only takes an anomaly in one location to threaten the validity of the data.
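The repeat-data problem above can be made concrete with a small sketch. The records, field names and values here are hypothetical, purely for illustration: two copies of the same product data, one held by a local store and one at headquarters, are compared field by field to flag where they have drifted apart.

```python
# Sketch: flagging "repeat data" drift between two copies of the same records.
# All SKUs, fields and values are hypothetical examples.

store_db = {
    "SKU-1001": {"price": 19.99, "stock": 42},
    "SKU-1002": {"price": 5.49,  "stock": 0},
}
hq_db = {
    "SKU-1001": {"price": 19.99, "stock": 42},
    "SKU-1002": {"price": 4.99,  "stock": 7},   # drifted after a manual correction at HQ
}

def find_mismatches(a, b):
    """Return (sku, field, value_a, value_b) for every field that disagrees."""
    mismatches = []
    for sku in sorted(set(a) | set(b)):
        rec_a, rec_b = a.get(sku, {}), b.get(sku, {})
        for field in sorted(set(rec_a) | set(rec_b)):
            if rec_a.get(field) != rec_b.get(field):
                mismatches.append((sku, field, rec_a.get(field), rec_b.get(field)))
    return mismatches

for sku, field, store_val, hq_val in find_mismatches(store_db, hq_db):
    print(f"{sku}: {field} differs (store={store_val}, hq={hq_val})")
```

Even this toy comparison surfaces the core issue: neither copy is self-evidently the "truth," which is exactly why a single authoritative store has to be established before anything is built on top of the data.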

This is why we’re seeing so many organizations investing in data integration projects as part of their digital transformation efforts. It’s a way to bring those disparate networks together and to establish a single “source of truth.”

Data Integration Is Even More Important to AI Vendors

But reaching this pristine, easily viewable, consistent and unique central storage system is not simple. It typically requires the heavyweight digital players of this world, and with them comes a hefty investment, significant time spent, failures before successes and a delay in realizing the goal behind the investment.

It's important to mention that this often isn't the fault of the heavyweight digital players aiding these companies. Rather, it's a misunderstanding on the company's side about why the work is urgent that causes these projects to stop and start, and to take as long as 10 years. Simply put, companies focus on the presumed need to bring data into one location, and not on what this single source of truth actually yields in terms of results.

This is where AI and ML come in. Critically, automatic predictions of supply and demand depend entirely on the quality of the data being fed into the system. If there's an anomaly, then the initial calculations will inform equally incorrect actions, which can snowball when they go undetected.

That’s why reputable AI vendors incorporate data integration projects into the onboarding process — to get companies to that single source of truth. If a single piece of information is wrong, then the AI vendor’s systems will automatically feed off that wrong data, which can — over time — become very costly, as you start over-purchasing items that you already have enough of or under-purchasing items that you’re soon to run out of.
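The over- and under-purchasing effect described above can be sketched with a deliberately simple replenishment rule. This is an illustrative toy, not any vendor's actual algorithm: the system orders the shortfall between forecast demand and recorded stock, so one corrupted stock figure silently flips the decision.

```python
# Sketch: how one wrong stock figure skews an automatic reorder.
# The replenishment rule and all numbers are illustrative assumptions.

def reorder_quantity(forecast_demand, recorded_stock):
    """Order the shortfall between forecast demand and what we think we hold."""
    return max(0, forecast_demand - recorded_stock)

forecast = 100          # units expected to sell next period
true_stock = 30         # what is physically on the shelf
corrupted_stock = 300   # anomaly: a duplicated feed inflated the figure tenfold

print(reorder_quantity(forecast, true_stock))       # 70: the correct order
print(reorder_quantity(forecast, corrupted_stock))  # 0: nothing is ordered, a stock-out follows
```

The forecast itself was never wrong here; a single bad input was enough to turn a correct prediction into a costly action, which is the point about data quality made above.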

Just 80% Accuracy Isn’t Okay

The biggest challenge for AI vendors to overcome is that many CIOs and decision makers see these two initiatives, data integration and AI implementation, as separate investments. Many are either in the midst of a data integration project that they feel will solve efficiency issues and yield more accurate forecasting capabilities, or they're forgoing that aspect, hoping that 80% accuracy of data across the network is fine, and wanting to accelerate AI's implementation to leverage that (almost) perfect data network.

There needs to be an understanding that 80% accuracy isn’t okay. In fact, it’s counterproductive to the value they’re trying to generate from stronger predictive capabilities if those predictions are skewed by incorrect information. That’s why reputable AI vendors are so strict about including data integration into the onboarding process.

It doesn't always appeal to those who want things to move more quickly or more cheaply, but in the long term, they see how that money and time are paid back if you begin at 100%, not 80%. Getting this automatic process right from the outset, with a single source of truth, sets the tone for all future decisions in the AI context to be made correctly.

A Business-Based Source of Truth

Once that penny has dropped, the idea of merging the two transformation projects into one makes sense. While native integrators attack the process from a tech infrastructure perspective, the AI vendors' emphasis is on the data and the repercussions of that integration.

This means that, during the onboarding and integration stage, AI vendor experts are on the constant lookout for nonsense values and anomalies that would derail their future work. For AI to thrive, it needs one well-connected and completely accurate set of data. So, all the focus during that integration and onboarding stage is on enabling that situation.
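The kind of hunt for nonsense values described above can be sketched as a set of simple validation rules run over incoming records. The rules, field names and thresholds here are illustrative assumptions, not any vendor's actual checklist.

```python
# Sketch of the kind of sanity checks run during data onboarding.
# Field names, rules and sample records are hypothetical.

def validate_record(record):
    """Return a list of human-readable problems with one product record."""
    problems = []
    if record.get("price") is None or record["price"] <= 0:
        problems.append("price must be a positive number")
    if record.get("stock") is None or record["stock"] < 0:
        problems.append("stock cannot be negative")
    if not record.get("sku"):
        problems.append("missing SKU")
    return problems

records = [
    {"sku": "SKU-1001", "price": 19.99, "stock": 42},
    {"sku": "SKU-1002", "price": -1.0, "stock": 7},    # nonsense price
    {"sku": "", "price": 5.49, "stock": -3},           # two problems at once
]

for rec in records:
    for problem in validate_record(rec):
        print(f"{rec.get('sku') or '<no sku>'}: {problem}")
```

Catching these values at onboarding, before they ever reach the forecasting models, is what keeps one bad record from compounding into the snowballing errors described earlier.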

The consequences of a data anomaly are severe, as they affect the entire proposition and potential of ML and have huge mathematical and technical implications from that perspective. But, more than that, it’s a business concern.

The digital transformation landscape is complex, and it's easy to make investments that threaten to lose companies money, either immediately or in the long term, rather than generate value. By merging data integration and AI forecasting as one and the same investment, a stronger and more collaborative relationship can be formed from the outset, one that instills a greater sense of trust in those parting with their money.
