Use uncertainty modeling to better forecast demand

The Covid-19 pandemic has triggered widespread supply chain disruptions across the world: chip shortages are forcing auto and medical equipment makers to cut production, while the Suez Canal blockage and a shortage of shipping containers have inflated delivery times and shipping prices. The effects have been exacerbated by management practices such as just-in-time manufacturing, which aim to reduce redundancy in operations: with that redundancy gone, the safety buffers previously available to corporate supply chains have vanished.

Of course, companies understood the risks of removing buffers from their supply chains, which is why they increasingly invested in sophisticated data analytics. If they could better understand the bottlenecks in their supply chains, companies could in theory operate with less redundancy without incurring additional risk. Yet the disruptions persist.

Our research across several industries, including pharmaceuticals and fast-moving consumer goods, shows that the reason for this persistence lies less in shortcomings of the software than in how it is implemented. To begin with, managers tend to confine their analyses to their own departmental units. While sales and marketing teams can provide important information and data, their input is often not solicited by operational decision-makers.

Additionally, analytical solutions focus tightly on the company’s own supply chain. Best practices remain case-specific, and analytical models too often remain disconnected from trends in the wider ecosystem. As the examples cited above illustrate, an apparently local disturbance can snowball around the world.

How can companies best avoid these pitfalls? Let’s start by taking a closer look at what data analysis involves.

What is data analysis?

Data-driven analytical methods can be classified into three types:

Descriptive analytics.

These deal with the questions “What happened?” and “What is going on?” and are rich in visual tools such as pie charts, scatter plots, histograms, statistical summary tables, and correlation tables. Sporting goods chain The Gamma Store, for example, uses statistical process control charts to identify customer engagement issues in its stores.

Predictive analytics.

These use advanced statistical algorithms to predict the future values of the variables on which decision-makers depend. They address the question “What will happen in the future?” The forecasts are generally based on historical observations of how those variables respond to external changes (e.g., changes in interest rates or weather conditions). Retailers like Amazon rely on predictive data on customer demand to place orders with suppliers, while fast-moving consumer goods producers such as Procter & Gamble and Unilever have invested in predictive analytics to better anticipate retailer demand for their products.

Prescriptive analytics.

These support decision-makers by informing them of the potential consequences of their decisions and by prescribing concrete strategies aimed at improving the company’s performance. They are based on mathematical models that stipulate an objective function and a set of constraints to place real-world problems within an algorithmic framework. Airlines leverage prescriptive analytics to dynamically optimize ticket prices over time. Logistics companies, such as UPS, also apply prescriptive analytics to find the most efficient delivery routes.
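To make the “objective function plus constraints” idea concrete, here is a minimal sketch of a prescriptive model in code: shipping cost is minimized subject to demand and capacity constraints. The warehouses, costs, and quantities are invented for illustration and are not drawn from any of the companies mentioned above.

```python
# Hypothetical prescriptive-analytics sketch: choose shipment quantities from two
# warehouses so that demand is met at minimum cost. All numbers are illustrative.
from scipy.optimize import linprog

cost = [4.0, 6.5]            # objective: $/unit shipped from warehouse A and B

# Constraints, written in linprog's A_ub @ x <= b_ub form:
#   meet demand:      x_A + x_B >= 1200   ->   -x_A - x_B <= -1200
#   capacity limits:  x_A <= 800,  x_B <= 900
A_ub = [[-1, -1], [1, 0], [0, 1]]
b_ub = [-1200, 800, 900]

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, res.fun)        # prescribed plan (here [800, 400]) and its total cost
```

The output is not a forecast but a recommended action, which is what separates prescriptive from predictive analytics.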

Companies typically use all three types, which reflect the stages of decision-making: analyzing the situation, predicting the key performance drivers, and then running an optimization analysis that results in a decision. The weak link in this sequence is prediction. It was the inability of its famed predictive analytics to accurately forecast demand and supply that forced Amazon to destroy around 130,000 unsold or returned items each week in just one of its UK warehouses.

The reason predictive analytics fail is, in most cases, related to assumptions and choices about how the analyzed data was generated. Abraham Wald’s study of aircraft returning from missions during World War II provides a classic example. The research group he belonged to was trying to predict which areas of a plane were most likely to be hit by enemy fire, and it suggested reinforcing the frequently hit areas. Wald disputed that recommendation and advised reinforcing the undamaged areas instead, because planes hit there were more likely to be lost and therefore missing from the observed data. It was by examining how the data had been generated that the military was able to correct its decision about which areas of the aircraft to reinforce.

The solution lies in an approach to analysis known as uncertainty modeling, which explicitly addresses the issue of data generation.

What is uncertainty modeling used for?

Uncertainty modeling is a sophisticated statistical approach to data analysis that allows managers to identify the key parameters governing how data is generated, in order to reduce the uncertainty surrounding the predictive value of that data. In business terms, it means building more information about how the data came to be into the predictive model.

To understand what is going on, imagine a business-to-business company that receives an order from a customer for one of its products every three weeks. Each order must be delivered immediately, making the demand lead time negligible. Now suppose the customer’s first order is 500 units, and they plan to increase that quantity by an additional 500 units with each new order, but they do not inform the company of this plan.

What does the company see? The customer orders 500 units in week three, 1,000 units in week six, 1,500 units in week nine, and so on, resulting in monthly demand values of 500, 1,000, 3,500, 2,500, and 3,000 units for the first five months (the third month happens to contain two orders), an average of 2,100 units per month. But since the actual demand data show substantial deviations from that average, the average is a very uncertain forecast. This uncertainty disappears completely, however, once the company learns that the customer systematically increases its purchases by 500 units with each order.
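The arithmetic can be checked with a few lines of code. This is a sketch of the hypothetical customer described above, not real data: it generates the order stream (500 more units every three weeks), buckets it into four-week months, and shows how a perfectly deterministic ordering rule produces a volatile-looking aggregate series.

```python
# Orders arrive every 3 weeks and grow by 500 units each time (weeks 3, 6, 9, ...).
orders = {week: 500 * k for k, week in enumerate(range(3, 21, 3), start=1)}

# Aggregate into four-week "months", as a demand-fulfillment system would.
monthly_demand = [
    sum(orders.get(w, 0) for w in range(4 * m + 1, 4 * m + 5)) for m in range(5)
]

print(monthly_demand)                             # [500, 1000, 3500, 2500, 3000]
print(sum(monthly_demand) / len(monthly_demand))  # 2100.0 -- a noisy-looking average
# Knowing the rule (+500 units every 3 weeks) removes this uncertainty entirely.
```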

For production managers to spot this kind of information, they need to look beyond purchasing numbers. In most businesses, customer order information is stored in an order management system, which tracks data such as when orders are placed, delivery dates requested, and what products are requested in what quantities. This system is generally owned, managed and maintained by the sales department. Once customer orders are satisfied, aggregated information about completed orders is transferred to the demand satisfaction system, typically owned by production and operations, which those responsible for these functions then analyze to predict future demand.

The problem is that the aggregation process often results in a loss of information. With uncertainty modeling, however, managers can apply key parameters identified from the order management system to restore that information in their prescriptive analyses.

Information rescue at Kordsa

Kordsa, the Turkish tire reinforcement supplier, provides a concrete example. The company receives large orders from its customers (tire manufacturers), but the number of orders, as well as the quantity and delivery date of each, is uncertain in every period. Previously, the company simply aggregated customer order information into historical monthly demand values, which were then analyzed. As a result, the number of uncertain parameters collapsed from three to one, with a significant loss of information.

Using uncertainty modeling, we showed Kordsa how to avoid this information loss and achieve significant improvements in key performance indicators (such as inventory turnover and order fill rate). By applying algorithms such as the fast Fourier transform, we were able to integrate key customer order parameters, identified by studying the company’s CRM data, into its demand prediction model.
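As an illustration of how a frequency-domain method can recover an order-level parameter, the sketch below applies NumPy’s fast Fourier transform to a synthetic weekly order series with a hidden three-week ordering rhythm. It is a toy example built on assumed data, not Kordsa’s actual model or data.

```python
# Toy example: recover a hidden 3-week ordering cycle from noisy weekly order data.
import numpy as np

rng = np.random.default_rng(0)
weeks = np.arange(104)                               # two years of weekly buckets
orders = np.where(weeks % 3 == 0, 1000.0, 0.0)       # an order every third week
orders += rng.normal(0, 50, size=weeks.size)         # timing/measurement noise

spectrum = np.abs(np.fft.rfft(orders - orders.mean()))
freqs = np.fft.rfftfreq(weeks.size, d=1.0)           # cycles per week
dominant = freqs[np.argmax(spectrum[1:]) + 1]        # skip the zero-frequency bin
print(f"dominant cycle: about every {1 / dominant:.1f} weeks")   # roughly 3 weeks
```

Once a parameter like the ordering interval is recovered, it can be carried into the demand model instead of being averaged away.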

To better harness the power of uncertainty modeling, Kordsa has since created an advanced analytics team drawn from R&D, sales, production, planning, and IT. Team members regularly interact with other departments to better understand and identify the data and data sources used in decision-making processes outside their own functions, which can then be factored into their predictive analytics.

This kind of boundary crossing should not stop at the company’s doors. It is not only the decisions of customers and suppliers that can affect demand uncertainty; the decisions of players in adjacent industries producing complementary or substitute products can also affect demand. Getting closer to the data generated by these players can only help reduce the uncertainty around the performance drivers you need to predict.

. . .

Although manufacturers and retailers invest in data analytics to improve operational efficiency and demand fulfillment, many of the benefits of these investments go unrealized. Information is lost as data is aggregated before being transferred across silos, which amplifies the uncertainty around forecasts. By applying the mathematics of uncertainty modeling to incorporate key insights about how data is generated, data scientists can capture the effects of previously ignored parameters and dramatically reduce the uncertainty around demand and supply forecasts.

