I started my career at Argos.
This was 25 years ago. Before you could buy online. Before buy-online-collect-later. The sale happened if the product was physically in the shop, and did not happen if it wasn’t.
The stores had small warehouses. The catalogue had thousands of SKUs. The maths was tight — you could not stock everything, so you had to know what would sell.
And Argos did. Not at the level of how many rings would sell in a given week. At the level of how many rings in each size would sell in a given week, in a given store.
Twenty-five years ago. Without AI. Without real-time data. Without any of the tools a retailer can buy off the shelf today. In the early days, Argos had a maths professor running the forecasting. That was how important it was.
Since then I have worked with retailers, franchise networks, and multi-site businesses of every shape. And I have been constantly surprised by how few of them can tell you — with any real confidence — how many units of a specific SKU will sell in a specific location in a specific week.
Most have a number. A spreadsheet someone updates before the board pack goes out. A target the sales director has been told to hit. Last year’s figure multiplied by something. What they do not have is a forecast that anyone believes, that anyone plans against, or that anyone would stake the business on.
And the business is staked on it. Whether anyone has acknowledged it or not.
The Forecasts You Already Have
Here is the thing about businesses that tell me they do not have a sales forecast. They have several. They just have not recognised them as forecasts.
The buyer placing an order for next month’s stock is forecasting. They may be using last year’s number plus ten percent. They may be using the figure the regional manager pushed back on last week. They may be using a gut sense of how things are trending. Whatever they are using, they are committing real money against an implicit prediction of future demand. That is a forecast.
The operations manager building next week’s rota is forecasting. The rota is, at root, a prediction of how many customers will walk through the door at each hour, and what service they will need. In most businesses, the rota is built on last week’s rota, which was built on the week before’s, which was built on a template laid down years ago. That is a forecast. It is just one that has never been examined since it was written.
The marketing team setting the campaign calendar is forecasting. Every pound of spend is a bet on the incremental demand it will create. Most of those bets are never tested, because the forecast was never made explicit, so the counterfactual — what would have happened without the spend — cannot be calculated.
The finance team building the cash position is forecasting — in fact, they are consuming the forecasts of every other function, usually without knowing it. When the stock forecast is wrong, the cash forecast is wrong. When the labour forecast is wrong, the cash forecast is wrong. None of these errors are coordinated, so finance is left absorbing noise they cannot trace.
None of these people would describe what they are doing as forecasting. They would say they are ordering stock, building rotas, planning campaigns, updating the cash position. But that is what forecasting looks like inside a business that does not think of itself as a forecasting organisation. It happens in fragments, inside multiple heads, uncoordinated and unmeasured. And because it is not called forecasting, nobody is accountable for the errors it produces.
And once you scale this up from individuals to departments, the fragmentation only gets worse. In most of the businesses I work with, the finance team produces a sales forecast for the budget. Supply chain produces a different one for purchasing. Labour scheduling builds a third for rotas. Marketing, if they plan at all, have a fourth. None of these four are the same.
Each function has built the forecast it needs for its own decisions, using its own inputs, its own assumptions, and its own time horizon. This is not a coordination oversight. It is four teams, each doing reasonable work, producing four different pictures of the future that cannot all be right. And because no single person owns the forecast, nobody notices that the business is committed to four inconsistent views of what is going to happen.
When reality arrives, everyone is wrong. But nobody is wrong in the same way. So the business cannot diagnose what actually went wrong — and nothing changes for next time.
A business with four forecasts has no forecast.
And so: the absence of a forecast is not the absence of forecasting. It is the presence of many bad ones.
What It Costs When the Forecast Is Wrong
The reason businesses keep tolerating this is that the cost is almost impossible to see on any single line of any single P&L. It does not show up as “forecast error.” It shows up everywhere else.
Take a food franchise network. Domino’s, where I spent several years as IT Director, is representative of the pattern. When the sales forecast for a given week or a given store is wrong — and in most food businesses, it is wrong more often than anyone would be comfortable admitting — the cost cascades:
- Wasted food. Stock ordered against demand that never materialised expires and is written off. This sits in gross margin, where it gets noticed as a COGS variance and blamed on shrinkage rather than planning.
- Unproductive labour hours. Rotas built for demand that did not materialise mean staff are paid for hours when there are no customers. This sits in labour cost percentage, where it gets blamed on the shift manager for not flexing the rota — never on the forecast that told them how many people to book.
- Wasted transport. Lorries deliver stock into the network against a central forecast. If that forecast is wrong, the lorry is still paid for, the driver is still paid for, and the stock either sits in the store or comes back the other way.
- Wasted warehouse labour. The picking, the packing, the shift that pushes the stock out — all of it is paid whether the stock sells or not. This sits in central overhead, where it disappears into a function nobody questions in detail.
- Mistimed marketing. Campaigns built against one sales forecast are launched into the demand pattern of another. Spend lands on the wrong day, the wrong week, or the wrong site. Marketing measures it as “campaign underperformance” and adjusts the creative — never the forecast the plan was built on.
- Discounts given when they are not needed. When the forecast suggests demand will be soft, the commercial team issues vouchers or markdowns to drive volume. If the underlying demand would have arrived anyway, margin has been given away for no reason. This sits in promotional spend, where it is explained as “building traffic” — never as a correction for a forecast that was wrong in the first place.
Each of these costs is owned by a different function. Each is explained by that function on its own terms. None of them are aggregated. The true cost of the forecast being wrong is, by design, invisible.
This is why nothing changes. Not because the cost is small — in a mid-sized multi-site business it will be material — but because the cost is scattered, and no single person’s bonus is tied to fixing it.
Which leads to the most uncomfortable question in the room:
How much money does your company lose, every year, because its sales forecast is wrong?
Most business leaders I ask cannot answer. And that, more than anything else, is the reason the problem persists.
What Good Looks Like
This is where most conversations about forecasting go off the rails, because people jump straight to accuracy numbers without first asking what the forecast is for. A good forecast has four properties.
It is granular enough to drive a decision. A total-company revenue forecast tells you almost nothing about what to buy, who to roster, or where to spend marketing. Forecasts need to be at the level of the decision they inform — SKU, store, week, hour. Argos forecasting rings by size was not technical showing-off. It was the level at which the decision “what to load into the warehouse next week” actually got made.
It is accurate enough to be trusted — and honest about where it is not. Serious retail operations typically aim at SKU-week Mean Absolute Percentage Error (MAPE) in the region of 15–25%, with fast-movers at the lower end and slow-movers accepting higher error as unavoidable. What matters more than the headline number is that the business knows what the number is, tracks it by category and by site, and knows which parts of its operation forecast well and which do not.
It is unbiased. A forecast that is consistently 10% high (or low) is worse than a forecast with 15% random error, because it drives systematic over- or under-investment across every function that consumes it. Measuring bias separately from accuracy is the single most neglected discipline in most forecasting processes.
It is updated on a cadence that matches the decision cycle. Weekly rota decisions need a weekly forecast. Daily stock movements need a daily forecast. Annual property decisions need a longer view. Most businesses have one rhythm — usually monthly — and force everything through it, which means the forecast is too slow for operations and too noisy for strategy.
None of these properties require artificial intelligence to achieve. Argos achieved them twenty-five years ago. What they require is the decision to treat the forecast as a business discipline rather than a spreadsheet exercise.
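To make the accuracy and bias properties concrete, here is a minimal sketch of how the two measures separate out on the same forecast-versus-actual data. The SKUs and figures are invented purely for illustration.

```python
# Minimal sketch: MAPE (accuracy) versus bias on the same data.
# All SKUs and figures are invented for illustration.

rows = [
    # (sku, forecast_units, actual_units)
    ("ring-size-L", 120, 100),
    ("ring-size-M", 80, 100),
    ("ring-size-N", 55, 50),
    ("ring-size-O", 210, 200),
]

signed_errors = [(f - a) / a for _, f, a in rows]

# Accuracy: how far off, on average, ignoring direction.
mape = sum(abs(e) for e in signed_errors) / len(signed_errors)

# Bias: which direction the errors lean, on average.
bias = sum(signed_errors) / len(signed_errors)

print(f"MAPE: {mape:.1%}")    # 13.8% -- within the range a retailer might accept
print(f"Bias: {bias:+.1%}")   # +3.8% -- the forecast leans slightly high
```

A forecast whose errors all leaned the same way could post the same MAPE while quietly over-ordering every single week, which is exactly why the two numbers have to be tracked separately.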
Two Questions That Tell You Where You Stand
If you want to know the state of forecasting in your business, you do not need a consultant. You need two questions — and the honesty to sit with the answers.
1. What is your forecast accuracy on the decisions that actually cost you money?
Not your forecast accuracy on total company revenue. That number is almost always reasonable, because errors cancel out across a large enough aggregate. The question is about the decisions the business is actually making: the SKU order the buyer placed last month, the rota the ops manager built last week, the marketing spend committed last Thursday. How accurate is the forecast underpinning those?
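A toy example, with invented numbers, shows how the aggregate flatters: two SKUs can each be forecast 40% wrong while the company total comes out exactly right.

```python
# Sketch: why total-company forecast accuracy flatters the forecast.
# Two invented SKUs whose errors cancel perfectly at the aggregate level.

forecasts = {"sku_a": 140, "sku_b": 60}
actuals   = {"sku_a": 100, "sku_b": 100}

total_error = abs(sum(forecasts.values()) - sum(actuals.values())) / sum(actuals.values())
print(f"Aggregate error: {total_error:.0%}")   # 0% -- the board pack looks perfect

for sku in forecasts:
    err = abs(forecasts[sku] - actuals[sku]) / actuals[sku]
    print(f"{sku} error: {err:.0%}")           # 40% each -- the buyer got both orders wrong
```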
If the answer is “we don’t measure it” — that is your answer. A business that is not measuring forecast accuracy at the level of real decisions is not forecasting. It is acting on guesses and hoping.
If the answer is a number, the follow-up is: how do you know? And which functions forecast well, and which do not?
2. When the forecast is wrong, how do you find out — and what changes as a result?
This is the question that separates organisations that learn from organisations that just absorb the cost.
In most businesses, when the forecast is wrong, nothing happens. The buyer builds their next order from the same inputs they used last time. The rota gets rebuilt using the same template. The marketing team plans the next campaign using the same assumptions. There is no feedback loop between outcome and forecast, which means the forecast cannot improve.
Organisations that take forecasting seriously have a regular rhythm — weekly or monthly — where they look at forecast versus actual, identify where they were wrong, and change the assumptions that produced the error. That last part is the one almost everyone skips.
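As a sketch of what that rhythm might look like in practice, here is a minimal forecast-versus-actual review pass. The categories, sites, and the 15% threshold are all illustrative assumptions, not a prescription.

```python
# Sketch of a weekly forecast-versus-actual review.
# Records, names, and the 15% threshold are illustrative assumptions.

records = [
    # (category, site, forecast_units, actual_units)
    ("pizza",    "store-01", 900, 1020),
    ("sides",    "store-01", 400,  310),
    ("desserts", "store-02", 150,  149),
    ("pizza",    "store-02", 700,  560),
]

REVIEW_THRESHOLD = 0.15   # flag anything more than 15% out, either way

flagged = []
for category, site, forecast, actual in records:
    error = (forecast - actual) / actual   # signed, so direction is preserved
    if abs(error) > REVIEW_THRESHOLD:
        flagged.append((site, category, error))

# Worst misses first: these are the assumptions to revisit this week.
for site, category, error in sorted(flagged, key=lambda r: -abs(r[2])):
    print(f"{site} / {category}: {error:+.0%} against actual")
```

The report itself is the easy part. What matters is the step no script can do for you: changing the assumption that produced each flagged error.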
If nothing changes when the forecast is wrong, the forecast is a ritual. And rituals do not improve with age.
Where to Start
If you recognise your business in any of this, the good news is that you do not need to buy a forecasting platform or hire a data science team to begin. You need to make one deliberate choice, and then build everything else from it.
Start by forecasting sales. At unit level. Then extract the value.
Sales are the forecast that drives everything downstream. Labour rotas are a function of expected customer transactions. Stock orders are a function of expected unit sales by SKU. Cash forecasts are a function of expected revenue. Marketing is the lever that shapes expected demand — and its effectiveness can only be measured against a credible forecast of what demand would have been without it.
Get the unit-level sales forecast right first — by store, by SKU, by week — and every other forecast in the business either falls out of it or can be anchored to it. Try to start anywhere else, and you are building a second storey on someone else’s foundations.
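By way of illustration, here is a minimal baseline at that grain: forecast next week as the same week last year, scaled by the recent year-on-year trend. This is a common starting point rather than a recommendation of any particular method, and the data structure, function, and numbers below are all invented.

```python
# Minimal baseline sketch at store/SKU/week grain:
# "same week last year, scaled by the recent year-on-year trend".
# All names and numbers are invented for illustration.

# history[(store, sku)][week_index] = units sold that week
history = {
    ("store-01", "ring-size-L"): {
        0: 10, 1: 12, 2: 11, 3: 13,   # same weeks, last year
        52: 14, 53: 15, 54: 13,       # the most recent weeks
    },
}

def baseline_forecast(store, sku, target_week, weeks_in_year=52, trend_window=3):
    sales = history[(store, sku)]
    seasonal_anchor = sales[target_week - weeks_in_year]
    recent = sum(sales[target_week - i] for i in range(1, trend_window + 1))
    prior = sum(sales[target_week - weeks_in_year - i] for i in range(1, trend_window + 1))
    return seasonal_anchor * (recent / prior)   # anchor scaled by year-on-year drift

print(baseline_forecast("store-01", "ring-size-L", target_week=55))   # ~16.5 units
```

A baseline this naive will be beaten by almost any serious method. But it gives the business the thing most of them lack: an explicit, measurable number per store, per SKU, per week, against which every later improvement can be judged.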
Once the unit forecast is in place and measured, extracting value is a matter of cascading it into the decisions that need it. Which decision is costing you the most right now? Start there. For most franchise and multi-site businesses, stock is the biggest pot — better forecasting routinely takes a meaningful percentage out of working capital and markdowns before anyone touches the labour or marketing lines. But the sequence is less important than the fact that every improvement is now measurable against the same underlying forecast.
One forecast. Owned. Measured. And used, everywhere, for the decisions the business is already making.
Back to Argos
At Argos, the forecast was not a document. It was the way the business thought about itself. Every decision — what landed in the catalogue, what shipped into each store, how many of each ring size sat behind the counter on Saturday morning — came from it. The forecast was not a function. It was an organising principle.
That discipline is rarer than it should be. Twenty-five years later, with more data, better tools, and cheaper compute than anyone at Argos in 2001 could have imagined, most multi-site businesses still cannot tell you with confidence how many units of a specific SKU will sell in a specific store next week.
It is not the technology that is missing. It is the decision — the decision that the forecast is the thing the business plans against, owned by someone, measured honestly, and used by everyone.
Most businesses have already made the opposite decision without realising it. They are running on the forecasts they did not write, carrying the costs they cannot see, and wondering why the same problems keep coming back.
In the early days, Argos had a maths professor running their forecasts. That was the level of investment the business made, because the forecast was that important.
Today, with AI, there is no excuse. The tools to do what Argos did in 2001 — at a granularity, cadence, and accuracy Argos could only have dreamed of — are sitting on the shelf for any business that wants to pick them up. The discipline is easier to achieve than it has ever been.
The only question is whether the business is willing to name it, own it, and act on it.
Colin Rees is the founder of Xpera, where we help franchise networks and multi-site businesses make smarter technology and operational decisions. If this conversation resonates — or if you would like help building a forecasting discipline that drives real decisions — get in touch.

