Forecasting is the foundation stone of WFM. If you get the forecast wrong, the whole process is unstable. It therefore makes sense that forecasting takes center stage in measuring WFM team performance. There are four major elements to consider:
1. Are you taking care of all the elements of the forecast?
It's not just about the volume of work (number of calls, emails, chats, white mail, etc.). Don't forget the average amount of time required to complete each unit (AHT) and the total workload (volume multiplied by AHT). And make sure you are using the most appropriate metrics - e.g. offered calls, not answered calls. While it is common for centers to analyze the call volume forecast, it is less common to see analysis of the AHT. Both are equal partners in the workload calculation and should be analyzed separately as well as in combination to identify opportunities for accuracy improvement. Question each metric and make sure you understand its underlying drivers.
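As a quick illustration, the workload for a single interval is simply volume multiplied by AHT. The figures below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical half-hour interval: 240 offered calls at 270 seconds AHT.
offered_calls = 240   # use offered, not answered, calls
aht_seconds = 270     # average handle time per call

# Total workload for the interval, expressed in staff-hours of handling time.
workload_hours = offered_calls * aht_seconds / 3600
print(workload_hours)  # 18.0 hours of work to be staffed in that interval
```

Note that a 10% miss on either the volume or the AHT forecast moves the workload by the same 10%, which is why both deserve separate analysis.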
For example, in some centers AHT is relatively consistent across all time frames and in others there is large variation; e.g. AHT during the night shift is much longer than during the day. That begs the question of why this happens. Is it a function of less supervision, more new hires on unattractive shifts, customers calling with more difficult problems when they have time to talk, customers wanting short calls during the working day or a combination of effects? Digging into that type of question can not only improve the accuracy of forecasting the actual workload in each time period of the day, but can also help in identifying opportunities to reduce AHT through other measures.
2. Did you select the correct time-frames for analysis?
The period over which you analyze forecast accuracy is important. Analyzing accuracy at the monthly or weekly level serves as a reasonable scorecard - but does little to help you to discover where the forecast may be consistently over or under the actual demand. If you have one hand in the oven and the other in the freezer then on average you are comfortable! It is similarly misleading to measure service level or ASA over long periods.
There can be dramatic fluctuations within the week or month that offset each other, making the overall average look good. You need to do the analysis at the interval level (e.g. 30 minutes) to focus attention on those elements of the forecast that can be improved. Dealing with wide swings at the daily or half-hourly interval level puts an unrealistic demand on the Operations team. The goal is a consistently high level of accuracy, not a high average level over time.
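A toy example of how a long averaging period hides the problem; the daily figures are invented purely for illustration:

```python
# Daily forecast misses (actual minus forecast, in contacts) for one week.
# Every day misses by hundreds of contacts, alternating over and under.
daily_delta = [+300, -280, +310, -330, +260, -260, 0]

weekly_total = sum(daily_delta)
print(weekly_total)  # 0 -> the weekly scorecard looks perfect
```

The weekly number suggests a flawless forecast even though Operations was badly over- or under-staffed every single day.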
3. Are you using the proper methodology for analyzing accuracy?
Percentage variation, standard deviation of the variation and correlation coefficients can all be used to identify pattern anomalies and measure accuracy. The percentage by which actual varies from forecast (forecast minus actual, divided by forecast) is the most commonly used metric of forecasting accuracy. At the interval level, it also gives a fairly accurate picture. Some methods for presenting the analysis of percent variation are shown in the charts below:
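A minimal sketch of the percent-variation calculation described above, with hypothetical numbers:

```python
def pct_variation(forecast: float, actual: float) -> float:
    """Percent variation: (forecast - actual) / forecast, as a percentage.

    Negative values mean the forecast was under the actual demand.
    """
    return (forecast - actual) / forecast * 100

# Forecast 1,000 calls for an interval, actually received 1,080.
print(round(pct_variation(1000, 1080), 1))  # -8.0 -> under-forecast by 8%
```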
Where there is a wealth of data to analyze, it is helpful to have an easier way to put a finger on the pulse of the accuracy over a long period and that is best found by calculating the standard deviation of the variation percentages. A small deviation is desired rather than wide swings in the variation and the Standard Deviation calculation will be revealing even if the average variation seems quite small.
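The point can be sketched in a few lines; the interval variation percentages below are hypothetical, constructed so the average looks flattering while the deviation tells the real story:

```python
import statistics

# Interval-level variation percentages: large misses that happen to cancel out.
variations = [12.0, -11.0, 9.5, -10.5, 1.0, -1.0]

mean_var = statistics.mean(variations)     # 0.0 -> "accurate" on average
stdev_var = statistics.pstdev(variations)  # ~8.8 -> reveals the wide swings
print(mean_var, round(stdev_var, 1))
```

A small standard deviation means the forecast is consistently close; a large one flags exactly the instability that the average conceals.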
Another tool to analyze variations over time is Correlation Coefficients, which compare patterns from one period to another. The correlation coefficient analysis can be applied to the variation percentages but is probably most useful when applied to the arrival patterns of work volume and the changes in AHT over the intervals. It compares two periods to see if the patterns are a match or not.
For example, the typical Monday might adhere to a relatively consistent pattern, but the correlation analysis may reveal that one particular Monday varies in pattern even if the total volume of workload is within normal boundaries. This would suggest that further understanding of what happened on that Monday is useful. This level of detail is also critical to determine which historical data is "normal" and which is not when deciding to allow the data to average into the history kept for forecasting.
Data that falls outside an acceptable range should be adjusted, stored separately as a sample of a particular repeatable event, or even discarded as an anomaly, such as a power outage, that is unlikely to recur.
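A simple screening rule along these lines might flag any day more than two standard deviations from the mean for review; the two-sigma threshold and the daily volumes below are assumptions for illustration, not a prescribed method:

```python
import statistics

# Daily call volumes; the 400 is a hypothetical power-outage day.
daily_volumes = [1000, 1040, 980, 1020, 400, 1010]

mean = statistics.mean(daily_volumes)
sd = statistics.pstdev(daily_volumes)

# Keep days within two standard deviations; route the rest to a human
# to adjust, store as a repeatable-event sample, or discard.
normal = [v for v in daily_volumes if abs(v - mean) <= 2 * sd]
flagged = [v for v in daily_volumes if abs(v - mean) > 2 * sd]
print(flagged)  # the outage day is held out of the forecasting history
```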
4. Are you collaborating with colleagues whose actions can influence workload?
People outside the WFM team have influence on forecast accuracy. Any anomalies in the actual workload should be identified and tagged with a reason (e.g. marketing campaigns, mailings, billing cycles) so that future forecasts are better able to predict and accommodate these drivers.
A good WFM team will make sure that it communicates effectively with other departments. In most cases, full understanding of what makes customers and staff behave differently from usual is essential to improving the accuracy of the forecast. Is this something that your team does on a regular basis?