Mathematical models are powerful tools for making clear the consequences of business decisions. In the contact center industry, three types of mathematical models see wide use. As a call center analyst, you are probably familiar with all of them.
1. Predictive models – used to forecast volume, handle times, agent shrinkage, customer experience scores, etc.
2. Descriptive models – used to simulate a call center environment to determine the expected service provided given alternative staffing scenarios
4. Prescriptive models – used to determine the best capacity plans (hiring, overtime, training, etc.) or agent schedules.
One of the first things I learned in Modeling School (I’m a super modeler by trade) was that any computer model you build must be validated and proven accurate. Without this proof, everyone, including the model-builders, will understandably question the validity of any analysis coming from a “black box.”
Validation for contact center models is a straightforward and powerful process: take real, historic contact center performance data and compare it to the model’s prediction of that performance. When building these simulations, you need to model the call arrival patterns, determine the likely hourly staff allocation, determine how patient your customers are, and understand the variability of handle times – just to name a few considerations. How do you know if all of the items that feed the model are correct?
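To make those ingredients concrete, here is a toy sketch of a one-hour queue simulation with abandonment. This is not any particular vendor’s simulator: the 20-second service-level threshold, the exponential handle-time and patience distributions, and all parameter names are assumptions for illustration only.

```python
import heapq
import random

def simulate_hour(calls, agents, aht_mean, patience_mean, seed=42):
    """Toy one-hour queue simulation with abandonment.
    calls: offered calls in the hour; agents: staffed agents;
    aht_mean / patience_mean: mean handle time and caller patience (seconds)."""
    rng = random.Random(seed)
    # Spread the hour's calls randomly across 3600 seconds
    arrivals = sorted(rng.uniform(0, 3600) for _ in range(calls))
    free_at = [0.0] * agents          # time each agent next becomes free
    heapq.heapify(free_at)
    answered_fast = abandoned = 0
    sl_threshold = 20.0               # "answered within 20 seconds" (assumed)
    for t in arrivals:
        soonest = heapq.heappop(free_at)
        wait = max(0.0, soonest - t)
        if wait > rng.expovariate(1 / patience_mean):
            abandoned += 1            # caller hangs up before an agent frees up
            heapq.heappush(free_at, soonest)
            continue
        if wait <= sl_threshold:
            answered_fast += 1
        start = max(t, soonest)
        heapq.heappush(free_at, start + rng.expovariate(1 / aht_mean))
    service_level = answered_fast / calls if calls else 0.0
    return service_level, abandoned
```

Running `simulate_hour` once per historic hour, with that hour’s actual volume and staffing, produces the model-side numbers that validation compares against the ACD’s actuals.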
In the validation example below, I used hourly call volumes, handle times, and available staff from an ACD and compared the actual service levels provided with service levels that would be predicted from simulation models.
This graph demonstrates a very accurate model, mimicking the operations’ actual performance. During times where the contact center produced high service levels, the model predicted well. During the times where service levels were low, the models also predicted accurately.
But, it is important that all predictions be validated. The graph below is a test to ensure that the simulation model accurately predicts the number of abandoned calls.
As you can see, this model is also very good. When abandons are high, the model predicts accurately, and the same holds when abandons are low. This validation does a few things:
- It makes plain the biases of the model. If it does well when service is great, but poor when service is not so great, then the model is still usable to staff—but only for high service levels. Models that are accurate for service levels good or bad are outstanding for performing what-ifs.
- It elevates the discussions among decision-makers in a very healthy way. Because the analyst has proven that the model is accurate, decision-makers must focus on the what-if scenario at hand. With less uncertainty tied to the modeling technique and the results of the analysis, the discussion healthily turns toward the inputs of the what-if: do we think the scenario will really happen?
- It helps improve decisions. The validation process weeds out poor models and poor analysis. If a model does not validate well, the modeler has to improve it!
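The accuracy-and-bias check described above can be sketched in a few lines of Python. The hourly service-level numbers here are made up for illustration, and the two metrics (mean absolute error and signed bias) are one common choice, not the only way to score a validation.

```python
def validate(actual, predicted):
    """Mean absolute error and signed bias between actual and
    model-predicted hourly service levels (as fractions, 0..1).
    A positive bias means the model is optimistic."""
    n = len(actual)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    bias = sum(p - a for a, p in zip(actual, predicted)) / n
    return mae, bias

# Made-up hourly service levels, purely illustrative
actual    = [0.62, 0.71, 0.80, 0.55, 0.90]
predicted = [0.60, 0.74, 0.78, 0.58, 0.88]
mae, bias = validate(actual, predicted)
```

A near-zero bias with a small MAE is the picture the graphs above show; a bias that grows as service levels drop is exactly the kind of blind spot the first bullet warns about.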
Here are some parting tips for call center analysts:
- Validate your models—be it a forecast of volumes, handle times, sick time, or agent attrition.
- Models that don’t validate need to be scrapped (see Erlang C).
- Publicize these validations. Nothing reduces skepticism like a great validation.
- Own your model’s biases. A validation will make these biases clear, so make sure any analyses with these models are within the good operating ranges of your models.
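For context on that Erlang C jab: the classic Erlang C formula assumes callers never abandon and wait forever, which is one reason it often validates poorly against real contact center data. A minimal sketch of its delay probability (the chance a caller waits at all):

```python
from math import factorial

def erlang_c_wait_prob(offered_load, agents):
    """Erlang C probability that a caller must wait.
    offered_load: traffic in Erlangs (calls/hr * AHT in hours);
    assumes Poisson arrivals, exponential service, NO abandonment."""
    a, n = offered_load, agents
    if n <= a:
        return 1.0  # unstable: the queue grows without bound
    summ = sum(a**k / factorial(k) for k in range(n))
    top = (a**n / factorial(n)) * (n / (n - a))
    return top / (summ + top)
```

Because it ignores abandonment, Erlang C tends to overstate required staff when callers actually do hang up, which is precisely the kind of systematic miss a validation graph exposes.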
A fun fact about simulation modeling: the most widely used computer models are probably the computer games our kids (and, let’s be honest, we) play. For example, my son loves the Madden football series; it is a simulation model of a football game.