The Importance of Model Validation in Contact Center Staffing

Mathematical models are powerful tools for making the repercussions of business decisions clear. In the contact center industry, three types of mathematical models get wide use, and as a call center analyst, you are probably familiar with all of them:

1. Predictive models – used to forecast volume, handle times, agent shrinkage, customer experience scores, etc.

2. Descriptive models – used to simulate a call center environment and determine the expected service provided under alternative staffing scenarios.

3. Prescriptive models – used to determine the best capacity plans (hiring, overtime, training, etc.) or agent schedules.

One of the first things I learned in Modeling School (I’m a super modeler by trade) was that any computer model built must be validated and proven accurate. Without this proof, everyone, including the model-builders, will understandably question the validity of any analyses coming from a “black box.”

Validation for contact center models is a straightforward and powerful process: it takes real, historic contact center performance data and compares it to the model’s prediction of that performance. When building these simulations, you need to model the call arrival patterns, determine the likely hourly staff allocation, estimate how patient your customers are, and understand the variability of handle times, just to name a few considerations. How do you know if all of the items that feed the model are correct?
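To make those inputs concrete, here is a minimal sketch of the kind of single-interval simulation they feed. It is an illustration only, assuming Poisson arrivals, exponential handle times and patience, and a FIFO queue; every function name and parameter value below is made up rather than drawn from a real ACD.

```python
import heapq
import random

def simulate_interval(calls, agents, aht, patience_mean,
                      sl_threshold=20.0, interval_len=3600.0, seed=42):
    """One interval of a FIFO queue with abandonment (illustrative only).
    Real models would use fitted arrival, handle-time, and patience data."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(calls):                          # Poisson arrivals
        t += rng.expovariate(calls / interval_len)
        arrivals.append(t)
    free_at = [0.0] * agents                        # when each agent frees up
    heapq.heapify(free_at)
    in_sl = abandoned = 0
    for arrive in arrivals:
        start = max(arrive, free_at[0])             # earliest agent available
        wait = start - arrive
        if wait > rng.expovariate(1.0 / patience_mean):
            abandoned += 1                          # caller ran out of patience
            continue
        if wait <= sl_threshold:
            in_sl += 1
        heapq.heapreplace(free_at, start + rng.expovariate(1.0 / aht))
    return {"service_level": in_sl / calls,         # share of offered calls in threshold
            "abandon_rate": abandoned / calls}

print(simulate_interval(calls=120, agents=14, aht=300.0, patience_mean=90.0))
```

Run something like this for each historic hour with that hour’s actual volume, handle time, and staffing, and you have a prediction you can hold up against the ACD’s actuals.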

In the validation example below, I took hourly call volumes, handle times, and available staff from an ACD and compared the actual service levels delivered with the service levels predicted by simulation models.

[Figure: actual vs. simulated service levels, by hour]

This graph demonstrates a very accurate model, one that mimics the operation’s actual performance. When the contact center produced high service levels, the model predicted them well; when service levels were low, the model was just as accurate.
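A picture like this can also be summarized numerically. Here is a small sketch of a validation report, assuming the actual and model-predicted hourly service levels sit in two parallel lists (the series below are illustrative, not real ACD data):

```python
def validation_report(actual, predicted):
    """Summarize hourly actual vs. model-predicted service levels."""
    n = len(actual)
    errors = [p - a for a, p in zip(actual, predicted)]
    bias = sum(errors) / n                   # positive means the model is optimistic
    mae = sum(abs(e) for e in errors) / n    # typical size of a miss
    # Pearson correlation: does the model track the ups and downs?
    ma, mp = sum(actual) / n, sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    sa = sum((a - ma) ** 2 for a in actual) ** 0.5
    sp = sum((p - mp) ** 2 for p in predicted) ** 0.5
    corr = cov / (sa * sp) if sa and sp else float("nan")
    return {"bias": bias, "mae": mae, "correlation": corr}

# Illustrative hourly service levels (fraction of calls answered in threshold)
actual    = [0.92, 0.85, 0.60, 0.41, 0.55, 0.78, 0.90]
predicted = [0.90, 0.83, 0.64, 0.45, 0.52, 0.80, 0.88]
print(validation_report(actual, predicted))
```

Low bias, low mean absolute error, and high correlation together are the numeric version of the tight fit in the graph above.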

But it is important that all predictions be validated. The graph below tests whether the simulation model also accurately predicts the number of abandoned calls.

[Figure: actual vs. simulated abandoned calls, by hour]

As you can see, this model is also very good. When abandons are high, the model predicts accurately, and the same holds when abandons are low. This validation does a few things:

  1. It makes plain the biases of the model. If the model does well when service is great but poorly when service is not so great, it is still usable for staffing, though only at high service levels (a sketch of this kind of bias check follows this list). Models that are accurate at service levels good or bad are outstanding for performing what-ifs.
  2. It elevates the discussion amongst decision-makers in a very healthy way. Because the analyst has proven the model accurate, decision-makers must focus on the what-if scenario at hand. With less uncertainty tied to the modeling technique and the results of the analysis, the discussion healthily turns toward the inputs of the what-if: do we think the scenario will really happen?
  3. It helps improve decisions. The validation process weeds out poor models and poor analyses. If a model does not validate well, the modeler has to improve it!
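As a sketch of the bias check mentioned in point 1, here is one way to stratify prediction error by service-level band; the band edges and the sample series are illustrative assumptions:

```python
def bias_by_band(actual, predicted):
    """Mean prediction error within bands of actual service level.
    If the model is only accurate in the high band, trust it only there."""
    bands = {"low (<50%)": (0.0, 0.5),
             "mid (50-80%)": (0.5, 0.8),
             "high (80%+)": (0.8, 1.01)}   # 1.01 so a perfect hour lands in-band
    report = {}
    for label, (lo, hi) in bands.items():
        errs = [p - a for a, p in zip(actual, predicted) if lo <= a < hi]
        report[label] = sum(errs) / len(errs) if errs else None
    return report

# Same illustrative series as above
actual    = [0.92, 0.85, 0.60, 0.41, 0.55, 0.78, 0.90]
predicted = [0.90, 0.83, 0.64, 0.45, 0.52, 0.80, 0.88]
print(bias_by_band(actual, predicted))
```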

Here are some parting tips for call center analysts:

  1. Validate your models—be it a forecast of volumes, handle times, sick time, or agent attrition.
  2. Models that don’t validate need to be scrapped (see Erlang C; a sketch of that classic formula follows this list).
  3. Publicize these validations. Nothing reduces skepticism like a great validation.
  4. Own your model’s biases. A validation will make these biases clear, so make sure any analyses stay within your model’s good operating ranges.
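For context on the Erlang C jab in tip 2, here is a minimal sketch of that classic service-level calculation. Its built-in assumptions (Poisson arrivals, exponential handle times, and infinitely patient customers who never abandon) are a big part of why it so often fails validation against real ACD data:

```python
import math

def erlang_c_service_level(calls_per_hour, aht_sec, agents, threshold_sec=20.0):
    """Classic Erlang C: probability a call is answered within the threshold.
    Assumes no abandonment, which is what usually breaks its validation."""
    a = calls_per_hour * aht_sec / 3600.0           # offered load in Erlangs
    if agents <= a:
        return 0.0                                  # unstable queue: formula blows up
    # Erlang C probability that an arriving call has to wait
    waiting = a ** agents / math.factorial(agents) * agents / (agents - a)
    p_wait = waiting / (sum(a ** k / math.factorial(k)
                            for k in range(agents)) + waiting)
    # Exponential tail of the wait for delayed calls
    return 1.0 - p_wait * math.exp(-(agents - a) * threshold_sec / aht_sec)

print(erlang_c_service_level(calls_per_hour=120, aht_sec=300, agents=14))
```

If a chart like the first one above shows this formula consistently missing the actuals, the no-abandonment assumption is the usual suspect.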

A fun fact about simulation modeling: the most widely used computer models are probably the computer games our kids (and let’s be honest, us) play. For example, my son loves the Madden football series—it is a simulation model of a football game.

Ric

Ric Kosiba

I joined Interactive Intelligence in August 2012 as part of the Bay Bridge Decision Technologies acquisition. I helped found that company back in 2000 and thoroughly enjoyed working with our brilliant development and operations research team, which helped us become the leading U.S. supplier of long-term forecasting and planning solutions. In my current role as vice president of the Bay Bridge Decisions Group, I’m responsible for the development and enhancement of our contact center capacity planning and analysis product line. I tripped into the call center industry about 22 years ago and can honestly say that I still love it. I hold an M.S.C.E., B.S.C.E., and Ph.D. in Operations Research and Engineering from Purdue University (go Boilers!). I reside in Maryland with my wife and four children. I love being a dad and enjoy coaching kids’ football, basketball, and lacrosse.