The first time I heard about calibration, I had been conducting quality monitoring in a contact center for a handful of years. Well, I was doing our version of quality management, which included an empty office outfitted with a tape recorder from 1984 dialed into a queue to record calls until the tape ran out, and then someone would go into the room to put in a new tape.
Eventually, our system was upgraded to Interaction Recorder and we could pull up recordings to review on a whim. We had fun with five of us grabbing calls and judging the abilities of our agents based on a questionnaire form we created. Then it happened. I heard an agent give an incorrect greeting (and when you’re a third-party contact center answering a call for customer A with customer B’s name, that is a big no-no). I reviewed the call and the scored questionnaire and found that the agent had been given passing marks by another member of the quality management team. What?! This was an automatic fail! When I spoke with the manager who had passed the agent, her reply was, “I know that she meant to say the right greeting.”
Obviously there was a disconnect between what I thought was our policy and what at least one other member of the team thought. We needed a call to receive the same score regardless of who listened to it. We should have been calibrating our scores all along!
We weren’t the first to assume that everyone was “on the same page.” If you think your quality management group needs to do some calibration, here’s a quick how-to list:
- Pick a call – Make it an easy one without many nuances for the first one.
- Score the call individually – Don’t do this as a group. Everyone should weigh in independently so you can check for consistency.
- Discuss – Talk about why you scored the call the way you did. (Among the five of us, we had five different reasons that the call failed!)
- Pick another call – Select another call to evaluate and up the ante on the complexity of this one. (We picked a few other calls that weren’t so obvious and, at best, ended up with just three people agreeing on the scoring.)
- Review differences – Again, discuss why you scored the call as you did. (Eventually, after much debate and some changes to the forms, we were all in agreement, which also meant expectations could be clearly relayed to the agents. This wasn’t something accomplished in an afternoon over a beer.)
- Make changes – If you’ve discussed your scoring rationale and come to a consensus on some edits that should be made to the questionnaire form, take action to make those changes.
- Do it again – This can’t be something you do once and then assume everything’s fine. Many companies choose to go through calibration sessions every six months just to make sure nothing has slipped out of alignment.
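If your team tracks scores in a spreadsheet or export, the “score individually, then compare” step can even be spot-checked in code. Here’s a minimal, hypothetical Python sketch (the evaluator names, criteria, and pass/fail marks are all invented for illustration) that flags the questionnaire items where scorers disagree, so you know exactly what to discuss:

```python
# Hypothetical calibration check: five evaluators score the same call on a
# shared questionnaire (True = pass, False = fail per criterion). Any
# criterion with mixed marks is flagged as a discussion item.

scores = {
    "Alice": {"greeting": False, "tone": True, "resolution": True},
    "Ben":   {"greeting": True,  "tone": True, "resolution": True},
    "Carla": {"greeting": False, "tone": True, "resolution": False},
    "Dev":   {"greeting": True,  "tone": True, "resolution": True},
    "Elena": {"greeting": False, "tone": True, "resolution": True},
}

def find_disagreements(scores):
    """Return the criteria where not all evaluators gave the same mark."""
    criteria = next(iter(scores.values())).keys()
    disagreements = []
    for criterion in criteria:
        marks = {evaluation[criterion] for evaluation in scores.values()}
        if len(marks) > 1:  # mixed True/False = the team isn't calibrated
            disagreements.append(criterion)
    return disagreements

print(find_disagreements(scores))  # prints ['greeting', 'resolution']
```

In this made-up example the team agrees on “tone” but splits on “greeting” and “resolution”, so those two items become the agenda for the discussion step.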
I’m wondering about the contact center managers who actively choose not to calibrate. What are your reasons?
Thanks for reading!