COVID-19 had a negative effect on bricks-and-mortar businesses. However, the contact center industry is growing apace.
Where companies adopted a fully online model, contact centers are often the only way that they interact with customers.
For banking, telecommunications, travel, and other services, the 'product' they offer varies little from one vendor to another.
Competition happens at the service level. Companies must set interaction standards that satisfy regulators, meet customer expectations, and define their brands.
This is usually done using an evaluation form. Team leaders or quality specialists use it to evaluate recorded calls.
Where Do Standards Come From?
Quality standards should reflect what matters to the regulators, who can shut the business down, and what matters to the customers, who can vote with their wallets or credit cards.
How Do We Create These Standards?
I use the following 6-stage method:
1. What happens during the call?
To answer this question, I talk to agents. They tell me what really happens on the call in 10 to 15 bullet points.
Flow diagrams showing how the workflow is structured in the CRM system are a poor substitute.
Agents frequently develop their own 'workarounds' to address poorly designed workflows rather than submit enhancement requests and wait for a response that never comes.
The list doesn't have to be meticulously detailed. If the data recorded as a result of the call is important, then it also needs to be included as an event in the process.
2. What is 'good enough'?
You can't raise standards until you have defined them. Look at each stage and write down what makes that stage 'good enough' for regulators and customers.
Many contact centers do this internally. The drawback here is that your standards reflect the opinions of contact center managers, not the general public.
A customer is standing on the street on a dark, cold winter evening, calling his bank to find out why the ATM won't accept his card. He doesn't care how many times the agent used his name in the conversation.
One way to get the customer's view is to arrange a focus group and ask them. Another way is to look at what customers complain about. A third way is to ask agents what part of the call is most likely to upset customers.
3. How do you phrase the questions?
Once you've defined what makes each part of the call good enough, you need to reformulate that as a question.
The question needs to be as specific and as unambiguous as you can make it. Your answers also need to be specific and understood consistently.
If your question is 'How well did the agent explain the function of the product?' and the answers are 'Excellently,' 'Quite well,' 'Poorly,' or 'Very poorly,' it's unlikely that any two evaluators will agree on the answer for the same call.
Evaluators find it much easier to answer 'Yes/No' questions that define a specific standard.
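For example, a vague scale question can be split into specific Yes/No checks; the question wording and answers below are invented purely for illustration:

```python
# Vague original: "How well did the agent explain the function of the product?"
# Specific Yes/No rewrites (hypothetical wording):
questions = [
    "Did the agent state what the product does in plain language?",
    "Did the agent check that the customer understood before moving on?",
]

# Each question gets a True/False answer instead of a subjective scale.
answers = dict.fromkeys(questions)   # unanswered form
answers[questions[0]] = True         # evaluator's verdicts on one call
answers[questions[1]] = False

# Fraction of standards met on this call.
score = sum(answers.values()) / len(answers)
```

Two evaluators answering these binary questions about the same call have far less room to disagree than two evaluators choosing between 'Excellently' and 'Quite well.'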
4. What about behavior?
Most forms have at least one evaluation criterion related to how polite/respectful the agent was on the call.
There may be others related to how clearly or quickly the agent spoke.
There may also be a 'bucket' question on how well the agent adhered to compliance rules.
Put these at the end. An evaluator can answer them easily after listening to the whole call without having to scroll back up the form.
5. How much detail do you need?
Contact centers evaluating calls manually typically evaluate no more than 1% of the calls they handle.
It takes longer for a person to evaluate a call than the length of the call itself.
To evaluate all calls manually, the contact center would need more evaluators than agents handling the calls. That is not going to happen.
The faster you can evaluate a call, the more calls you can evaluate, so a shorter form will lead to a better sample size.
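A quick back-of-the-envelope calculation shows why full manual coverage is out of reach; the figures below are illustrative assumptions, not numbers from any real center:

```python
# Rough capacity math for one evaluator (all figures assumed for illustration).
avg_call_minutes = 6          # average handled-call length
eval_overhead = 1.5           # evaluating takes ~1.5x the call's own length
work_minutes_per_day = 420    # 7 productive hours

minutes_per_evaluation = avg_call_minutes * eval_overhead   # 9 minutes
evals_per_day = work_minutes_per_day // minutes_per_evaluation

calls_per_day = 10_000        # calls the center handles daily
coverage = evals_per_day * 100 / calls_per_day

print(f"{evals_per_day:.0f} evaluations/day per evaluator")
print(f"coverage with one evaluator: {coverage:.2f}% of calls")
```

Under these assumptions a single evaluator covers well under 1% of daily volume, which is why a shorter form (a smaller `eval_overhead`) translates directly into a bigger sample.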
It's a good idea to look at your questions and decide which ones are 'necessary' and which ones are 'nice to have,' then cut out the 'nice to haves.'
Catching enough detail is a delicate balance. Normally, less than 5% of randomly selected calls will have any significant problem at all.
Once you know what a 'problematic call' looks like in terms of length, which queue it came through, which agent handled it, etc., you can focus part of your selection there to catch more of them.
Always select some calls randomly to validate your assumptions, however.
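A sketch of that targeted selection, with a purely random slice kept in to validate the profile; the queue names, durations, and 'risky call' rule are all invented for the example:

```python
import random

# Hypothetical call records: (call_id, queue, duration_seconds)
calls = [
    (1, "billing", 95), (2, "billing", 640), (3, "sales", 120),
    (4, "support", 900), (5, "support", 70), (6, "billing", 480),
]

def looks_risky(call):
    """Profile of a 'problematic call' learned from past evaluations
    (assumed here: long calls in the billing queue)."""
    _, queue, duration = call
    return queue == "billing" and duration > 300

risky = [c for c in calls if looks_risky(c)]
rest = [c for c in calls if not looks_risky(c)]

random.seed(42)  # fixed seed so the example is reproducible
# Focus part of the sample on the risky profile...
sample = random.sample(risky, k=min(2, len(risky)))
# ...but always add purely random calls to check the profile still holds.
sample += random.sample(rest, k=min(2, len(rest)))
```

If the random slice starts surfacing problems the profile misses, the profile needs updating.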
One idea is a 2-stage form. Start with some global questions on script/process adherence, correct information given, and politeness. Only move on to the second stage of the form if there is a problem to be solved in the first stage.
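A minimal sketch of that 2-stage logic, with invented field names: the detailed second stage only opens when stage one flags a problem:

```python
# Stage one: quick global checks (field names are illustrative).
stage_one = {
    "followed_process": True,
    "correct_information": False,   # a problem -> triggers stage two
    "polite": True,
}

def needs_stage_two(answers):
    """Open the detailed second stage only if stage one found a problem."""
    return not all(answers.values())

if needs_stage_two(stage_one):
    # Stage two: drill into what went wrong (evaluator fills these in).
    stage_two = {
        "which_step_failed": None,
        "root_cause": None,
        "coaching_required": None,
    }
```

Clean calls get a fast three-question pass; only the small minority with problems cost the evaluator the full form.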
6. Does the form work?
This form may lose people their quality bonuses or even their jobs, so we need to test it to be sure the results are accurate.
The read-through
Get someone not involved in creating the form to read it through. Authors tend to fall in love with their work. An external critical eye may see things that otherwise won't be seen.
Get evaluators to use it to evaluate some real calls. They will point out where questions need to be changed to be useful.
Calibration
Calibration is when multiple evaluators evaluate the same call with the same form. In theory, they should all produce the same results.
Where a question leads to disagreement, you will need to agree on the 'correct' answer, then adjust the wording of the question and pre-set answers to lead to this result.
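Pairwise agreement is one simple way to spot the questions that need rewording during a calibration session; the evaluator names and scores below are made up:

```python
from itertools import combinations

# Three evaluators score the same call on the same four Yes/No questions.
scores = {
    "evaluator_a": [True, True, False, True],
    "evaluator_b": [True, True, False, True],
    "evaluator_c": [True, False, False, True],
}

def pairwise_agreement(a, b):
    """Fraction of questions two evaluators answered identically."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

for (n1, s1), (n2, s2) in combinations(scores.items(), 2):
    print(n1, "vs", n2, "->", pairwise_agreement(s1, s2))

# Questions where the evaluators split: candidates for rewording.
disagreements = [
    i for i in range(4)
    if len({s[i] for s in scores.values()}) > 1
]
```

Here only question 1 splits the group, so that is the question whose wording and pre-set answers get adjusted first.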
If you're struggling to create or improve a form, try following this process. Tell me how you get on! I'd love to hear about it.