Any IT support department can measure dozens of potential statistics, and an internet search will throw up plenty. But how do you assess whether you’re actually any good at providing the right service?
Probably the most ubiquitous measure of the IT Support industry is the Service Level Agreement (SLA). This is an overall commitment to delivered levels of service across IT. It can affect everything, including guarantees on application continuity, systems response times, availability of disaster recovery provisions, network security and so on. In IT Support, it usually majors on a statement of intended response and fix times for given categories of user request.
In external support, the SLA prevails in its traditional form of a binding contract, reflecting the business-to-business nature of the client relationship there.
In internal support, however, the SLA has fallen out of fashion. This is probably because here, the SLA was so often a unilateral IT provision in which the user played little or no part. In any case, users do not measure support against service levels, but against their contentment with the service, so such a provision is often irrelevant.
Another flaw in the SLA approach is its anachronistic adherence to bygone conditions, often mislabelled ‘priority’: certain requests are designated ‘Priority 1’ and must therefore receive a response within, say, four hours. One can immediately see the roots of the SLA, in external support, in the form of mainframe hardware maintenance and repair contracts. Response times were measured in hours, with the most expensive service having the shortest timeframe, meaning the manufacturer had to keep an engineer and a spares holding onsite. Longer response times were cheaper, because the engineer’s costs could be spread between several sites.
Nowadays, the four-hour or two-day response is largely pointless, at least in end-user computing, because there is no need for such contractual constructs in the internal support arena. For hardware failures, Support does not repair; it swaps out and invokes the warranty. We keep these stipulations for two reasons: first, they are built into our support management software anyway, and second, there appears to be nothing better. In other words, prevailing SLAs in IT Support have more to do with tradition than practicality. Even simple challenges topple the SLA as an idea. For example – why four hours? Why not twenty-seven-and-a-quarter minutes? The SLA typically uses targets plucked out of the air, rarely with any meaning outside the context of 1980s external hardware maintenance contracts.
However, it is that question of ‘meaning’ that leads to far better ways of judging whether IT Support’s measurement of its delivered service is appropriate. How to know whether you’re any good? First, determine what is meant by ‘good’. And that is almost impossible to know by measuring the speed of service in isolation. It must be qualified.
MISD’s ‘Big Four’ Statistics
In the ‘Mastering IT Support Delivery’ curriculum of IT Support professionalism, there are not one, but four key levels of statistic, known as the ‘Big Four’. It is not enough to know how many we did of what, how quickly (Quantity). We also have to know if it met any predetermined target of achievement (Performance), otherwise the quantity is meaningless.
But then we also have to know whether the target itself has any meaning, so we have to assess whether achieving it also meets our customers’ needs (Quality). And finally, we have to deal with the biggest decision of all, which is whether it makes business sense to deliver at that level at all (Value). MISD’s ‘Big Four’ statistics of Quantity, Performance, Quality, Value are inextricably linked and applicable throughout the support chain.
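As an illustrative sketch only (the ticket fields, targets, and cost figures below are invented assumptions, not MISD’s own definitions), the four levels build on one another: each statistic qualifies the one before it.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Ticket:
    reported: datetime
    resolved: datetime
    satisfied: bool  # did the resolution actually meet the user's need?

# Hypothetical ticket data for one reporting period.
t0 = datetime(2024, 1, 8, 9, 0)
tickets = [
    Ticket(t0, t0 + timedelta(hours=2), satisfied=True),
    Ticket(t0, t0 + timedelta(hours=5), satisfied=True),
    Ticket(t0, t0 + timedelta(hours=3), satisfied=False),
]

TARGET = timedelta(hours=4)        # assumed response/fix target
COST_PER_TICKET = 40.0             # assumed fully loaded cost per ticket
BENEFIT_PER_RESOLUTION = 55.0      # assumed business benefit per resolution

# Quantity: how many we did, how quickly (raw volume).
quantity = len(tickets)
# Performance: did we meet the predetermined target?
performance = sum(t.resolved - t.reported <= TARGET for t in tickets) / quantity
# Quality: did meeting (or missing) the target satisfy the customer?
quality = sum(t.satisfied for t in tickets) / quantity
# Value: does delivering at this level make business sense?
value = quantity * (BENEFIT_PER_RESOLUTION - COST_PER_TICKET)

print(f"Quantity:    {quantity} tickets")
print(f"Performance: {performance:.0%} within target")
print(f"Quality:     {quality:.0%} of users satisfied")
print(f"Value:       net {value:+.2f} for the period")
```

Note how each level can contradict the one below it: here two-thirds of tickets hit the target, yet a different two-thirds of users were satisfied, which is exactly why a performance figure alone is not enough.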
Only One Reason
Mastering IT Support Delivery teaches that there is only one practical reason to produce a report of a statistical factor – and that is to inform a decision. Once we have appropriate statistics, we can determine a course of action. In the case of delivered service, that action may be to carry on, or make changes. To stick or twist, with the gamble mitigated by management information.
The question is not what we should measure, but which decisions we will need to take later, and therefore which numbers would best inform them.
The end-to-end service level only provides information after the fact. We can know what we delivered. But in order to govern that delivery, we have to know how we made it. All the way along the IT Support production line, contributing events and elements come into play; knowledge levels, staff availability, diagnostic complexity, and so on. All these elements have parametric numbers of their own. The question is which of these need to be adjusted, by how much, to produce a given service level. These are our operational statistics, and MISD maps them in detail.
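To illustrate the decomposition (the stage names and durations here are invented, not MISD’s operational map), an end-to-end figure can be broken into per-stage timings, so a slipping service level can be traced to the stage causing it:

```python
# Hypothetical per-stage timings (hours) for a batch of resolved requests.
# Stage names are illustrative placeholders for the contributing elements.
requests = [
    {"queue": 0.5,  "diagnosis": 1.0, "fix": 0.5},
    {"queue": 2.0,  "diagnosis": 1.5, "fix": 1.0},
    {"queue": 0.25, "diagnosis": 3.0, "fix": 0.75},
]

stages = ["queue", "diagnosis", "fix"]

# Average duration per stage: these are the operational statistics
# that sum to the end-to-end service level.
averages = {s: sum(r[s] for r in requests) / len(requests) for s in stages}
end_to_end = sum(averages.values())

# The stage contributing most is the first candidate for adjustment.
bottleneck = max(averages, key=averages.get)

print(f"Average end-to-end: {end_to_end:.2f}h")
for s in stages:
    print(f"  {s}: {averages[s]:.2f}h")
print(f"Largest contributor: {bottleneck}")
```

The point of the sketch is the direction of inference: the end-to-end number reports what happened, while the stage-level numbers tell you which parameter to adjust, and by how much, to change it.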
How to know whether you’re any good is not just a matter of one number delivered as the result of time-resolved minus time-reported. That is the SLA approach and frankly, the 1980s went thataway, mate. For real management decisions, our statistical references must be more sophisticated than that. It is vital – because a decision without proper information is guesswork. And business success needs better than guesswork.
Follow MISD on social media