Service Level Measures
Service levels are used in most IT service provision contracts as important ways of gauging the performance of the service provider in delivering the contracted services. They are usually intended to give an overall indication of the extent to which the service provider is complying with its obligations under the contract, although each individual service level gives an indication of the service provider’s performance only in relation to that particular service measure.
The choice of service levels to be included in an IT service provision contract, and of the means of measuring whether each level has been met, therefore involves important decisions. If inappropriate service levels are included in a contract, the service provider will be incentivised to achieve measures which may not be particularly relevant to the customer’s objectives.
It is very difficult to create service level measures which will reflect an overall lack of performance, lack of attention, lack of leadership, failure of project management or failures of understanding. These are usually at the root of any dispute over poor performance. Although it is likely that, if there are such problems, there will also be service level failures, the service levels will not tell the whole story.
When choosing service levels, customers are frequently advised to select items which can be objectively measured. If service level measures are included which cannot be objectively assessed there is an increased risk of disputes between the service provider and the customer over whether the measures have actually been achieved. In order to avoid selecting unimportant but easily measured services, parties often devote substantial effort to identifying ‘emblematic’ measurements: simple measurements that signify that a less tangible (but more important) service has failed.
In practice, the common forms of service level measures in IT service provision contracts include availability targets and various response time targets. Availability service levels are particularly applicable to infrastructure and service provision arrangements, such as IT outsourcing, software as a service (SaaS) and cloud service arrangements, where a continuous IT service is provided and can be measured on a continuous basis. It is now relatively common for availability to be recorded by the service provider through IT tools which continuously measure the ‘uptime’ of the IT service provision. This provides greater assurance and objectivity than arrangements which rely on the customer to notify the service provider of any downtime in the availability of the services. Of course, where the service provision to the customer includes elements which are outside the control of the service provider, such as Internet transmission, the service provider will want to exclude any downtime which relates to problems in those elements.
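As a minimal sketch of the measurement itself (assuming, as one common drafting approach, that excluded downtime is simply deducted from the recorded downtime; the function and figures below are hypothetical), the calculation might look like this in Python:

def measured_availability(total_hours: float,
                          recorded_downtime: float,
                          excluded_downtime: float) -> float:
    """Availability %, deducting downtime outside the provider's control."""
    counted = recorded_downtime - excluded_downtime
    return 100 * (1 - counted / total_hours)

# e.g. 720 hours in the month, 10 hours recorded downtime,
# 4 of which arose from Internet transmission outside the provider's control
print(measured_availability(720, 10, 4))  # about 99.17%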
With the increasing reliability of IT service provision arrangements, the ‘uptime’ availability commitments offered by IT service providers are moving ever closer to the elusive 100% commitment. Uptime commitments of 99.99% availability are now achievable, but the higher the availability commitment, the greater the cost to the customer. The detail of the commitment needs to be examined carefully. Paradoxically, from a customer’s perspective it may be preferable for an availability target to be measured on a working hours/working days basis, rather than a 24×7 basis, because the downtime allowance during the hours that matter to the business is then far smaller. Over a monthly measurement period, a 99% availability commitment on a working hours/working days basis allows downtime of 1.6 hours, whereas on a 24×7 basis the allowed downtime is 7.2 hours, effectively a full working day, which is unlikely to be acceptable in any practical circumstances.
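The arithmetic above can be checked with a short sketch. The 8-hour working day, 20 working days and 30-day month are assumptions made here for illustration; the commitment itself does not fix them:

HOURS_24X7 = 30 * 24    # 720 measured hours in a 30-day month on a 24x7 basis
HOURS_WORKING = 20 * 8  # 160 measured hours on a working hours/days basis

def allowed_downtime(measured_hours: float, availability_pct: float) -> float:
    """Monthly downtime permitted by an availability commitment, in hours."""
    return measured_hours * (1 - availability_pct / 100)

print(allowed_downtime(HOURS_24X7, 99.0))     # 7.2 hours
print(allowed_downtime(HOURS_WORKING, 99.0))  # 1.6 hours
print(allowed_downtime(HOURS_24X7, 99.99))    # 0.072 hours, about 4.3 minutes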
Response time commitments typically relate to support services provided by the service provider and, less frequently, to system response times. Common service response time measures include the time taken to respond to support calls, the time taken to commence work on defects and, less frequently, commitments to provide bug-fixes and work-around solutions to defects. Again, the behaviour encouraged by the service level measurements needs to be analysed carefully. Most service providers will be very resistant to any form of ‘fix’ response time commitment on the basis that defects can be very difficult and time-consuming to fix in practice, particularly if third-party support resources are required to resolve them. However, in the absence of a firm commitment to provide fixes, the ‘second best’ options need to be treated with care. For example, it has been found in some instances that a service level requiring an initial phone response within a short period of time can incentivise the service provider to populate the support team with greater numbers of less experienced support personnel who can give an initial response but lack the ability to solve many real-life problems.
System response times relating to the speed and responsiveness of the IT services are less common but are coming under increasing focus in SaaS and cloud computing arrangements. One of the key customer concerns under these forms of IT service delivery is whether the speed of IT usage will be impacted by delays in system and network responsiveness. Unfortunately, the measurement of IT system activities is not easy. Usually, simple exemplar activities will need to be measured, such as the time taken to log on, the time taken to open an application or the time to open a web page. Again, if elements of the IT service provision are outside the control of the service provider, such as Internet transmission and latencies arising from the customer’s networks, the service provider will seek to exclude delays arising in these areas from the service level measures.
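Where exemplar activities are used, the measurement itself can be very simple. The sketch below times a single hypothetical web-page request (the URL is a placeholder); a real regime would also need to define sample frequency, measurement points and the exclusions discussed above:

import time
import urllib.request

start = time.monotonic()
urllib.request.urlopen("https://example.com/")  # hypothetical exemplar activity
elapsed = time.monotonic() - start
print(f"Page opened in {elapsed:.2f} seconds")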
Other service level measures include customer satisfaction measures and, more recently, the achievement of sustainability objectives. These tend to be difficult to agree because of the subjective nature of the measurements concerned.
Service Credits
Service credits are pre-specified financial amounts to which the customer becomes entitled whenever a service level is not achieved. They can be calculated in a number of ways: for example, as percentage rebates from the service charges for each percentage point by which the service provision falls below the service level target, or through service credit ‘points’ accrued across a range of service level measures and then converted into service credits by a formula, usually on a monthly basis.
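By way of a sketch only (the weightings, point value and cap below are invented for illustration, not drawn from any standard regime), a points-based mechanism might reduce to a calculation along these lines:

MONTHLY_CHARGE = 100_000.0  # hypothetical monthly service charge
POINT_VALUE_PCT = 0.5       # each point converts to 0.5% of the monthly charge
CAP_PCT = 10.0              # overall cap on credits, discussed below

# Points accrued this month against missed service levels (hypothetical figures)
points = {"availability": 4, "support_response": 2}

total_points = sum(points.values())
credit = MONTHLY_CHARGE * (POINT_VALUE_PCT / 100) * total_points
credit = min(credit, MONTHLY_CHARGE * (CAP_PCT / 100))  # apply the cap
print(credit)  # 3000.0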
Service credits are frequently ‘capped’ at an overall percentage of the monthly or annual service charges. In larger or long-term contracts, it is often argued that the service credits should remove the supplier’s profit, but that it would be counterproductive to allow the contract to become ‘loss making’. At first sight a ‘capped’ service credit may seem advantageous to the service provider, which appears to enjoy an almost guaranteed revenue stream regardless of the actual level of performance and of the pain caused to the customer! However, if the contract is structured so that the customer has a common-law right to recover its actual losses where performance falls below the service level/service credit regime, the customer’s position in these circumstances can be more advantageous than under an uncapped service credit mechanism: any such uncapped mechanism is likely to operate as a limit on the amount that the customer can recover in the event of breach.
In order to incentivise the rectification of the root causes of problems, service credit regimes frequently include mechanisms which impose multipliers on the service credits payable where problems recur within particular timescales. This is especially useful where the problems are trivial but annoying. The intention is to prevent the service provider from treating small lapses in service (and the ensuing service credits) as part of its overheads.
It should be recognised that any approach to the calculation of service credits may affect the other remedies that would otherwise be available under the contract or at common law. Whilst it may be attractive from a contract management perspective for the service credits to cover a wide range of service performance difficulties, a contract that attempts to cater for every problem may make it difficult to work out the parties’ intentions and whether or not there has been a breach of contract. In particular, unless the contract is carefully drafted, an approach which attempts to impose service credits on a ‘no service – no pay’ basis can lead to situations where the pre-specified service credits become the customer’s exclusive remedy for serious performance failures and it becomes difficult to determine whether there has been a breach of contract that would allow the customer to terminate.
In practice, when disputes over performance arise, the monitoring of service levels may be the most complete record of performance under the contract. The customer may wish to show that there have been breaches and, indeed, that the failure to meet the service levels is a symptom of bigger failings. However, the supplier may also wish to rely on the service level reporting to show that it did everything that the parties regarded as being important.
Legal Status of Service Credits
The legal status of service credits is an important but often overlooked issue in the drafting of IT contracts. It is frequently assumed that as service credits provide a pre-specified financial remedy in the event of poor performance they are a form of liquidated damages. Depending on the drafting of the contract, this can be, and often is, the case. If so, in order for the service credits to be enforceable by the customer, they must not exceed a reasonable pre-estimate of the customer’s likely losses in the event of poor performance. This can frequently be quite a difficult issue as there may, in practice, be relatively little financial impact on a customer if the service level target is not achieved by a small margin.
An alternative approach, which gives greater flexibility over the level of the service credits which can be levied, is to make clear that the service level and service credit regime is not a form of liquidated damages but is simply a mechanism which specifies that the customer will pay a different service charge for a different level of performance by the service provider. On this basis, the customer and the service provider have the freedom to contract for a varying service charge depending on the service levels which are achieved. This approach has the benefit that the level of the service credits is not constrained by a reasonable pre-estimate of the customer’s likely losses in the event that the service levels are not achieved. Although it is consistent with the preservation of the customer’s common-law remedies, the major drawback is that as long as the service provider’s performance remains within the range defined by the service level/service credit regime there may not be a breach of contract. This could be significant if a situation arises where there is a continually poor level of performance by the service provider but the performance never falls outside the service credit regime. In these circumstances, it will be difficult for a customer to assert that a ‘material breach’ has occurred which would entitle the customer to terminate the contract.
Another approach is for the parties to agree a termination threshold based on the level of service credits that accrue over a particular period, to dovetail with a right of termination if there are persistent defaults. This provides an additional right of termination in circumstances where there has been repeated poor performance by the service provider. However, in practice, it is not easy to decide upon an appropriate level for the termination threshold and service providers will often seek to impose thresholds which allow latitude for failure.
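Purely as an illustration of the mechanics (the 15% threshold and six-month window below are invented, and in practice would be the subject of negotiation), such a termination threshold might be expressed as follows:

def termination_right_triggered(monthly_credits: list[float],
                                monthly_charge: float,
                                threshold_pct: float = 15.0,
                                window: int = 6) -> bool:
    """True if credits accrued over the window exceed the agreed threshold."""
    recent = monthly_credits[-window:]
    return sum(recent) > monthly_charge * len(recent) * (threshold_pct / 100)

# e.g. six months of credits accrued against a 20,000 monthly charge
print(termination_right_triggered(
    [2_000, 3_500, 4_000, 2_500, 3_000, 4_000], 20_000))  # True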
Service Levels and Service Credits in Practice
When establishing service level and service credit regimes, it is important to try to retain perspective about what the service levels say about the parties’ priorities under the contract and the consequences of non-performance. The following questions should be considered.
· Damages for breach of contract are meant to put the innocent party in the same position as if the contract were properly performed. If a service credit does not fully compensate the customer, what is the advantage of a service credit over a claim for damages?
· Most IT contracts can be terminated for ‘material breach’, ie a breach that has a serious or significant effect. If service credits provide remedies for certain types of breach, is it to be implied that those breaches are not material?
· Are service credits the sole remedy for certain breaches?
· Does the service credit regime lead to an implication that the customer is prepared to accept a certain amount of non-performance?
· Would it be better to pay an incentive for complete performance rather than a ‘penalty’ or adjustment for failure or partial failure?
· What is the consequence of the supplier’s failure to comply with the service credit regime itself (eg a failure of reporting, delay or invoicing irregularities)?
One way to align the supplier’s performance with the interests of the customer is for the service level objectives to be the customer’s business objectives or goals. In the context of manufacturing services, this can work well if, for example, there is a high degree of dependency on the system or services in question. However, this situation is rarely found in IT contracts. IT projects usually relate to process improvements and the customer’s objective is usually increased efficiency and profitability. It would be unusual for a customer to link its own profitability to the performance of a supplier.
Service level and service credit regimes should incentivise performance under the contract. Unless the service level and service credit regime is aligned with the objectives of the contract, there is a risk that the mechanisms can become an administrative task to be carried out on a monthly basis with relatively trivial amounts of money at stake. In these circumstances, service levels and service credits simply become a contract overhead with no real benefit to either party.
Properly constructed service level and service credit regimes can provide both early warning signals of problems in service delivery arrangements and financial incentives to rectify poor performance. In order to do so, the drafting of service level and service credit regimes needs careful thought, analysis of the circumstances in which the regimes will be applied and a good understanding of the legal/contractual environment in which they operate. These regimes deserve rather more attention than they are frequently given in the negotiation and drafting of IT contracts.
Roger Bickerstaff is a Trustee of the SCL. He is also a Partner and Joint Head of the IT Sector Group at Bird & Bird LLP, specialising in IT law.
Anna Cook is a partner specialising in IT and dispute resolution at Wedlake Bell LLP: acook@wedlakebell.com