Efficiency measures – How do you measure the efficiency of service provision?

[Response by Sophie Trémolet and Diane Binder, November 2010]

Services are provided efficiently when they are provided at a fair and reasonable price for all customers while allowing the operator to cover its costs and earn a fair return on its investment. Efficiency is important in keeping costs down, reducing dependence on government subsidies and freeing resources for investment in expansion and maintenance (Shirley and Ménard, 2002).

The definition of the parameters measuring efficiency – such as price level, costs of service and performance – is complex. In an incentive-based price regulation regime (such as RPI-X), the X-factor reflects the degree to which the regulator believes that the operator can improve its efficiency, via both an increase in productivity and a control on costs. The setting of the X-value is always subject to considerable debate between the regulator and regulated companies (and in some cases, other stakeholders). Confronted with information asymmetry, regulators lack the appropriate data to base the X-factor on actual efficient costs of production. This issue has encouraged regulators to use benchmarking to measure the efficiency of service provision.
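The arithmetic of an RPI-X cap can be sketched very simply: the allowed average tariff may rise by at most the retail price index (RPI) minus the efficiency factor X. The figures below are purely illustrative, not drawn from any actual price control.

```python
# Minimal sketch of an RPI-X price cap. All numbers are hypothetical
# examples, not values from any real regulatory determination.

def allowed_tariff(current_tariff: float, rpi: float, x_factor: float) -> float:
    """Maximum average tariff next period under an RPI-X cap.

    rpi and x_factor are expressed as decimals (0.03 = 3%).
    """
    return current_tariff * (1 + rpi - x_factor)

# Example: a tariff of 100 currency units, 3% inflation, X set at 1%:
# the operator may raise prices by at most 2%, to 102.
print(allowed_tariff(100.0, 0.03, 0.01))  # 102.0
```

A higher X forces the operator to absorb more of its cost growth through efficiency gains rather than passing it on to customers, which is why the choice of X is so contested.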

Benchmarking is an important tool to assess the relative historical performance of organizations (controlling for external conditions), quantify utility progress towards meeting policy objectives, help specialists identify high performing utilities (whose processes might be adapted by others) and enable regulators to develop targets and incentives for utilities (Mugisha et al., 2007).

Performance benchmarking has become standard practice in the regulated industries in England and Wales. For example, the water and sewerage companies provide the regulator, Ofwat, with indicators of service performance covering water supply, sewerage services, customer service and environmental impact. Ofwat publishes the indicators annually in a public report. These simple performance scorecards have helped measure the efficiency of service provision and apply pressure on the “worst in the class”.

Analysts identify several methodologies and types of metrics used for benchmarking, including:[1]

  • Core Overall Performance Indicators: these include specific core indices, such as volume billed per worker, quality of service, losses, coverage and financial data. These measures are generally available and provide the simplest way to perform comparisons. However, such indicators are by definition partial (i.e., they examine individual performance dimensions and do not take interactions or the overall picture into account), and may fail to capture the relationships among the different factors (Berg and Padowski, 2007).
  • Performance Scores (based on production or cost estimates): the metric approach allows quantitative measurement of relative performance (cost efficiency, technical efficiency, scale efficiency, allocative efficiency and efficiency change). Performance can be compared with that of other utilities, and rankings can be based on the analysis of production patterns and cost structures (Berg and Padowski, 2007).
  • The “Model Company” approach: this methodology requires the development of an optimized economic and engineering model company, which provides an idealized benchmark specific to each utility, incorporating the topology, demand patterns and population density of the service territory (Berg and Padowski, 2007). The actual costs of the company are then compared with those of the “model company”.
  • Customer Survey Benchmarking: this methodology focuses on customer satisfaction as a key element for evaluating performance. Surveys can reveal preferences, performance gaps and areas of concern. Trends over time can be used to evaluate utility performance (adapted from Berg and Padowski, 2007).
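As a simple illustration of the first approach above, the sketch below computes one partial indicator, volume billed per worker, for a set of hypothetical utilities and ranks them. The utility names and figures are invented for illustration; as the text notes, a single partial indicator like this ignores interactions between performance dimensions.

```python
# Illustrative sketch: ranking hypothetical utilities on one partial
# core indicator (volume billed per worker). All data are invented.

utilities = {  # name -> (annual volume billed in m3, number of staff)
    "Utility A": (12_000_000, 400),
    "Utility B": (9_000_000, 250),
    "Utility C": (15_000_000, 600),
}

# Compute the partial indicator for each utility
indicator = {name: volume / staff for name, (volume, staff) in utilities.items()}

# Rank from best (highest volume billed per worker) to worst
ranking = sorted(indicator, key=indicator.get, reverse=True)
for name in ranking:
    print(f"{name}: {indicator[name]:,.0f} m3 billed per worker")
```

Note that the ranking would likely change if a different partial indicator (losses, coverage, quality of service) were used instead, which is precisely the limitation of relying on any single core indicator.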

When carrying out benchmarking, regulators may be confronted with a number of issues, as set out below.

  • The indicators chosen must be unambiguous, verifiable, consistent with long-term incentives for good performance and easy for the public to understand in the case of public reporting of service performance.
  • The effectiveness of benchmarking may be limited by insufficient data, in terms of both quality and quantity. Nevertheless, Estache et al. (2002) show that even with the modest data publicly available, ranking Latin America’s main electricity distribution companies between 1994 and 2000 on comparative efficiency measures could yield useful results. Through effective cross-country coordination, Latin America’s electricity sector could reduce information asymmetry and shift the burden of justifying poor performance from the regulator to the operators by relying on competition between markets: the performance levels used by the regulator to assess the share of efficiency gains to be passed on to customers were estimated from best-practice benchmarks obtained by comparing performance across markets. Unless an operator can demonstrate, with appropriate information, that there are specific reasons for its sub-par performance, it has to accept the regulator’s assessment of its performance.
  • Benchmarking experts with financial, statistical and regulatory backgrounds may be difficult to find. Such experts are needed to help define the issues to be addressed, select the proper model and analytical techniques for quantifying performance, and avoid hasty comparisons (for instance, a company may have achieved a very low opex by reclassifying some of its maintenance costs as capex).
  • Finally, critics argue that benchmarking relies on arbitrary choices of techniques and variables [2], and that it can create unrealistic expectations among customers without taking into account the cost of improving performance (for example, a distributor serving a remote rural area will face higher costs than one serving a densely populated urban area to match a similar level of performance). The use of benchmarking and the publication of results should therefore be handled carefully so as not to be counter-productive.

 

Footnotes

  1. See Book of Knowledge Properties of Benchmarking and Yardstick Analyses.
  2. See CEPA’s report on benchmarking for OFGEM.