Targets for Transition/Transformation States – In a more stable or advanced setting, how should performance targets be set, and how can performance be monitored when there is relatively limited data availability and/or the quality of the data is uncertain?

(Response by Sanford Berg and Michelle Phillips, with input from Sara Ahmed, Jemima T. Sy and Shyamala Shukla)

Setting and monitoring performance targets in Fragile and Conflict-Affected States (FCS) was discussed in another FAQ, which focused on developing and improving infrastructure regulation. However, fragility is different in every context. Here, we consider situations in more stable or advanced settings, since context shapes the market, the institutions, and the actions that are possible. The FAQs present general principles, guidance and cases; where possible, guidance is distinguished for countries along a fragility spectrum using the following taxonomy: Crisis, Rebuild and Reform, Transition, and Transformation. This FAQ focuses on countries in Transition or Transformation, but some of the strategies may be of interest to decision-makers in less stable situations.

Who Might Benefit from This FAQ, and How it Relates to Other FAQs

This FAQ extends the one that focused on nations in the Crisis or Rebuild and Reform stages; most of the points developed there remain appropriate for nations in the Transition or Transformation stages. In these later stages, affordability may be less of a binding constraint, but documenting performance and setting realistic targets are still important tasks for regulatory institutions. The steps identified in the related FAQ on data availability and uncertain data quality are still relevant for these countries.

Weak or unsystematic data collection processes suggest that information is not being utilized. A key lesson to draw from this series of FAQs is that “people manage what they measure.” Thus, improving data collection processes is an important aspect of management, since the collection and review of basic information is central to the operation of any infrastructure project. Regulatory agencies providing oversight and establishing incentives for performance improvements also need access to data, so they can review trends, establish baselines, and identify realistic targets for operators in their jurisdiction. Even in a more stable setting, information silos or internal politics can still make access to KPIs difficult for decision-makers who wish to analyze trends and develop strategies for improving performance.

The FAQ on Key Performance Indicators and Benchmarking addresses the creation of information systems that can capture data on key performance indicators and shows how decision-makers can use this information to make comparisons over time and across comparable operating companies. The FAQ on Tariff Setting and Incentives focuses on using data to design incentives that promote quality improvements, cost containment and network expansion (when cash flows are weak).

Opportunities for Improving Data Collection and Accuracy in Stable States

In the Crisis and Rebuild and Reform stages, attention is given to data definitions and information procedures. Several points developed in the initial FAQ on data availability and accuracy are also relevant for countries in the Transition and Transformation stages.

When data collection functions can be handled by the operator and the state is in a position to review and evaluate performance, there are more opportunities to improve data accuracy and reliability.

Reliability is defined as confidence regarding how the data were gathered and is usually ranked using letters (A through D), where A stands for high reliability (“sound textual records, procedures, investigations, or analysis that is properly documented”). B represents reliable data: as with A, but with minor shortcomings, such as “some missing documentation, some reliance on unconfirmed reports, and some use of extrapolation”. C represents low reliability, where there is “dependence on extrapolation from limited samples, for which Grade A or B data are unavailable.” Finally, D stands for no reliability (“unconfirmed verbal reports, cursory inspection, or analysis”).

Accuracy indicates the data’s likely range of error and is usually ranked using a number (1 through 4), where 1 stands for an associated uncertainty less than or equal to ±5%, 2 implies 5-20% uncertainty, 3 indicates 20-50% uncertainty, and 4 stands for an uncertainty greater than ±50%. These reliability and accuracy factors can be combined to form a standardized confidence indicator. For instance, a value of A1 would indicate a firm with high reliability and uncertainty of no more than ±5%. Similarly, a B3 rating is reliable, but the reported number could be off by 20 to 50 percent. Note that B1, C1, and D1 are not possible (they are inconsistent with the underlying definitions of reliability), nor is D2. This classification scheme, which comes from the International Water Association, is more appropriate for larger-scale utilities than for small, local infrastructure projects. Even so, it is a useful reminder that inaccurate, unreliable data should not be used for developing targets or designing incentives. Those setting targets, and those responsible for meeting them, need to have confidence that outcomes reflect reality rather than measurement error. The scheme also serves as a guide for improving data collection procedures.
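To make the grading scheme concrete, the sketch below (in Python, with hypothetical names throughout) encodes the reliability and accuracy bands described above and screens out the combinations the scheme treats as inconsistent. It illustrates the logic only; it is not an official IWA tool.

```python
# Illustrative encoding of the IWA-style confidence grading described above.
# All function and variable names are hypothetical.

RELIABILITY = {
    "A": "highly reliable: sound, properly documented records or analysis",
    "B": "reliable: as A, but with minor shortcomings (some extrapolation)",
    "C": "low reliability: extrapolation from limited samples",
    "D": "unreliable: unconfirmed verbal reports or cursory inspection",
}

ACCURACY = {  # likely range of error
    1: "uncertainty <= ±5%",
    2: "uncertainty of 5-20%",
    3: "uncertainty of 20-50%",
    4: "uncertainty > ±50%",
}

# Combinations the scheme treats as internally inconsistent.
EXCLUDED = {("B", 1), ("C", 1), ("D", 1), ("D", 2)}

def confidence_grade(reliability: str, accuracy: int) -> str:
    """Return a confidence indicator such as 'A1' or 'B3', or raise if invalid."""
    if reliability not in RELIABILITY or accuracy not in ACCURACY:
        raise ValueError(f"unknown band: {reliability}{accuracy}")
    if (reliability, accuracy) in EXCLUDED:
        raise ValueError(f"{reliability}{accuracy} is inconsistent with the scheme")
    return f"{reliability}{accuracy}"

# Example: well-documented metering with some missing records and an
# error range of perhaps a quarter of the reported value -> B3.
print(confidence_grade("B", 3))
```

A regulator could apply such a screen before admitting any reported figure into a benchmarking database, flagging low-confidence grades (C or D reliability, or accuracy bands 3 and 4) for follow-up rather than for target setting.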

The most important elements of the data-collection process are: (1) the creation of a dedicated team to identify the relevant variables and sources of information, (2) involving stakeholders in defining the data, (3) establishing procedures and schedules for collecting and authenticating the information (including secure and cost-effective data-handling techniques), (4) developing policies on disclosure, and (5) analyzing the data and using it to strengthen engagements with different stakeholders. A simple sketch of steps (1) through (4) appears below; the elements are then illustrated by initiatives taken in some of the BRIC nations’ water sectors.
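One way to operationalize these steps is a data dictionary that records, for each variable, its definition, source, owner, collection frequency, verification procedure, and disclosure policy. The minimal Python sketch below uses hypothetical field names and values.

```python
# Hypothetical "data dictionary" entry illustrating steps (1)-(4) above.
# Field names and values are illustrative only.

data_dictionary = [
    {
        "variable": "water_produced_m3",
        "definition": "metered volume leaving treatment plants (thousand m3)",
        "source": "plant master meters",                       # step (1)
        "owner": "operations data team",                       # step (1)
        "agreed_with": ["regulator", "municipality"],          # step (2)
        "frequency": "monthly",                                # step (3)
        "verification": "cross-check against pump run-times",  # step (3)
        "disclosure": "published in annual report",            # step (4)
    },
    # ...one entry per agreed KPI
]

for entry in data_dictionary:
    print(f"{entry['variable']}: collected {entry['frequency']} by "
          f"{entry['owner']}; verified via {entry['verification']}")
```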

Transformation Stage Case (data collection for Brazil’s water sector): In the mid-1990s, Brazil’s Sistema Nacional de Informações sobre Saneamento (SNIS) instituted a data-collection effort to gather comparable financial and operating statistics across the more than 2,000 utilities providing water services to municipalities and states. Participation was voluntary at first, but access to national government funding came to require it. Open-data policies also allow NGOs and other interest groups to assess the situation, report inaccuracies (if encouraged to do so) and help with the enforcement of regulation. The system has promoted transparency and dramatically improved public access to financial and operating information. Today the data collection covers all major utilities and hundreds of smaller municipalities, and data definitions have been standardized.

Involve non-traditional stakeholders such as industry associations. In more stable contexts, other actors could share some of the regulatory responsibilities with public institutions. Industry associations, for example, can play a role in supporting self-regulation.

Transformation Stage Case (networks of water operators in Brazil): In places where multiple entities deliver infrastructure services, there are often associations of operators. For example, in Brazil, the National Association of Municipal Services of Water Supply and Sanitation (ASSEMAE) represents municipal utilities. Although it has a partisan agenda against privatization, it also promotes collaboration and serves as a clearinghouse for improving production processes.

Transformation Stage Case (China’s urban water sector): In another large country case, Chinese water data are rich at the province and city levels but very limited at the utility level. This limits the potential for detailed performance evaluations of city water utilities, and the Chinese national government and local governments do not have a unified evaluation system in place for examining this performance. Therefore, the government and local regulators are unable to steer utilities in a direction leading to performance improvements. China’s Urban Water Association, a nonprofit, has started to collect performance data at the utility level. Although only the largest city utilities report their performance, the number of self-reporting utilities has increased year by year (in a manner similar to Brazil’s experience). These performance indicators include variables such as leakage, staff composition, revenue collection and pricing. Such information can provide a rough picture of the performance of Chinese city water utilities over time.

Develop a guiding coalition for data collection. Motivated staff (with skills in accounting, engineering and other fields) are in a position to identify causes of data limitations and propose budgetary and organizational initiatives that can strengthen data collection, authentication and analysis. A guiding coalition seeking performance improvements should promote professionalism within the operating company through training and continued monitoring of information flows. Data teams should be established, and their work should be given priority within the operating company. A sound data system involves at least five elements: accountability, prioritization, integration, attention to cost effectiveness, and casting a wide net.

  • Accountability: When there is a clear assignment of responsibility for data within the company, there is accountability. Professionals committed to improving information systems bring expertise and energy to this difficult task. However, the formal position (not just the person) must have continuity over time. This formalized role is needed to address turnover problems, which limit data collection: a lack of continuity and commitment creates gaps both in data for a single utility over several years (the time series) and in observations across firms in a single year (the cross section).
  • Prioritization: The person responsible for the data (collection, checking, storage and processing) must be convinced of the importance of his or her role. Besides providing reports to the regulator (or to any external institution), data must be viewed as important and useful for the company—for strategic, operational, administrative and commercial purposes. It is important to develop internal procedures for dealing with potential misreporting.
  • System integration: Duplication of data storage files inside the firm and data reports (in specialized formats) to external institutions must be avoided or reduced. Duplication increases administrative costs and opens the possibility for little “information empires” (silos) where individuals exercise power by withholding data from those who should have access to information.
  • Attention to cost effectiveness: The external monitor needs to be careful about its own information requests. The regulator (or ministry) needs to avoid any hint of micromanagement. Nevertheless, when the starting point is a data void (either no reports or unverifiable data), management has little excuse not to share data with the group providing oversight and with the guiding coalition that is taking steps to improve performance.
  • Casting a wide net: Seek other sources of information, from local authorities receiving reports or from ministries. Tax authorities, censuses and other institutions and official processes can provide valuable information about the inputs and outputs of regulated entities.

When companies and regulators have developed a professional staff committed to data collection and analysis, they are in a position to emphasize data quality.  Highly accurate and reliable data will allow people to make meaningful assessments and comparisons among utilities (or for the same utility over several years).

Drive data reliability and accuracy through regular use and public availability. The operator, the regulator and the public need to be in a position to evaluate trends and options for the future. Using data, and publishing it regularly, invites scrutiny of both performance and data accuracy.

Transformation Stage Case (water and energy regulation in Peru): The water regulator in Peru (SUNASS) initially had few instruments for incentivizing municipal water utilities to improve performance. Benchmarking is one tool SUNASS has used very effectively to identify strong and weak performers. Additionally, the law required utilities to submit business plans to SUNASS for review and approval. This provided the sector regulator with current and projected financial information (including collections and cash flows), operating variables, and targets. Over time, achieving targets has become part of the organizational culture for many of the water utilities SUNASS supervises. Getting the business plans into the public domain greatly improved the transparency of municipal utilities. Similarly, the Peruvian regulatory agency monitoring investment and operations in energy and minerals (Osinergmin) presents a vast amount of information on its website. It promotes capacity building and plays a role in developing (as well as implementing) public policy through the provision of advice and studies.
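As a simple illustration of the kind of benchmarking a SUNASS-style regulator performs, the sketch below ranks utilities on a single KPI, non-revenue water. The utility names and figures are hypothetical; a real exercise would combine several indicators and adjust for each utility’s operating environment.

```python
# Minimal benchmarking sketch with hypothetical utilities and figures.

utilities = {
    # name: (water produced, water billed), thousand m3/year (illustrative)
    "Utility A": (52_000, 38_500),
    "Utility B": (18_300, 15_900),
    "Utility C": (9_700, 5_400),
}

def non_revenue_water(produced: float, billed: float) -> float:
    """Share of produced water generating no revenue (leaks, theft, unbilled use)."""
    return (produced - billed) / produced

# Rank from strongest to weakest performer (lowest NRW first).
for name, (produced, billed) in sorted(
        utilities.items(), key=lambda kv: non_revenue_water(*kv[1])):
    print(f"{name}: NRW = {non_revenue_water(produced, billed):.0%}")
```

Published rankings of this kind let laggards, their customers and their owners see the performance gap, which is one reason regular publication tends to improve both performance and the underlying data.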

In summary, data collection, authentication, and analysis are crucial for evidence-based decision-making within operating companies and by institutions establishing rules and regulations.