eRegistries: indicators for the WHO Essential Interventions for reproductive, maternal, newborn and child health

Assessment of the current status of global indicators

We first assessed the extent to which the 45 WHO Essential Interventions [15] were being addressed in either household or facility surveys, to better understand
the landscape of current data needs and gaps that our indicator development project
should address. Our search focussed on the most recently reported process and outcome
indicators identified from globally recognised sources. First, we examined the common
international databases which compile indicators from various sources. These included
the Countdown to 2015 databases, UNICEF databases, the WHO Global Health Observatory, and UNAIDS. These
sources typically reported household surveys, primarily the large scale multi-country
initiatives of the Demographic and Health Surveys (DHS) and Multiple Indicator Cluster
Survey (MICS). The DHS and MICS websites were then assessed for additional information
that was not yet available in the large compilations already mentioned. The average
value for the most recent estimate of each indicator was calculated in Microsoft Excel
(Microsoft Corp., Redmond, WA).
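The original calculation was performed in Excel; the same "most recent estimate, then average" logic can be expressed as a short script. The indicator name, countries, and values below are illustrative assumptions, not figures from the study.

```python
# Minimal sketch: keep only the most recent estimate per indicator and
# country, then average across countries. Data are illustrative only.
estimates = [
    # (indicator, country, survey year, value in %)
    ("skilled_birth_attendance", "Country A", 2010, 45.0),
    ("skilled_birth_attendance", "Country A", 2013, 52.0),
    ("skilled_birth_attendance", "Country B", 2012, 70.0),
]

latest = {}  # (indicator, country) -> (year, value)
for indicator, country, year, value in estimates:
    key = (indicator, country)
    if key not in latest or year > latest[key][0]:
        latest[key] = (year, value)

by_indicator = {}  # indicator -> most recent value from each country
for (indicator, _country), (_year, value) in latest.items():
    by_indicator.setdefault(indicator, []).append(value)

averages = {ind: sum(vals) / len(vals) for ind, vals in by_indicator.items()}
```

Here the 2010 estimate for Country A is superseded by the 2013 one, so the average is taken over 52.0 and 70.0.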

As household surveys are not designed for, nor capable of, assessing process indicators
relevant to care delivered at facilities, we also assessed the availability of indicators
of key interventions directly from facility-based sources. Eight survey instruments
were identified which collected data through either direct observations of patients
or via medical records review; we did not consider routine health information systems
as each country has its own unique set of indicators that may or may not be available
in the public domain. Two published reviews 16], 17] which identified available survey tools or instruments were sourced. Surveys mentioned
in these two sources that did not focus on health, or were not used in multiple LMIC,
were excluded. In addition, to ensure accuracy, two facility survey programs were
directly contacted for supplemental information. These facility surveys have several
purposes, one of which is to assess the readiness of a facility to implement quality
emergency obstetric care. The survey instruments ultimately included were: Service
Provision Assessment (SPA) (2012 version), Averting Maternal Death and Disability
(AMDD), the Maternal and Child Health Integrated Program Quality of Care survey (MCHIP-QoC),
the Postpartum Hemorrhage Prevention and Treatment questionnaire on Active Management
of the third stage of Labor (POPPHI), the World Bank’s Service Delivery Indicators
tool (SDI) (2012 Kenya version), the WHO's Service Availability and Readiness Assessment
(SARA) (version 2.1), as well as the WHO's surveys on Perinatal Health and on Maternal
and Newborn Health from 2007 and 2010, respectively. All documents reviewed were the
most recent versions available in November 2014, unless otherwise stated. One instrument
included information on management of post-partum haemorrhage and was only included
in our results as a denominator for relevant interventions. Data were extracted from
website materials or survey instruments. Where information was unavailable, survey
support staff were interviewed to identify the number of national facility surveys
performed for the specified interventions. For each intervention, three domains related
to process indicators were assessed: training and knowledge, availability of supplies
(inventory), and whether the intervention was actually performed. The number of survey
instruments collecting any information relevant to each of the 45 Essential Interventions in each of these three domains was measured through careful reading of each survey
module.
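The per-domain counting described above amounts to a small coverage matrix. In the sketch below the instrument names are drawn from the list above, but the coverage flags are hypothetical, not extracted results.

```python
# Hypothetical coverage matrix for one Essential Intervention: whether each
# survey instrument collects any information in each of the three domains.
coverage = {
    "SPA":  {"training": True,  "inventory": True,  "performed": False},
    "AMDD": {"training": False, "inventory": True,  "performed": True},
    "SARA": {"training": True,  "inventory": True,  "performed": False},
}

# Number of instruments collecting any relevant information, per domain
counts = {
    domain: sum(flags[domain] for flags in coverage.values())
    for domain in ("training", "inventory", "performed")
}
```

With these illustrative flags, two instruments cover training and knowledge, three cover inventory, and one records whether the intervention was performed.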

Defining indicators and data points

For each of the 45 Essential Interventions, we conducted a comprehensive search of existing WHO indicators, followed by indicators
from other professional bodies (Table 1). Existing indicators were adapted or developed as required by the eRegistries technical team with reference to the practice guidelines and training manuals cited
within each WHO Essential Intervention, and other resources where appropriate.

Table 1. Major sources for identifying existing indicators

Four types of indicators were defined for each WHO intervention:

process indicator/s for screening/risk identification (the proportion of patients for whom screening tests/risk identification measures
were performed);

outcome indicator/s for screening/risk identification (the proportion of patients screening positive/identified as ‘at-risk’);

process indicator/s for treatment/management (the proportion of patients treated); and

outcome indicator/s for treatment/management (the proportion of patients with adverse outcomes in the population).
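The four indicator types can be illustrated as proportions computed over point-of-care records. The field names and records below are illustrative assumptions only; they are not data points defined by the project.

```python
# Hypothetical point-of-care records for one intervention
records = [
    {"screened": True,  "screen_positive": True,  "treated": True,  "adverse": False},
    {"screened": True,  "screen_positive": False, "treated": False, "adverse": False},
    {"screened": False, "screen_positive": False, "treated": False, "adverse": True},
    {"screened": True,  "screen_positive": True,  "treated": False, "adverse": True},
]

n = len(records)
positives = [r for r in records if r["screen_positive"]]

# 1. Screening process: proportion of patients screened
screening_process = sum(r["screened"] for r in records) / n
# 2. Screening outcome: proportion screening positive / identified at-risk
screening_outcome = len(positives) / n
# 3. Treatment process: proportion of screen-positive patients treated
treatment_process = sum(r["treated"] for r in positives) / len(positives)
# 4. Treatment outcome: proportion with adverse outcomes in the population
treatment_outcome = sum(r["adverse"] for r in records) / n
```

Note that the treatment process indicator is computed over the screen-positive subgroup, while the other three use the full population as the denominator.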

For some interventions, screening/risk assessment indicators were not applicable.
We considered screening/risk indicators not applicable where the given treatment/management
was recommended to all women or babies of a given clearly defined population (e.g.
Antenatal care essential package for all pregnant women; Provision of thermal care
for all newborns to prevent hypothermia; Early initiation and exclusive breastfeeding).

By data points we refer to the primary data captured at the point of care, which
serve as the source of information for numerators and denominators. Data points that could
be readily collected using an electronic form addressing each of the process and outcome
indicators were included. Data items specifically measuring each indicator (numerator
and denominator) were developed, balancing specificity against feasibility across country resource
levels. To maximise the feasibility of data collection, for defined conditions such
as preeclampsia, simple data points recording the diagnosis (yes/no) of the condition
were accepted, rather than data points delineating the individual components of the clinical
and laboratory diagnosis.

Evaluation and refinement of indicators

Evaluation and refinement of the indicators occurred in two stages: 1) Expert panel
evaluation; and 2) Response to feedback and refinement within the eRegistries technical team.

Expert panel evaluation

An international group of 47 experts in maternal and child health was assembled via
the network of the International Stillbirth Alliance. Thirty-four panel members were
invited to participate in consultation round 1; 35 in consultation round 2; and 44
in consultation round 3. Invited evaluators included researchers, senior clinicians
and academics, obstetricians, neonatologists, maternal-fetal medicine specialists,
epidemiologists, consumer advocates and others.

An eRegistries indicator evaluation tool assessing 10 domains was developed. The domains were derived
by the eRegistries technical team after reviewing several existing indicator evaluation frameworks,
including the New Economics Foundation AIMS criteria for indicators [18, 19], the Agency for Healthcare Research and Quality standards by which to judge quality
indicator performance [20], Indicators to Monitor Maternal Health Goals [21], and the SMART criteria [22]. The evaluation tool (Additional file 1) was simplified based on pilot testing with a subsample of the expert panel, culminating
in the final evaluation tool assessing the five domains below.

eRegistries indicator evaluation criteria:

Action focused: “It is clear what needs to be done to improve outcomes associated with this indicator
(e.g., immunised with tetanus toxoid to reduce neonatal tetanus)”

Important: “The indicator and the data generated will make a relevant and significant contribution
to determining how to effectively respond to the problem”

Operational: “The indicator is quantifiable; definitions are precise and reference standards are
developed and tested or it is feasible to do so”

Feasible: “It is feasible to collect data required for indicator in the relevant setting”

Simple and valued: “The people involved in the service can understand and value indicator”

Panel members were asked to indicate their agreement with each of five statements
addressing the above domains via a categorical response (Yes/Probably/Unsure/Possibly/No/Do not wish to respond). A comment box was provided for each indicator to enable detailed feedback. Panel
members were invited to suggest other indicators or adjustments to the existing indicators.
Data were analysed descriptively in Microsoft Excel by tallying the number of responses
for each response category.
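The descriptive analysis amounts to a frequency count per response category; the tallying (done in Excel by the authors) is equivalent to the sketch below, where the responses themselves are made up for illustration.

```python
from collections import Counter

# Hypothetical panel responses for one indicator on one criterion
responses = ["Yes", "Yes", "Probably", "Unsure", "Yes", "No", "Probably"]

tally = Counter(responses)  # number of responses per category
# Proportion of panel members answering "Yes" or "Probably"
agreement = (tally["Yes"] + tally["Probably"]) / len(responses)
```

A summary proportion like `agreement` is one plausible way to flag indicators that consistently fail to meet (or "probably" meet) a criterion, as described below.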

Indicators were evaluated in three consultation rounds: (1) Preconception/periconceptual
care and antenatal care; (2) Childbirth care and postpartum care (of the mother); and
(3) Immediate newborn care, neonatal infection management, and care for small and
ill babies. Additional file 2 presents a list of interventions addressed, including the number of indicators within
each intervention, and where the indicators were sourced. Panel members were assigned
three to four interventions in each round and asked to evaluate all indicators within
the given intervention. Interventions were assigned to evaluators randomly, unless
the panel member indicated a preference based on their area of expertise.

For each intervention the panel members were given a detailed breakdown of the indicators
which included definitions, numerators and denominators, data points, and references.
An evidence summary for the interventions was provided based on the evidence cited
in the Essential Interventions (predominantly Cochrane systematic reviews). Panel
members were given an evaluation sheet along with a separate document containing background
information and evaluation instructions, including further detail on the eRegistries indicator evaluation tool development. Evaluation materials were sent to and returned
by panel members by email.

We adopted a quasi-anonymous approach for indicator evaluation; that is, while individual
panel members may have known the names of other members in the group, individual responses
were not identifiable to the group and panel members were not aware which interventions
had been assigned for evaluation to whom.

Response to feedback and refinement within the eRegistries technical team

Following descriptive analyses, indicators that consistently failed to meet (or ‘probably’
meet) the defined criteria were amended based on the evaluators’ comments, or else
removed if deemed unnecessary for effective monitoring and evaluation of the given
intervention. A series of meetings of the eRegistries technical team was held to review the updated indicators to ensure consistency in
nomenclature across indicators and their data points, numerators and denominators.

Graphical display of potential utilisation of the eRegistries indicators

To inform the plausible utilisation of these indicators, a power graph was created
reflecting different use cases. The power to detect a significant change in a given
indicator was graphed as a function of the indicator prevalence for a given sample
size. Three sample-size scenarios were assumed: 200 births annually, 10,000 births
annually and 500,000 births annually, reflecting a typical rural clinic, a typical
district and a typical LMIC, respectively. The graphed indicator prevalence ranged from
75 % to 0.01 %. The most likely value for each of the indicators was calculated (Additional
file 3) and placed alongside the graph.
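A power curve of this kind can be sketched with the standard normal approximation for comparing two independent proportions. The function below is an illustrative implementation, not the authors' actual computation; the baseline prevalence (10 %), the change to be detected (to 15 %) and the two-sided α of 0.05 are assumptions chosen for the example.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p1, p2, n):
    """Approximate power of a two-sided z-test (alpha = 0.05) comparing two
    independent proportions p1 and p2, with n observations per group."""
    z_alpha = 1.959964                       # Phi^-1(1 - 0.05/2)
    pbar = (p1 + p2) / 2.0
    se0 = math.sqrt(2.0 * pbar * (1.0 - pbar) / n)           # SE under H0
    se1 = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)   # SE under H1
    return norm_cdf((abs(p1 - p2) - z_alpha * se0) / se1)

# Power to detect a change from 10 % to 15 % at the three scenario sizes
for n in (200, 10_000, 500_000):
    print(n, round(power_two_proportions(0.10, 0.15, n), 3))
```

Evaluating the function across a grid of prevalences for each of the three sample sizes yields curves of the shape described above: the same absolute change is far harder to detect at clinic scale than at district or national scale.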