Counting People Reach

Navigating direct/indirect recipients & double counting

Scott Chaplowe

October, 2017

Introduction

With the increased attention to performance accountability, output indicators, such as the number of people reached by services, have understandably taken a backseat to the impact indicators used to determine what difference has been made. However, one should not underestimate the importance of, and challenges in, measuring basic output-level indicators such as people reached by services (AKA “beneficiaries”). This blog examines these challenges, focusing on direct and indirect recipients, as well as double counting. It concludes that there is no universal “recipe” for accurately counting people reached. Instead, approaches should be adapted to the organizational and operational context, with attention to their legitimacy and utility.

Firstly, one may wonder why counting people reached is important: “How does counting numbers improve the quality and appropriateness of services?” Indeed, “counts” are not enough – we want to know what difference is being made, and whether the return on investment is worthwhile.

However, output counts help to “add up” and triangulate with other data sources to determine the worth or utility of interventions (evaluation). Output counts such as people reached can be important measures (when reliable) for assessing evaluation criteria such as coverage and efficiency, and they contribute to the evaluation of effectiveness and impact.


We need to acknowledge that the measurement of higher-level results (impact) is only as reliable as the measurement of the basic, lower-level results (outputs and counts of outreach) that help determine higher-level change.


Evaluation aside, the ability to count the people reached by projects and programs is important for monitoring and managing their implementation, and for informing future planning and the allocation of resources. In fact, in my experience, counting people reached is one of the basic measures I do not need to impress upon program managers, because they need and want to do this to inform their work.

Another consideration to clear up is what to call the people we count. There is a growing critique of the term “beneficiary” in development and humanitarian relief, which I address in my companion blog, “Beneficiary” Revisited. In short, there is an important difference between measuring the outreach or coverage of services, versus the desired, higher-level changes in the behavior (outcomes) or condition (impact) of the people reached. One should not confuse or conflate these by combining measures of different levels of results – people reached by services (outputs) versus those positively impacted (outcomes/impact).

Semantics aside, we should not underestimate the challenges encountered when measuring people reached (service outreach). One such challenge is the disaggregation of service recipients. Demographic disaggregation includes characteristics such as sex, age, disability, ethnicity, education, and income. It is not the focus of this blog, but it is nevertheless important to flag as a critical consideration when designing an M&E system for counting people reached.

The remainder of this blog will focus on the challenges of counting direct and indirect recipients and of avoiding double counting, which can be particularly formidable for organizations operating in complex settings, with multiple programs overlapping in time, location, and population groups.


Direct & Indirect Recipients

One fundamental way that counts of people reached can be disaggregated is by whether they received the service directly or indirectly.  There is no industry-recognized standard for what is meant by direct and indirect recipients, but I have my professional opinion…

Direct recipients can be defined as recipients of services who can be counted by the service provider at the delivery point. In contrast, indirect recipients cannot be directly counted because they receive services apart from the provider and delivery point (e.g., people listening to an HIV/AIDS awareness radio program). “Delivery point” refers to a location where a provider delivers services directly to people. This can be stationary, as with a nurse at a health clinic, or mobile, as with a roaming nurse providing vaccinations at households. The key element is that the provider is present to verify delivery of the service.

Based on this definition, we need to acknowledge that any measure of indirect recipients is only an approximation, because they cannot be verified in person by the service provider. For example, the average listening audience for an awareness-raising radio program in a certain region and time of day is an estimate based on market research. In short, accurate measurement is limited, and we must acknowledge the difference between “evidence” and “proof.”

An important consideration for the measurement of indirect recipients is the degree to which counts are based on assumptions that are too indirect and/or unreliable. Indeed, this is a “judgement call,” and will vary with the organizational and operational context. For instance, an organization counts not only the students reached by a messaging campaign through attendance at school presentations (direct recipients), but also extrapolates an indirect count of the students’ household members (the number of students multiplied by the average household size for the region). Depending on stakeholders and context, the assumption that students will pass the messages on to their family members may be questionable.

However, if students are given a homework assignment to interview one or more family members about the messages provided at school, then this may be considered reliable enough to count family members. Is this completely accurate? No: it is an approximation. Firstly, only family members of students who completed the homework assignment should be counted; secondly, there may be instances where a student “fakes” an interview for the homework assignment.
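
To make the arithmetic concrete, below is a minimal sketch of how such an indirect estimate might be calculated. The figures, and the choice to count only the households of students who completed the assignment while excluding the students themselves (already counted as direct recipients), are illustrative assumptions rather than a prescribed method.

```python
# Hypothetical figures for illustration only.
students_presented = 400      # direct recipients counted at school presentations
students_interviewed = 320    # students who completed the family interview assignment
avg_household_size = 5.2      # regional average from census or survey data

# Estimate household members reached indirectly: count only the households of
# students who completed the assignment, and exclude the student, who is
# already counted as a direct recipient.
indirect_estimate = round(students_interviewed * (avg_household_size - 1))

print(f"Direct recipients:            {students_presented}")
print(f"Indirect recipients (approx): {indirect_estimate}")
```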

Drawing upon examples from a 2017 Technical Note on Counting People Reached that I authored for the International Federation of Red Cross and Red Crescent Societies, Table 1 provides the counting rationale for direct and indirect service recipients across a variety of service types. The examples are illustrative, and the justification will vary according to context, balancing reliability with legitimacy and what is possible given resources and technical expertise.

Double Counting

Another important challenge for counting people reached, whether direct or indirect, is that of double counting, or counting the same person reached by a service provider (organization) more than once in the same reporting period. Double counting is to be avoided because it inflates the count of people reached and is therefore misrepresentative.

Table 2 provides a simple example of how double counting can inflate the total number of people reached beyond the total population. Double counting occurs because the organization aggregates the counts of people reached by each program in its recovery operation, rather than controlling for individuals reached by multiple programs (service types) and adjusting the total count to avoid counting them more than once during the reporting period.

Table 2 is an illustration of double counting that can occur with the delivery of multiple services from one provider. Table 3 summarizes this and other common causes of double counting. Note that sometimes an organization confronts a combination of these challenges.
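
As a rough numeric sketch of the inflation problem (the figures below are hypothetical; Table 2 provides the actual illustration): when program-level counts are simply summed, the reported “total people reached” can exceed the population served.

```python
# Hypothetical figures for illustration only.
population_served = 1000
program_counts = {"shelter": 800, "water and sanitation": 700, "livelihoods": 600}

# Summing program-level counts treats each overlapping individual as a new person.
naive_total = sum(program_counts.values())

print(f"Sum of program counts: {naive_total}")        # 2100 "people reached"
print(f"Population served:     {population_served}")  # the true maximum possible
```

The true number of unique people reached cannot exceed the population served; the gap between the two figures is the double counting.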

Double counting can be reduced by establishing data management systems that carefully track people reached by service type, provider, delivery point, and time period. Oftentimes, such systems are already a regular part of program information management to understand the local context (needs), allocate people and resources, and coordinate services and partners. Some helpful points to keep in mind include:

1.  Anticipate and plan for instances where double counting is more likely. For example, if there is a logical framework, review program components and indicators at each level to help identify where certain target populations, services, or providers may overlap. Relatedly, compare logical frameworks across projects/programs to identify overlapping target populations.

2.  When possible, use a tracking system that can uniquely identify each individual reached by a service, so that at the end of the reporting period there are accurate lists of individuals (by name and/or ID number, recording their sex and age as well as other factors that can be used to analyze and inform programs, such as disability status or at-risk groups) that can be used to make and adjust counts across time, place, provider, and service type. [Mobile data collection software applications can help support this, e.g., Open Data Kit (ODK), Kobo, and Magpi.] A minimal sketch of such de-duplication follows this list.

3.  When working with households, determine from the outset whether individuals will be counted directly, or calculated by multiplying the number of households reached by the average household size. If counting both individuals AND households, make sure the two counting strategies are not applied to overlapping interventions, otherwise the same people may be counted under both.

4.  Mapping the program landscape can help reduce double counting and support the use of catchment counts when appropriate. This involves using maps (paper or computerized) to represent the locations of services and providers. When it can reliably be assumed that all individuals in a given target population will receive at least one service within the service delivery area over the given time period, the total population can be counted as people reached.
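
As noted in point 2 above, a unique identifier is what makes de-duplication possible. Below is a minimal sketch, with hypothetical IDs and service types, of how counts can be adjusted within a reporting period:

```python
# Hypothetical service delivery records for one reporting period.
service_records = [
    {"person_id": "HH-014-02", "service": "cash grant"},
    {"person_id": "HH-014-02", "service": "hygiene kit"},    # same person, second service
    {"person_id": "HH-027-01", "service": "hygiene kit"},
    {"person_id": "HH-031-03", "service": "health referral"},
]

service_contacts = len(service_records)                      # 4 deliveries
unique_people = {r["person_id"] for r in service_records}    # 3 individuals

print(f"Service contacts (with double counting): {service_contacts}")
print(f"Unique people reached:                   {len(unique_people)}")
```

The same approach extends to de-duplicating across providers, locations, and time periods, provided the identifier is applied consistently.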

In summary, there is no “magic recipe” for counting people reached. The ability to reliably control for double counting will depend on a variety of factors in the organizational and operational context, i.e. an organization’s scale and scope of services, and the resources and technical expertise available for aggregating counts of people reached.

For example, an organization may be able to minimize double counting when reaching people with multiple services over time and place by using bar codes to uniquely identify and track individual service recipients; however, this is only possible where access to such technology is affordable and practical.


Conclusion – be realistic and responsible

An important consideration for reliable counts of people reached (as for any data to be collected and reported) is the capacity to manage data along the “data lifecycle.” Data management spans a range of processes, from data collection, verification, organization, storage, analysis, and presentation, to the eventual “retirement” of data. One should not underestimate the capacity requirements for reliable and accountable data management – not just for counting people reached, but for other indicators as well.

For instance, data collection guidance, templates, and forms may be required at the field level, as is training for data collectors (whether they use paper forms or handheld digital devices). From the field level, data management systems may then be required to aggregate, safely store, access, and “roll up” data from the local and regional levels up to country and global offices. Other considerations include quality assurance checks to cross-check and clean data.

With the above in mind, careful attention should be given to what is feasible to reliably collect and report given the existing capacities of a specific program or project. This includes not only existing systems for data management, but also the human and financial resources required to operate them. We do not want the “accountability tail to wag the dog,” meaning we do not want M&E measurement to burden the very service delivery (programming) it is supposed to support.

A final but important reminder – ensure data is managed in a responsible manner that safeguards the dignity, respect, and privacy of the individuals, organizations, and other key groups from whom we collect data. For people reached, this includes balancing key principles related to people’s right to be counted, informed consent, data sharing and transparency, and ensuring that we act in the best interests of the people we count and serve.


Postscript – While this blog represents my own views, they are informed by my prior work with the International Federation of Red Cross and Red Crescent Societies (IFRC), where I led the development of the measurement guidelines for the proxy indicators of the Federation-wide Databank and Reporting System, including the indicator for people reached. The cartoon was illustrated by Julie Smith and is provided courtesy of the IFRC. The picture at the start of this blog was taken in Laos with the family’s consent for it to be used online.