Monitoring and evaluation
Monitoring and Evaluation (M&E) is a combined term for the processes set up by organizations such as companies, government agencies, international organisations and NGOs, with the goal of improving their management of outputs, outcomes and impact. Monitoring includes the continuous assessment of programmes based on early detailed information on the progress or delay of the ongoing assessed activities.[1] Evaluation involves the examination of the relevance, effectiveness, efficiency and impact of activities in the light of specified objectives.[2]
Monitoring and evaluation processes can be managed by the donors financing the assessed activities, by an independent branch of the implementing organization, by the project managers or implementing team themselves, and/or by a private company. The credibility and objectivity of monitoring and evaluation reports depend largely on the independence and expertise of the evaluators, both of which are of major importance for the process to be successful.
Many international organizations, such as the United Nations, USAID, the World Bank Group and the Organization of American States, have used this process for many years. The process is also growing in popularity in developing countries, where governments have created their own national M&E systems to assess development projects, resource management and government activities or administration. Developed countries use the process to assess their own development and cooperation agencies.
Evaluation
M&E is separated into two distinct categories: evaluation and monitoring. An evaluation is a systematic and objective examination of the relevance, effectiveness, efficiency, impact and sustainability of activities in the light of specified objectives.[2] The aim of evaluating projects is to isolate errors in order to avoid repeating them, and to highlight and promote the mechanisms that succeeded, for current and future projects.
An important goal of evaluation is to provide recommendations and lessons to the project managers and implementation teams that have worked on the projects, and to those that will implement and work on similar projects.
Evaluations are also, indirectly, a means of reporting to the donor about the activities implemented and of verifying that the donated funds are being well managed and transparently spent. Evaluators are expected to check and analyze the budget lines and to report their findings.[3] Monitoring and evaluation is also useful in facilities such as hospitals, where it enables donors such as WHO and UNICEF to know whether the funds provided are actually being used to purchase drugs and equipment.
Monitoring
Monitoring is a continuous assessment that aims at providing all stakeholders with early, detailed information on the progress or delay of the ongoing assessed activities.[1] It is an oversight of the activity's implementation stage. Its purpose is to determine whether the planned outputs, deliverables and schedules have been reached, so that action can be taken to correct deficiencies as quickly as possible.
Good planning, combined with effective monitoring and evaluation, can play a major role in enhancing the effectiveness of development programs and projects. Good planning helps focus on the results that matter, while monitoring and evaluation help us learn from past successes and challenges and inform decision making so that current and future initiatives are better able to improve people's lives and expand their choices.[4]
Differences between monitoring and evaluation
In monitoring, feedback and recommendations go routinely to the project manager, while in evaluation this is not necessarily the case. The common ground is that both are management tools. In monitoring, data and information are collected periodically to track progress against the terms of reference; in evaluation, data and information are collected during or in view of the evaluation itself. Monitoring is a short-term assessment that does not consider outcomes and impact, whereas evaluation also assesses outcomes and sometimes longer-term impact. Such impact assessment sometimes takes place after the end of a project, although this is rare because of its cost and the difficulty of determining whether the project is responsible for the observed results.[2] Evaluation is a systematic and objective examination conducted on a monthly and/or yearly basis, unlike monitoring, which is a continuous assessment providing stakeholders with early information. Monitoring tracks activities during the implementation stage, whereas evaluation also covers whether donated funds are well managed and transparently spent.
Importance
Although evaluations are often retrospective, their purpose is essentially forward looking.[5] Evaluation applies the lessons and recommendations to decisions about current and future programmes. Evaluations can also be used to promote new projects, get support from governments, raise funds from public or private institutions and inform the general public on the different activities.[2]
The Paris Declaration on Aid Effectiveness in February 2005 and the follow-up meeting in Accra underlined the importance of the evaluation process and of its ownership by the countries hosting the projects. Many developing countries now have M&E systems, and the trend is growing.[6]
Performance measurement
The credibility of findings and assessments depends to a large extent on the manner in which monitoring and evaluation are conducted. To assess performance, it is necessary to select, before the implementation of the project, indicators that make it possible to rate the targeted outputs and outcomes. According to the United Nations Development Programme (UNDP), an outcome indicator has two components: the baseline, which is the situation before the programme or project begins, and the target, which is the expected situation at the end of the project. An output indicator does not have a baseline, as the purpose of the output is to introduce something that does not yet exist.[7]
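The baseline/target arithmetic can be made concrete with a short, hypothetical sketch; the figures and the helper function below are illustrative and not drawn from the UNDP handbook.

```python
def outcome_progress(baseline: float, target: float, current: float) -> float:
    """Fraction of the baseline-to-target distance covered so far (illustrative helper)."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return (current - baseline) / (target - baseline)

# Hypothetical outcome indicator: vaccination coverage in a project area.
baseline, target = 0.40, 0.80   # 40% coverage before the project; 80% expected at the end
current = 0.55                  # coverage measured at mid-term
print(f"{outcome_progress(baseline, target, current):.0%} of the way from baseline to target")
# prints roughly: 38% of the way from baseline to target
```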
In the United Nations
The most important agencies of the United Nations have a monitoring and evaluation unit. All of these agencies are supposed to follow the common standards of the United Nations Evaluation Group (UNEG). These norms concern the institutional framework and management of the evaluation function, competencies and ethics, and the way evaluations are conducted and reports presented (design, process, team selection, implementation, reporting and follow-up). The group also provides guidelines and relevant documentation to evaluation bodies, whether or not they are part of the United Nations.[8]
Most agencies implementing projects and programmes, even while following the common UNEG standards, have their own handbooks and guidelines on how to conduct M&E. Indeed, the UN agencies have different specialisations, needs and ways of approaching M&E. The Joint Inspection Unit of the United Nations periodically conducts system-wide reviews of the evaluation functions of its 28 participating organizations.[9]
References
1. United Nations Development Programme Evaluation Office. Handbook on Monitoring and Evaluating for Results. http://web.undp.org/evaluation/documents/handbook/me-handbook.pdf
2. UNICEF. A UNICEF Guide for Monitoring and Evaluation: Making a Difference. http://library.cphs.chula.ac.th/Ebooks/ReproductiveHealth/A%20UNICEF%20Guide%20for%20Monitoring%20and%20Evaluation_Making%20a%20Difference.pdf Archived 2020-02-27 at the Wayback Machine.
3. Center for Global Development. US Spending in Haiti: The Need for Greater Transparency and Accountability. http://www.cgdev.org/doc/full_text/CGDBriefs/1426965/US-Spending-in-Haiti-The-Need-for-Greater-Transparency-and-Accountability.html
4. "Handbook on planning, monitoring and evaluating for development results" (PDF). Archived from the original (PDF) on 2012-09-06.
5. McLellan, Timothy (2020-08-25). "Impact, theory of change, and the horizons of scientific practice". Social Studies of Science. 51 (1): 100–120. doi:10.1177/0306312720950830. ISSN 0306-3127. PMID 32842910. S2CID 221326151.
6. "Why is it important to strengthen Civil Society's evaluation capacity? | MY M&E". mymande.org. Archived from the original on 2014-05-27. Retrieved 2014-05-27.
7. United Nations Development Programme Evaluation Office. Handbook on Monitoring and Evaluating for Results. http://web.undp.org/evaluation/documents/handbook/me-handbook.pdf
8. United Nations Evaluation Group (UNEG). "About". Archived from the original on 2013-11-05. Retrieved 2014-05-27.
9. United Nations Joint Inspection Unit. https://www.unjiu.org/
Monitoring and evaluation
Core Concepts
Monitoring
Monitoring constitutes the routine, ongoing process of collecting, analyzing, and reporting data on specified indicators to assess the progress and performance of projects, programs, or interventions.[1] This function enables managers and stakeholders to identify deviations from planned objectives, track resource utilization, and make informed adjustments in real time, thereby enhancing accountability and operational efficiency.[10] Unlike periodic evaluations, monitoring emphasizes continuous observation rather than retrospective judgment, focusing primarily on inputs, activities, outputs, and immediate outcomes to detect issues such as delays or inefficiencies early.[11]

The primary purpose of monitoring is to provide actionable insights for decision-making, ensuring that interventions remain aligned with intended results while minimizing risks of failure or waste.[3] For instance, in development aid programs, it involves verifying whether allocated funds are being used as budgeted and whether activities are yielding expected outputs, such as the number of beneficiaries reached or infrastructure built.[12] Empirical data from monitoring systems have shown that regular tracking can improve project outcomes by up to 20-30% through timely corrective actions, as evidenced in World Bank-reviewed interventions where baseline indicators against targets revealed underperformance in 40% of cases during implementation phases.[13]

Key components of effective monitoring include the establishment of clear, measurable indicators tied to objectives; routine data collection via tools like field reports, surveys, or digital tracking systems; and analytical processes to compare actual performance against baselines and targets.[1] Baselines, established at project inception—such as pre-intervention metrics on poverty rates or service coverage—serve as reference points, with targets set for periodic review, often quarterly or monthly.[13] Data sources must be reliable and verifiable, incorporating both quantitative metrics (e.g., cost per output) and qualitative feedback to capture contextual factors influencing progress.[11]

In practice, monitoring frameworks prioritize causal linkages from activities to outputs, using performance indicators that are specific, measurable, achievable, relevant, and time-bound (SMART).[10] Common methods encompass progress reporting dashboards, key performance indicator (KPI) dashboards, and risk registers to flag variances in schedule, budget, or quality; for example, schedule variance is calculated as earned value minus planned value to quantify delays in aid projects.[3] Stakeholder involvement, including community feedback mechanisms, ensures data reflects ground realities, though challenges such as data inaccuracies or resource constraints can undermine reliability if not addressed through validation protocols.[12]
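The schedule-variance figure mentioned above is simple earned-value arithmetic; the following sketch uses hypothetical numbers (none from a cited project) to show how a monitoring dashboard might flag an activity that is running behind plan.

```python
def schedule_variance(earned_value: float, planned_value: float) -> float:
    """Schedule variance (SV) = earned value (EV) - planned value (PV).
    A negative SV means less work has been completed than was planned by this date."""
    return earned_value - planned_value

# Hypothetical activity in an aid project, both values in USD of budgeted work.
planned_value = 120_000   # budgeted cost of work scheduled by the reporting date
earned_value = 90_000     # budgeted cost of work actually performed
sv = schedule_variance(earned_value, planned_value)
if sv < 0:
    print(f"Behind schedule by {-sv:,.0f} USD of planned work")  # -> 30,000
else:
    print(f"On or ahead of schedule (SV = {sv:,.0f} USD)")
```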
Evaluation

Evaluation constitutes the systematic and objective assessment of an ongoing or completed project, program, or policy, examining its design, implementation, results, and broader effects to determine value, merit, or worth.[14] Unlike continuous monitoring, evaluation typically occurs at discrete intervals, such as mid-term or end-of-project phases, to inform decision-making, accountability, and learning by identifying causal links between interventions and outcomes.[13] This process relies on empirical evidence to test assumptions about effectiveness, often revealing discrepancies between planned and actual results, as evidenced in development aid where evaluations have shown that only about 50-60% of projects meet their stated objectives in rigorous assessments.[15]

Evaluations are categorized by purpose and timing. Formative evaluations, conducted during implementation, aim to improve processes and address emerging issues, such as refining program delivery based on interim feedback.[16] Summative evaluations, performed post-completion, judge overall success or failure against objectives, informing future funding or scaling decisions.[17] Process evaluations focus on implementation fidelity—assessing whether activities occurred as planned and why deviations arose—while outcome evaluations measure immediate effects on direct beneficiaries, and impact evaluations gauge long-term, attributable changes, often using counterfactual methods like randomized controlled trials to isolate causal effects.[18][19]

Standard criteria for conducting evaluations, as codified by the OECD Development Assistance Committee (DAC) in 2019, include relevance (alignment with needs and priorities), coherence (compatibility with other interventions), effectiveness (achievement of objectives), efficiency (resource optimization), impact (broader changes, positive or negative), and sustainability (enduring benefits post-intervention).[20][21] These criteria provide a structured lens for analysis, though their application requires judgment to avoid superficial compliance; for instance, efficiency assessments must account for opportunity costs, not merely cost ratios.[22]

Methods in evaluation encompass qualitative approaches, such as in-depth interviews and thematic analysis to capture contextual nuances; quantitative techniques, including statistical modeling and surveys for measurable indicators; and mixed methods, which integrate both to triangulate findings and mitigate limitations like qualitative subjectivity or quantitative oversight of mechanisms.[23] Peer-reviewed studies emphasize mixed methods for complex interventions, as they enhance causal inference by combining breadth (quantitative) with depth (qualitative), though integration demands rigorous design to prevent methodological silos.[24]

Challenges in evaluation include threats to independence and bias, particularly in development projects where funders or implementers may influence findings to justify continued support, leading to over-optimistic reporting; empirical analyses show that evaluations with greater evaluator autonomy yield 10-20% lower performance ratings on average.[25][26] Attribution errors—confusing correlation with causation—and data limitations further complicate impact claims, underscoring the need for pre-registered protocols and external peer review to uphold credibility.[27] Institutions like the World Bank mandate independent evaluation units to counter such risks, yet systemic pressures from political stakeholders persist.[28]
Key Differences and Interrelationships
Monitoring involves the continuous and systematic collection of data on predefined indicators to track progress toward objectives and the use of resources during project implementation.[1] In contrast, evaluation constitutes a periodic, often independent assessment that determines the merit, worth, or significance of an intervention by examining its relevance, effectiveness, efficiency, and sustainability, typically through triangulated data and causal analysis.[29] Key distinctions include frequency, with monitoring being ongoing and routine, while evaluation occurs at discrete intervals such as mid-term or ex-post; scope, where monitoring emphasizes process-oriented tracking of inputs, activities, and outputs, versus evaluation's focus on outcomes, impacts, and broader contextual factors; and independence, as monitoring is generally internal and managerial, whereas evaluation prioritizes impartiality, often involving external reviewers.[29][1]

| Aspect | Monitoring | Evaluation |
|---|---|---|
| Frequency | Continuous and routine | Periodic (e.g., mid-term, final) |
| Primary Focus | Progress on activities, outputs, and indicators | Effectiveness, impact, relevance, sustainability |
| Data Sources | Routine, indicator-based | Triangulated, multi-method |
| Independence | Internal, managerial | Independent, often external |
| Causal Emphasis | Limited to deviations from plan | Explicit analysis of results chains and factors |
Historical Development
Origins in Scientific Management and Early 20th Century Practices
Frederick Winslow Taylor, often regarded as the father of scientific management, pioneered systematic approaches to workplace efficiency in the late 19th and early 20th centuries through time and motion studies that involved direct observation and measurement of workers' tasks.[30] These methods entailed breaking down jobs into elemental components, timing each to identify the "one best way" of performing them, and evaluating deviations from optimal standards to minimize waste and maximize output.[30] Taylor's 1911 publication, The Principles of Scientific Management, formalized these practices, advocating for scientifically derived performance benchmarks over empirical guesswork, with incentives like bonuses tied to meeting measured time limits—yielding reported productivity gains of 200 to 300 percent in tested cases.[30]

Complementing Taylor's framework, Henry L. Gantt, a collaborator, introduced Gantt charts around 1910 as visual tools for scheduling tasks and tracking progress against timelines in manufacturing and construction projects.[31] These bar charts displayed task durations, dependencies, and completion statuses, enabling managers to monitor real-time adherence to plans and evaluate delays causally, such as resource shortages or inefficiencies.[31] Applied initially in U.S. steel and machinery industries, Gantt charts facilitated quantitative assessment of workflow bottlenecks, aligning with scientific management's emphasis on data-informed adjustments rather than subjective oversight.[31]

These industrial innovations influenced early 20th-century public administration, particularly through the U.S. President's Commission on Economy and Efficiency, established in 1910 under President William Howard Taft to scrutinize federal operations.[32] The commission's reports advocated performance-oriented budgeting, recommending classification of expenditures by function and measurement of outputs to assess administrative efficiency, such as unit costs per service delivered.[33] This marked an initial shift toward empirical monitoring of government activities, evaluating resource allocation against tangible results to curb waste, though implementation faced resistance until the Budget and Accounting Act of 1921 formalized centralized fiscal oversight with evaluative elements.[32][33]

Post-World War II Expansion in Development Aid
Following World War II, the expansion of development aid to newly independent and underdeveloped nations prompted the initial institutionalization of monitoring and evaluation (M&E) practices, driven by the need to oversee disbursements and assess basic project outputs amid surging bilateral and multilateral commitments. President Harry Truman's Point Four Program, announced in his 1949 inaugural address, marked a pivotal shift by committing U.S. technical assistance to improve productivity, health, and education in poor countries, with early monitoring limited to financial audits and progress reports on expert missions rather than comprehensive impact assessments.[34] This initiative influenced the United Nations' creation of the Expanded Programme of Technical Assistance (EPTA) in 1950, which coordinated expert advice and fellowships across specialized agencies, emphasizing rudimentary tracking of implementation milestones to ensure funds—totaling millions annually by the mid-1950s—reached intended agricultural, health, and infrastructure goals.[35] Causal pressures included Cold War imperatives to counter Soviet influence through visible aid successes and domestic demands in donor nations for fiscal accountability, though evaluations remained ad hoc and output-focused, often overlooking long-term causal effects on poverty reduction.

The 1960s accelerated M&E's role as aid volumes grew—U.S. foreign assistance, for instance, encompassed over $3 billion annually by decade's end—and agencies grappled with evident project underperformance. USAID, established in 1961 under the Foreign Assistance Act (P.L. 87-195), initially prioritized large-scale infrastructure with evaluations based on economic rates of return, but by 1968, it created an Office of Evaluation and introduced the Logical Framework (LogFrame) approach, a matrix tool for defining objectives, indicators, and assumptions to enable systematic monitoring of inputs, outputs, and outcomes.[36] Similarly, the World Bank, active in development lending since the late 1940s, confronted 1960s implementation failures—such as delays and cost overruns in rural projects—prompting internal reviews that highlighted the absence of robust data on physical progress and beneficiary impacts, setting the stage for formalized M&E units.[37] These developments reflected first-principles recognition that unmonitored aid risked inefficiency, with congressional mandates like the 1968 Foreign Assistance Act amendment (P.L. 90-554) requiring quantitative indicators to justify expenditures amid taxpayer scrutiny.

By the early 1970s, M&E expanded as a professional function in response to shifting aid paradigms toward basic human needs and rural poverty alleviation, with the World Bank's Agriculture and Rural Development Department establishing a dedicated Monitoring Unit in 1974 to track key performance indicators (KPIs) like budget adherence and target achievement across global portfolios.[37] Donor agencies, including USAID, increasingly incorporated qualitative methods such as surveys and beneficiary feedback, though challenges persisted due to capacity gaps in recipient countries and overreliance on donor-driven metrics that sometimes ignored local causal dynamics.
This era's growth—spurred by UN efforts in the 1950s to build national planning capacities and OECD discussions on aid effectiveness—laid groundwork for later standardization, as evaluations revealed that without rigorous tracking, aid often failed to achieve sustained development outcomes, prompting iterative refinements in methodologies.[38] Empirical data from early assessments, such as U.S. Senate reviews admitting difficulties in proving post-WWII aid's net impact, underscored the causal necessity of M&E for evidence-based allocation amid billions in annual flows.[36]

Modern Standardization from the 1990s Onward
In 1991, the Organisation for Economic Co-operation and Development's Development Assistance Committee (OECD DAC) formalized a set of five core evaluation criteria—relevance, effectiveness, efficiency, impact, and sustainability—to standardize assessments of development cooperation efforts.[22] These criteria, initially outlined in DAC principles and later detailed in the 1992 DAC Principles for Effective Aid, provided a harmonized framework for determining the merit and worth of interventions, shifting evaluations from ad hoc reviews toward systematic analysis of outcomes relative to inputs and objectives.[20] Adopted widely by bilateral donors, multilateral agencies, and national governments, they addressed inconsistencies in prior practices by emphasizing empirical evidence of causal links between activities and results, though critics noted their initial focus overlooked broader systemic coherence.[39]

The late 1990s marked the widespread adoption of results-based management (RBM) as a complementary standardization tool, particularly within the United Nations system, to integrate monitoring and evaluation into programmatic planning and accountability.[40] RBM, which prioritizes measurable outputs, outcomes, and impacts over mere activity tracking, was implemented across UN agencies starting around 1997–1998 to enhance transparency and performance in resource allocation amid growing demands for aid effectiveness.[41] Organizations like the World Bank and UNDP incorporated RBM into operational guidelines, producing handbooks such as the World Bank's Ten Steps to a Results-Based Monitoring and Evaluation System (2004), which codified processes for designing indicators, baselines, and verification methods to support evidence-based decision-making.[13] This approach, rooted in causal realism by linking interventions to verifiable results chains, reduced reliance on anecdotal reporting but faced implementation challenges in data-scarce environments.

From the early 2000s onward, these standards evolved through international commitments like the 2005 Paris Declaration on Aid Effectiveness, which embedded M&E in principles of ownership, alignment, and mutual accountability, prompting donors to harmonize reporting via shared indicators.[42] The Millennium Development Goals (2000–2015) further standardized global M&E by establishing time-bound targets and disaggregated metrics, influencing over 190 countries to adopt compatible national systems.[43] In 2019, the OECD DAC revised its criteria to include coherence, reflecting empirical lessons from prior evaluations that isolated assessments often missed inter-sectoral interactions and external influences.[44] Despite these advances, standardization efforts have been critiqued for privileging quantifiable metrics over qualitative causal insights, with institutional sources like UN reports acknowledging persistent gaps in capacity and bias toward donor priorities.[45]

Methods and Frameworks
Data Collection and Analysis Techniques
Quantitative and qualitative data collection techniques form the foundation of monitoring and evaluation, enabling the systematic gathering of evidence on program inputs, outputs, outcomes, and impacts. Quantitative methods prioritize numerical data to measure predefined indicators, facilitating comparability and statistical rigor, while qualitative methods capture nuanced, non-numerical insights into processes, perceptions, and contextual factors. Mixed-method approaches, integrating both, are frequently employed to triangulate evidence, address gaps in single-method designs—such as the lack of depth in purely quantitative assessments—and enhance overall validity.[13][46]

Common quantitative techniques include structured surveys and questionnaires with closed-ended questions, such as multiple-choice or Likert scales, which efficiently collect data from large samples to track progress against baselines or benchmarks.[47] Administrative records, household surveys like the Core Welfare Indicators Questionnaire (CWIQ), and secondary sources—such as national censuses or program databases—provide reliable, cost-effective data for ongoing monitoring and historical comparisons.[13] Structured observations, using checklists to record specific events or behaviors, quantify real-time performance in operational settings.[47]

Qualitative techniques emphasize exploratory depth, with in-depth interviews eliciting individual perspectives from key informants and focus group discussions revealing group dynamics among 6-10 participants.[47] Case studies integrate multiple data sources for holistic analysis of specific instances, while document reviews and direct observations uncover implementation challenges not evident in metrics alone.[48]

Analysis of quantitative data typically involves descriptive statistics—frequencies, means, and percentages—to summarize trends, alongside inferential techniques like regression models to test associations and infer causality from monitoring datasets.[49] Qualitative analysis employs thematic coding and content analysis to identify recurring patterns, often supported by triangulation with quantitative findings for robust interpretation.[13] Advanced methods, such as econometric modeling or cost-benefit analysis, assess long-term impacts in evaluations, drawing on client surveys and CRM system data where applicable.[48]

Best practices stress piloting tools to ensure reliability and validity, selecting methods aligned with evaluation questions, and incorporating stakeholder input to maintain relevance and ethical standards.[13] Data quality checks, including timeliness and completeness, are essential to support causal inferences and adaptive decision-making.[13]
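As a small illustration of the quantitative analysis step, the sketch below computes the descriptive statistics commonly reported in routine monitoring, using hypothetical household-survey values and only the Python standard library.

```python
import statistics

# Hypothetical monitoring data: litres of safe water available per person per day,
# one value per surveyed household.
values = [12, 15, 9, 20, 14, 11, 18, 13, 16, 10]
target = 15  # illustrative programme target (litres/person/day)

print(f"n               : {len(values)}")
print(f"mean            : {statistics.mean(values):.1f}")
print(f"median          : {statistics.median(values):.1f}")
print(f"std. deviation  : {statistics.stdev(values):.1f}")
print(f"share >= target : {sum(v >= target for v in values) / len(values):.0%}")
```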
Logical Framework Approach and Results-Based Management

The Logical Framework Approach (LFA), also known as the logframe, is a systematic planning and management tool that structures project elements into a matrix to clarify objectives, assumptions, and causal linkages, facilitating monitoring through indicators and evaluation via verification mechanisms.[50] Developed in 1969 by Practical Concepts Incorporated for the United States Agency for International Development (USAID), it emerged as a response to challenges in evaluating aid effectiveness by emphasizing vertical logic—where activities lead to outputs, outputs to purposes (outcomes), and purposes to overall goals (impacts)—while incorporating horizontal elements like risks.[51] In monitoring and evaluation (M&E), LFA supports ongoing tracking by defining measurable indicators for each objective level and sources of data (means of verification), enabling periodic assessments of progress against planned results, though critics note its rigidity can overlook emergent risks if assumptions prove invalid.[52]

The core of LFA is a 4x4 matrix that captures:

| Hierarchy of Objectives | Indicators | Means of Verification | Assumptions/Risks |
|---|---|---|---|
| Goal (long-term impact) | Quantitative/qualitative measures of broader societal change | Reports from national statistics or independent audits | External policy stability supports sustained impact |
| Purpose (outcome) | Metrics showing direct beneficiary improvements, e.g., 20% increase in literacy rates | Baseline/endline surveys or administrative data | Beneficiaries adopt trained skills without disruption |
| Outputs (immediate results) | Counts of deliverables, e.g., 50 schools constructed | Project records or site inspections | Supply chains remain uninterrupted |
| Activities/Inputs (resources used) | Timelines and budgets, e.g., training 100 teachers by Q2 | Financial logs and activity reports | Funding and personnel availability |
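One way to see how the matrix supports monitoring is to treat each row as a small data record. The sketch below is a hypothetical representation (the class and field names are not from any standard logframe tool), populated with a purpose-level row loosely based on the example table above.

```python
from dataclasses import dataclass, field

@dataclass
class LogframeRow:
    """One level of a logical framework matrix (hypothetical representation)."""
    level: str                                  # "Goal", "Purpose", "Output" or "Activity"
    objective: str
    indicators: list[str] = field(default_factory=list)
    means_of_verification: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)

# Illustrative purpose-level row.
purpose = LogframeRow(
    level="Purpose",
    objective="Improved literacy among school-age children in the project area",
    indicators=["20% increase in literacy rates"],
    means_of_verification=["Baseline/endline surveys", "Administrative data"],
    assumptions=["Beneficiaries adopt trained skills without disruption"],
)
print(f"{purpose.level}: {purpose.objective} -> {purpose.indicators[0]}")
```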
Performance Indicators and Metrics
Performance indicators in monitoring and evaluation (M&E) are quantifiable or qualifiable measures designed to track inputs, processes, outputs, outcomes, and impacts of programs, projects, or policies against intended objectives.[15] These indicators provide objective data for assessing efficiency, effectiveness, and sustainability, enabling stakeholders to identify deviations from targets and inform adaptive decision-making.[48] Metrics, often used interchangeably with indicators in M&E contexts, emphasize the numerical or standardized quantification of performance, such as rates, percentages, or counts, to facilitate comparability across time periods or entities.[13]

Key types of performance indicators align with the results chain in M&E frameworks (a short illustrative sketch follows the list):

- Input indicators measure resources allocated, such as budget expended or staff hours invested; for instance, the number of training sessions funded in a health program.[15]
- Process indicators gauge implementation activities, like the percentage of project milestones completed on schedule.[12]
- Output indicators assess immediate products, such as the number of individuals trained or infrastructure units built.[15]
- Outcome indicators evaluate short- to medium-term effects, for example, the reduction in disease incidence rates following vaccination campaigns.[15]
- Impact indicators track long-term changes, such as overall poverty levels in a beneficiary population, though these often require proxy measures due to attribution challenges.[12]
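As flagged above, here is a minimal sketch of the typology; the enum, dictionary, and indicator names are illustrative rather than taken from any cited framework.

```python
from enum import Enum

class ResultsLevel(Enum):
    INPUT = 1
    PROCESS = 2
    OUTPUT = 3
    OUTCOME = 4
    IMPACT = 5

# Hypothetical indicators for a health programme, keyed by their place in the results chain.
indicators = {
    ResultsLevel.INPUT: ["Budget expended on training (USD)"],
    ResultsLevel.PROCESS: ["% of project milestones completed on schedule"],
    ResultsLevel.OUTPUT: ["Number of health workers trained"],
    ResultsLevel.OUTCOME: ["Reduction in disease incidence per 1,000 population"],
    ResultsLevel.IMPACT: ["Change in poverty rate in the beneficiary population"],
}

# Walk the chain in causal order, from resources invested to long-term change.
for level in sorted(indicators, key=lambda lv: lv.value):
    for name in indicators[level]:
        print(f"{level.name.lower():<8} {name}")
```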
