
Master Monitoring and Evaluation: Take the Quiz!

Ready for monitoring and evaluation MCQs? Jump into this M&E quiz and test your expertise!

Editorial review completed · Created by Lindsay Mcchesney · Updated Aug 28, 2025
Difficulty: Moderate
2-5 mins

This Monitoring and Evaluation quiz helps you practice core M&E skills such as indicators, baselines, data quality checks, and reporting. Use it to spot gaps before an exam or project review and to decide what to study next. For more depth, try the program evaluation quiz or this evaluation practice set.

What is the primary purpose of monitoring in Monitoring and Evaluation (M&E)?
To measure long-term impacts several years after completion
To conduct a comprehensive cost-benefit analysis
To establish initial conditions before a project begins
To track implementation progress and outputs
Monitoring in M&E refers to the continuous collection and analysis of data on activities and outputs to ensure a project or program stays on track, enabling timely adjustments. It focuses on immediate progress rather than long-term impact, which is assessed through evaluation. It also differs from baseline studies and financial analyses.
Which of the following is an example of an output indicator?
Reduction in disease incidence in a community
Percentage increase in household income
Number of training sessions conducted
Degree of policy adoption by government
Output indicators measure the direct products of project activities, such as the number of events held or materials distributed. They do not capture longer-term outcomes like income changes or health improvements. Tracking outputs helps in assessing whether activities are being delivered as planned.
The SMART criteria for indicators stand for Specific, Measurable, Achievable, Relevant, and what?
Technical
Testable
Time-bound
Transparent
SMART indicators must be time-bound to ensure they have a clear timeframe for achievement. This makes tracking progress and accountability possible. The other elements - Specific, Measurable, Achievable, and Relevant - define what is measured and ensure targets are realistic.
Which type of evaluation is conducted during project implementation to allow for adjustments?
Impact evaluation
Formative evaluation
Ex-post evaluation
Summative evaluation
Formative evaluation occurs during project implementation and provides feedback for improving or adjusting the intervention. It contrasts with summative evaluation, which assesses outcomes after completion. Impact evaluations focus on the causal effects rather than mid-course corrections.
In M&E, what does a baseline refer to?
Final outcome measurements
Data collected before the start of an intervention
Comparative data from a control group
Targets set for the end of the project
A baseline is the set of data collected before an intervention begins, providing a reference point against which future progress or outcomes can be compared. It does not refer to targets or final measurements. Having a strong baseline is crucial for understanding change attributable to the project.
What is the primary role of a logic model in M&E?
To depict the causal chain linking inputs, activities, outputs, and outcomes
To forecast project budgets
To train staff in data-entry methods
To audit compliance with financial regulations
A logic model visually represents the causal pathway from inputs and activities to outputs, outcomes, and impacts. It clarifies assumptions and helps stakeholders understand how and why a program is expected to work. This tool is foundational for designing M&E frameworks and identifying indicators.
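To make the causal chain concrete, here is a minimal sketch of a logic model expressed as a plain Python data structure. The stage names follow the standard inputs-to-impact chain described above; the example entries are invented for illustration.

```python
# A minimal logic-model sketch; the entries are illustrative assumptions,
# not drawn from any specific program.
logic_model = {
    "inputs": ["funding", "trainers", "curriculum"],
    "activities": ["run training sessions", "distribute materials"],
    "outputs": ["number of sessions held", "participants trained"],
    "outcomes": ["improved practices adopted by participants"],
    "impact": ["sustained change in community-level indicators"],
}

# Walk the causal chain from inputs to impact.
for stage, items in logic_model.items():
    print(f"{stage:>10}: {', '.join(items)}")
```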
Which evaluation criterion assesses the degree to which objectives are being met?
Effectiveness
Relevance
Efficiency
Sustainability
Effectiveness measures the extent to which an intervention has achieved its stated objectives. It differs from efficiency, which concerns resource use, and relevance, which looks at the alignment with needs or priorities. Sustainability examines long-term continuation of benefits.
Which of the following is a qualitative data collection method?
Key performance indicator dashboard
Focus group discussion
Household survey with closed questions
Structured questionnaire
Focus group discussions are a qualitative method that gathers in-depth insights from participants on perceptions and experiences. Structured questionnaires and closed-question surveys are quantitative, producing numerical data. Dashboards visualize existing indicators rather than collect qualitative data.
Which of the following is an example of an impact indicator?
Number of workshops held
Percentage change in community literacy rate
Amount of funding disbursed
Timeliness of report submission
An impact indicator measures the long-term effects of an intervention on target populations or systems, such as changes in literacy rates. Outputs like workshop counts and financial disbursements are short-term and do not capture the broader impact.
Which framework is commonly used to define evaluation criteria such as relevance, effectiveness, and sustainability in international development?
Balanced Scorecard
Theory of Constraints
Results-Based Management (RBM)
OECD-DAC Evaluation Criteria
The OECD-DAC Evaluation Criteria provide a widely accepted framework for assessing development interventions across dimensions like relevance, effectiveness, efficiency, impact, and sustainability. RBM is a broader management approach, while the Balanced Scorecard and Theory of Constraints serve different planning and performance contexts.
What is the main difference between formative and summative evaluation?
Formative occurs before design; summative occurs during
Summative is only for cost analysis; formative measures impact
Summative collects qualitative data; formative collects quantitative data
Formative occurs during implementation to improve design; summative occurs after completion to assess outcomes
Formative evaluation is conducted during program implementation to provide feedback for refinement and improvement, while summative evaluation occurs after completion to assess overall performance and outcomes. The distinction lies in timing and purpose, not in data types or cost analysis; both can use qualitative and quantitative methods.
Which study design provides the strongest evidence for causal attribution in impact evaluation?
Case study design
Randomized controlled trial (RCT)
Before-and-after study without a comparison group
Cross-sectional survey
Randomized controlled trials (RCTs) randomly assign participants to intervention and control groups, minimizing bias and confounding, thus providing the strongest evidence for causal attribution. Other designs lack the same level of control over external factors.
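To illustrate the mechanics of random assignment, the sketch below splits a hypothetical participant list into treatment and control arms using Python's standard library. The IDs and seed are assumptions made for the example, not part of any real trial protocol.

```python
import random

# Randomly assign 100 hypothetical participants to two equal arms.
participants = [f"P{i:03d}" for i in range(1, 101)]

rng = random.Random(42)      # fixed seed so the example is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
treatment, control = shuffled[:half], shuffled[half:]
print(len(treatment), len(control))  # -> 50 50
```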
In an evaluation context, what does 'attribution' refer to?
Assigning credit for outcomes to the intervention
Sharing project responsibilities among stakeholders
Allocating the project budget to activities
Randomly selecting participants for data collection
Attribution is the process of establishing a causal link between an intervention and observed results, distinguishing the effect of the program from other external factors. It is different from budget allocation or stakeholder roles. Accurate attribution supports accountability and learning.
Which sampling method ensures every member of a population has an equal chance of selection?
Purposive sampling
Simple random sampling
Convenience sampling
Stratified sampling
Simple random sampling gives each population member an equal probability of being selected, reducing selection bias. Stratified sampling groups the population and then samples within strata, while purposive and convenience methods do not guarantee equal chance for all.
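The contrast is easy to see in code. This sketch draws a simple random sample (equal selection probability for every unit) and a stratified sample (independent draws within predefined groups) from a hypothetical sampling frame; the frame and strata below are invented for illustration.

```python
import random

population = list(range(1, 1001))        # hypothetical sampling frame

# Simple random sampling: every unit has the same chance of selection.
srs = random.sample(population, k=50)

# Stratified sampling: partition the frame, then sample within each stratum.
strata = {"urban": population[:600], "rural": population[600:]}
stratified = {name: random.sample(units, k=25) for name, units in strata.items()}

print(len(srs), {name: len(s) for name, s in stratified.items()})
```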
Which data quality dimension assesses whether the data collection method measures what it intends to measure?
Precision
Validity
Reliability
Timeliness
Validity refers to the extent to which data collection instruments accurately measure the intended concept. Reliability is about consistency over time, while timeliness and precision address other dimensions. Valid data support credible findings.
In contribution analysis, what is the first step?
Articulate the theory of change or results chain
Test alternative explanations
Calculate the cost-effectiveness ratio
Gather evidence on outcome achievements
The first step in contribution analysis is to articulate the theory of change or results chain, which maps out how activities are expected to lead to outcomes. Only after this framework is set can evaluators gather evidence and test alternative explanations.
In a randomized controlled trial (RCT), what is the primary purpose of blinding?
To standardize the intervention delivery protocol
To increase statistical power by reducing sample size requirements
To minimize biases by preventing participants or researchers from knowing group assignments
To improve participant recruitment rates
Blinding in RCTs prevents participants, researchers, or outcome assessors from knowing which group receives the intervention, reducing performance and detection biases. This enhances the credibility of causal inferences. It does not directly affect recruitment, power, or protocol standardization.
Which cost-effectiveness metric compares alternatives by the incremental cost per additional unit of outcome achieved?
Internal Rate of Return (IRR)
Incremental Cost-Effectiveness Ratio (ICER)
Cost-Benefit Ratio (CBR)
Net Present Value (NPV)
The Incremental Cost-Effectiveness Ratio (ICER) expresses the additional cost required to gain one extra unit of outcome when comparing two interventions. It is the standard metric for cost-effectiveness analysis. Other metrics like NPV or IRR serve different economic evaluation purposes.
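A short worked example may help: the sketch below computes an ICER as the incremental cost divided by the incremental outcome when comparing a new intervention against the status quo. All figures are invented for illustration.

```python
# ICER = (cost_new - cost_old) / (effect_new - effect_old)
cost_old, effect_old = 100_000.0, 400.0   # e.g. children vaccinated
cost_new, effect_new = 150_000.0, 520.0   # invented comparison figures

icer = (cost_new - cost_old) / (effect_new - effect_old)
print(f"ICER: ${icer:,.2f} per additional unit of outcome")
# -> ICER: $416.67 per additional unit of outcome
```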

Study Outcomes

  1. Understand Core M&E Concepts -

    Identify and define key monitoring and evaluation frameworks and terminology to establish a solid foundation for project oversight.

  2. Differentiate Monitoring vs. Evaluation -

    Analyze the distinct objectives, timelines, and methodologies of monitoring versus evaluation to improve project performance tracking.

  3. Apply Data Collection Techniques -

    Implement the survey, interview, and data-management best practices highlighted in this quiz to ensure accurate and reliable information gathering.

  4. Assess Impact Measurement Methods -

    Critically evaluate the quantitative and qualitative impact assessment approaches featured in these MCQs to determine program effectiveness and outcomes.

  5. Prepare for Certification Exams -

    Enhance readiness for a monitoring and evaluation question paper or professional exam by practicing targeted questions and answers that mirror real-world scenarios.

Cheat Sheet

  1. SMART Indicators -

    Review how Specific, Measurable, Achievable, Relevant, and Time-bound criteria guide indicator selection in monitoring and evaluation questions and answers. For example, "Number of community meetings held by Q3" meets all SMART elements. Mnemonic trick: "S.M.A.R.T keeps M&E on track."

  2. Logical Framework (Logframe) -

    Understand the hierarchy of Goal - Purpose - Outputs - Activities to structure coherent M&E responses. A typical logframe flows from Inputs → Activities → Outputs → Outcomes → Impact, clarifying causality. Tip: remember it as IPOOI (Input, Process, Output, Outcome, Impact), where Process corresponds to Activities.

  3. Data Quality Dimensions -

    Master reliability, validity, timeliness, precision, and integrity when answering monitoring and evaluation MCQs. For instance, Cronbach's alpha (α ≥ 0.7) indicates internal consistency for survey items; a worked alpha calculation appears after this list. Use the acronym "RVTPI" to recall all five dimensions.

  4. Sampling Methods & Sample Size Formula -

    Differentiate between simple random, stratified, and cluster sampling to tackle monitoring and evaluation test scenarios confidently. Apply the sample size equation n = (Z² × p × (1 - p)) ÷ d² to calculate your survey needs; a worked example appears after this list. Remember "RS-Cluster" to signpost Random, Stratified, then Cluster approaches.

  5. Evaluation Types: Formative vs. Summative -

    Know when to use formative (ongoing improvement) versus summative (final judgment) evaluations in your monitoring and evaluation question paper. Formative answers "How can we improve?" while summative answers "Did we achieve our goals?" Brush up on process, outcome, and impact evaluation distinctions.
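As flagged in item 3, here is a minimal sketch of Cronbach's alpha computed from first principles with the standard library. The respondent data are invented, and a real analysis would typically use a dedicated statistics package.

```python
import statistics

# Invented data: rows are respondents, columns are 4 survey items.
items = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]

k = len(items[0])
columns = list(zip(*items))
item_vars = [statistics.pvariance(col) for col in columns]   # per-item variance
totals = [sum(row) for row in items]                         # total score per respondent
total_var = statistics.pvariance(totals)

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")  # >= 0.7 suggests internal consistency
```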
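And for item 4, a worked application of the sample size formula n = (Z² × p × (1 − p)) ÷ d², using commonly assumed defaults: 95% confidence (Z = 1.96), p = 0.5 (the most conservative choice), and a 5% margin of error.

```python
import math

z = 1.96    # Z-score for 95% confidence
p = 0.5     # assumed proportion; 0.5 maximizes the required sample size
d = 0.05    # desired margin of error

n = (z**2 * p * (1 - p)) / d**2
print(math.ceil(n))  # -> 385 respondents
```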
