Despite this, there is broad consensus that the major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically driven feedback. In addition, there may be cases where non-experimental designs are the only feasible option for impact evaluation, such as universally implemented programmes or national policy reforms in which no isolated comparison groups are likely to exist.
Interfering events. Interfering events are similar to secular trends; in this case it is short-term events that produce changes that may introduce bias into estimates of program effect. For example, a power outage that disrupts communications or hampers the delivery of food supplements may interfere with a nutrition program (Rossi et al.).
Furthermore, program participants may be disadvantaged if the bias runs in a direction that makes an ineffective or harmful program seem effective. Separately, it has been estimated that RCTs are applicable to only about 5 percent of development finance.
It would also be legitimate to include in this category the Logical Framework or "Logframe" model developed at the U.S. Agency for International Development, along with general systems theory and operations research approaches. Such approaches call for rigorous factual analysis of the links in the causal chain and an understanding of the context, including the social, political and economic setting of the intervention.
Quasi-experimental methods include matching, differencing, instrumental variables and the pipeline approach; they are usually carried out by multivariate regression analysis. Debates that rage within the evaluation profession -- and they do rage -- are generally battles between these different strategists, with each claiming the superiority of their position.
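The "differencing" approach mentioned above can be illustrated with a minimal difference-in-differences calculation. The group means below are hypothetical, chosen only to show the arithmetic; in practice the same estimate would come from a multivariate regression that also adjusts for covariates.

```python
# Hypothetical before/after outcome means for treated and comparison groups.
# Difference-in-differences compares the change over time in the treated
# group with the change over time in the comparison group.
treated_before, treated_after = 50.0, 65.0
control_before, control_after = 48.0, 55.0

# Program effect estimate = treated change minus comparison change.
did = (treated_after - treated_before) - (control_after - control_before)
print(did)  # 8.0
```

The subtraction of the comparison group's change is what nets out secular trends that would otherwise be attributed to the program.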
Are they recommendations or are they conclusions? Evaluation is the systematic assessment of the worth or merit of some object. This definition is hardly perfect.
The random assignment of individuals to program and control groups allows for making the assumption of continuing equivalence. Theory-based approaches use both quantitative and qualitative data collection, and the latter can be particularly useful in understanding the reasons for compliance and therefore whether and how the intervention may be replicated in other settings.
Propensity score matching (PSM) uses a statistical model to calculate the probability of participating on the basis of a set of observable characteristics, and matches participants and non-participants with similar probability scores.
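The matching step of PSM can be sketched as greedy nearest-neighbour matching without replacement. This is a minimal illustration: the unit IDs and propensity scores below are hypothetical, and the scores are assumed to have been estimated already (for example, by a logistic regression of participation on observable characteristics).

```python
def nearest_neighbour_match(treated, controls):
    """Greedy 1:1 matching without replacement on propensity scores.

    `treated` and `controls` map unit IDs to (already estimated)
    propensity scores; each participant is paired with the
    non-participant whose score is closest.
    """
    matches = {}
    available = dict(controls)
    for unit, score in treated.items():
        # Pick the still-unmatched comparison unit with the closest score.
        best = min(available, key=lambda c: abs(available[c] - score))
        matches[unit] = best
        del available[best]  # matching without replacement
    return matches

# Hypothetical scores: two participants, three non-participants.
treated = {"p1": 0.80, "p2": 0.30}
controls = {"c1": 0.75, "c2": 0.35, "c3": 0.50}
print(nearest_neighbour_match(treated, controls))  # {'p1': 'c1', 'p2': 'c2'}
```

Outcomes are then compared across the matched pairs; real applications add refinements such as caliper limits on the allowed score distance.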
But the need to improve, update and adapt these methods to changing circumstances means that methodological research and development needs to have a major place in evaluation work.
In formulating and conceptualizing, methods that might be used include brainstorming, focus groups, nominal group techniques, Delphi methods, brainwriting, stakeholder analysis, synectics, lateral thinking, input-output analysis, and concept mapping.
With all of these strategies to choose from, how to decide? There is no inherent incompatibility between these broad strategies -- each of them brings something valuable to the evaluation table. However, in practice, it cannot be guaranteed that treatment and comparison groups are comparable and some method of matching will need to be applied to verify comparability.
Summative evaluations, in contrast, examine the effects or outcomes of some object -- they summarize it by describing what happens subsequent to delivery of the program or technology; assessing whether the object can be said to have caused the outcome; determining the overall impact of the causal factor beyond only the immediate target outcomes; and, estimating the relative costs associated with the object.
Random sample surveys, in which the sample for the evaluation is chosen randomly, should not be confused with experimental evaluation designs, which require the random assignment of the treatment.
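The distinction above can be made concrete with a small sketch; the population size and sample sizes are hypothetical.

```python
import random

# Hypothetical evaluation: 100 eligible units, identified by ID.
population = [f"unit{i:03d}" for i in range(100)]
rng = random.Random(42)

# Random *sampling* chooses which units are surveyed for the evaluation;
# it says nothing about who receives the treatment.
survey_sample = rng.sample(population, 20)

# Random *assignment* allocates the treatment among the study units,
# which is what an experimental design requires.
shuffled = rng.sample(survey_sample, len(survey_sample))
treatment, control = shuffled[:10], shuffled[10:]
```

A survey can be randomly sampled yet still observe a non-randomly assigned program, which is why a random sample survey alone does not make an evaluation experimental.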
If selection characteristics are known and observed, they can be controlled for to remove the bias. With that said, randomized field experiments are not always feasible to carry out, and in these situations there are alternative research designs at the disposal of an evaluator.
Defining the key evaluation questions (KEQs) the impact evaluation should address. Impact evaluations should be focused on answering a small number of high-level KEQs that will be answered through a combination of evidence. These questions should be clearly linked to the evaluative criteria.
Background Paper 11 - The Evaluation of Politics and the Politics of Evaluation. DLP focuses on the crucial role of home-grown leaderships and coalitions in forging legitimate political settlements.
Impact evaluation assesses the changes that can be attributed to a particular intervention, such as a project, program or policy, both the intended ones and, ideally, the unintended ones.
In contrast to outcome monitoring, which examines whether targets have been achieved, impact evaluation is structured to answer the question: how would outcomes such as participants' well-being have changed if the intervention had not been undertaken?
Boxes and worksheets referenced include: Evaluations and Political Sustainability: The Progresa/ Impact Evaluation Methods; Cost of Impact Evaluations of a Selection of World Bank–Supported Projects; Disaggregated Costs of a Selection of World Bank–Supported Projects; Work Sheet for Impact Evaluation Cost Estimation; Sample Impact Evaluation Budget.