In the six decades since many low- and middle-income countries (LMICs) achieved independence, attempts by multilateral and bilateral agencies to provide effective foreign assistance have met with mixed results. There is a continuing need to strengthen understanding of program dynamics, their synergies, and their results (both intended and unintended). There is also a pressing need to establish common ground among donors, implementing agencies, other stakeholders, and potential evaluators.
It is in light of these challenges that the Program Indicator Screening Matrix (PRISM) framework was designed. The model is defined by six criteria – specificity, reliability, sensitivity, simplicity, utility, and affordability.
Relevance
Context
In an attempt to improve results-based management (RBM) in general and intervention data in particular, and to make foreign aid more focused and strategic with compelling evidence-based results, donors have increasingly teamed up and progressively introduced evaluation processes as an integral component of virtually every program. In the course of streamlining program implementation, funding agencies, in collaboration with recipient governments and other stakeholders, have also promoted the availability and utilization of strategic frameworks (SFs). The SFs most frequently used at present are the Logical Framework and the Strategic Objective framework; both serve as complementary elements of RBM.
A critical challenge that continues to confront many development-aid stakeholders is establishing equitable, standard, and inclusive strategies that encompass donors, national governments, implementing agencies, program managers, beneficiaries, and oversight systems. Reviews of existing dynamics remain mixed, and the need for a paradigm change can hardly be overemphasized. Informed decisions are only as useful and constructive as the trustworthiness of the underlying data.
Objective of PRISM
The general objective of the model is to strengthen the knowledge of implementing agencies, program managers, and other key stakeholders in LMICs, emphasizing sustainable engagement by mitigating indicator redundancies and optimizing results in program management.
Framework and Methodology
The model comprises a set of deterministic criteria applied simultaneously to identify the most effective set of indicators in any thematic program area, while mitigating many program-management nuisances by making indicators and data more trustworthy. The criteria are specificity, reliability, sensitivity, simplicity, utility, and affordability. Experts conduct the assessment: working groups of experts use an analytical approach akin to the Delphi methodology. During the assessment, each indicator is assigned a binary outcome (0, 1) for each criterion based on its performance against that criterion. The resulting composite scores are then evaluated against a “gold standard”, or target, established a priori by the experts.
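The per-criterion binary scoring and composite comparison against the a-priori target can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the indicator names, ratings, and the threshold value of 5 are invented for the example; only the six criterion names and the 0/1 scoring come from the text.

```python
# Hypothetical sketch of the PRISM scoring step: each indicator receives a
# binary outcome (0/1) per criterion, the outcomes are summed into a
# composite score, and indicators meeting an a-priori target ("gold
# standard") are retained. Threshold and data below are illustrative.

CRITERIA = ["specificity", "reliability", "sensitivity",
            "simplicity", "utility", "affordability"]

def composite_score(ratings: dict) -> int:
    """Sum the binary (0/1) ratings across the six PRISM criteria."""
    return sum(int(ratings[c]) for c in CRITERIA)

def screen_indicators(assessments: dict, threshold: int = 5) -> list:
    """Retain indicators whose composite score meets the a-priori target.

    `assessments` maps indicator name -> {criterion: 0 or 1}.
    `threshold` stands in for the expert-set gold standard (illustrative).
    """
    return [name for name, ratings in assessments.items()
            if composite_score(ratings) >= threshold]

# Example: two hypothetical indicators rated by one expert team
assessments = {
    "ANC coverage": {"specificity": 1, "reliability": 1, "sensitivity": 1,
                     "simplicity": 1, "utility": 1, "affordability": 0},
    "Staff morale": {"specificity": 0, "reliability": 0, "sensitivity": 1,
                     "simplicity": 1, "utility": 1, "affordability": 1},
}
print(screen_indicators(assessments))  # composite 5 passes, composite 4 does not
```

In practice each expert in a working group would produce such a rating sheet, and the Delphi-style rounds would iterate until the group converges.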
As a quality-control measure, all teams’ outcomes are finally evaluated on the degree of intra-team and inter-team concordance; only after this concordance is established are the final indicators selected. The model is thematically generic, with an inclusive target audience, and has been successfully tested in Africa, Pacific Island Countries, and the USA.
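The concordance check described above can be sketched as a simple proportion-of-agreement measure between teams' binary decisions. This is an assumption-laden illustration: the text does not specify how concordance is computed or what cut-off is required, so the agreement metric, the 0.8 cut-off, and all data below are hypothetical.

```python
# Hypothetical sketch of the inter-team concordance check: compare two
# teams' binary pass/fail decisions per indicator and compute the
# proportion of agreement. The metric and the 0.8 cut-off are assumptions;
# the source does not define a numeric concordance rule.

def concordance(team_a: dict, team_b: dict) -> float:
    """Proportion of shared indicators on which both teams' decisions agree."""
    shared = team_a.keys() & team_b.keys()
    agree = sum(team_a[k] == team_b[k] for k in shared)
    return agree / len(shared)

# Illustrative decisions (1 = retain indicator, 0 = drop it)
team_a = {"ANC coverage": 1, "Staff morale": 0, "Stock-outs": 1}
team_b = {"ANC coverage": 1, "Staff morale": 1, "Stock-outs": 1}

if concordance(team_a, team_b) >= 0.8:  # illustrative cut-off
    print("Concordance sufficient; proceed to final selection")
else:
    print("Reconvene teams to reconcile divergent ratings")
```

A chance-corrected statistic such as Cohen's kappa could serve the same purpose when the teams' base rates of retention differ; raw agreement is used here only to keep the sketch minimal.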
(Key words: indicator screening matrix, results-based management, gold standard, concordance, binary outcome, composite score.)