Monitoring equity and research policy

SUPPORT Tools for evidence-informed health Policymaking (STP) 18: Planning monitoring and evaluation of policies
Fretheim A, Oxman AD, Lavis JN and Lewin S: Health Research Policy and Systems 7(Suppl 1), 16 December 2009

The term monitoring is commonly used to describe the process of systematically collecting data to inform policymakers, managers and other stakeholders whether a new policy or programme is being implemented in accordance with their expectations. Indicators are used for monitoring purposes to judge, for example, if objectives are being achieved, or if allocated funds are being spent appropriately. Sometimes the term evaluation is used interchangeably with the term monitoring, but the former usually suggests a stronger focus on the achievement of results. When the term impact evaluation is used, this usually implies that there is a specific attempt to try to determine whether the observed changes in outcomes can be attributed to a particular policy or programme. This article suggests four questions that can be used to guide the monitoring and evaluation of policy or programme options: Is monitoring necessary? What should be measured? Should an impact evaluation be conducted? How should the impact evaluation be done?

SUPPORT Tools for evidence-informed health Policymaking (STP) 1: What is evidence-informed policymaking?
Oxman AD, Lavis JN, Lewin S and Fretheim A: Health Research Policy and Systems 7(Suppl 1), 16 December 2009

This article discusses three questions: What is evidence? What is the role of research evidence in informing health policy decisions? What is evidence-informed policymaking? Evidence-informed health policymaking is an approach to policy decisions that aims to ensure that decision making is well-informed by the best available research evidence. It is characterised by the systematic and transparent access to, and appraisal of, evidence as an input into the policymaking process. The overall process of policymaking is not assumed to be systematic and transparent. However, within the overall process of policymaking, systematic processes are used to ensure that relevant research is identified, appraised and used appropriately. These processes are transparent in order to ensure that others can examine what research evidence was used to inform policy decisions, as well as the judgements made about the evidence and its implications. Evidence-informed policymaking helps policymakers gain an understanding of these processes.

SUPPORT Tools for evidence-informed health Policymaking (STP) 2: Improving how your organisation supports the use of research evidence to inform policymaking
Oxman AD, Vandvik PO, Lavis JN, Fretheim A and Lewin S: Health Research Policy and Systems 7(Suppl 1), 16 December 2009

This article addresses ways of organising efforts to support evidence-informed health policymaking. Efforts to link research to action may include a range of activities related to the production of research that is both highly relevant to – and appropriately synthesised for – policymakers. Such activities may include a mix of efforts used to link research to action, as well as the evaluation of such efforts. The article suggests five questions that can help guide considerations of how to improve organisational arrangements to support the use of research evidence to inform health policy decision making: What is the capacity of your organisation to use research evidence to inform decision making? What strategies should be used to ensure collaboration between policymakers, researchers and stakeholders? What strategies should be used to ensure independence as well as the effective management of conflicts of interest? What strategies should be used to ensure the use of systematic and transparent methods for accessing, appraising and using research evidence? What strategies should be used to ensure adequate capacity to employ these methods?

SUPPORT Tools for evidence-informed health Policymaking (STP) 3: Setting priorities for supporting evidence-informed policymaking
Lavis JN, Oxman AD, Lewin S and Fretheim A: Health Research Policy and Systems 7(Suppl 1), 16 December 2009

Regardless of whether the support for evidence-informed policymaking is provided in-house or contracted out, or whether it is centralised or decentralised, resources always need to be used wisely in order to maximise their impact. Examples of undesirable practices in a priority-setting approach include timelines to support evidence-informed policymaking being negotiated on a case-by-case basis (instead of having clear norms about the level of support that can be provided for each timeline), implicit (rather than explicit) criteria for setting priorities, ad hoc (rather than systematic and explicit) priority-setting processes, and the absence of both a communications plan and a monitoring and evaluation plan. This article suggests questions that can guide those setting priorities: Does the approach to prioritisation make clear the timelines that have been set for addressing high-priority issues in different ways? Does the approach incorporate explicit criteria for determining priorities? Does the approach incorporate an explicit process for determining priorities? Does the approach incorporate a communications strategy and a monitoring and evaluation plan?

SUPPORT Tools for evidence-informed health Policymaking (STP) 7: Finding systematic reviews
Lavis JN, Oxman AD, Grimshaw J, Johansen M, Boyko JA, Lewin S and Fretheim A: Health Research Policy and Systems 7(Suppl 1), 16 December 2009

A number of constraints have hindered the wider use of systematic reviews in policymaking, including a lack of awareness of their value and a mismatch between the terms employed by policymakers when attempting to retrieve systematic reviews, and the terms used by the original authors of those reviews. The failure of review authors to highlight (or make obvious) the types of information that policymakers are seeking has also proved problematic. This article suggests three questions that can be used to guide those searching for systematic reviews, particularly reviews about the impacts of options being considered: Is a systematic review really what is needed? What databases and search strategies can be used to find relevant systematic reviews? What alternatives are available when no relevant review can be found?

SUPPORT Tools for evidence-informed health Policymaking (STP) 8: Deciding how much confidence to place in a systematic review
Lewin S, Oxman AD, Lavis JN and Fretheim A: Health Research Policy and Systems 7(Suppl 1), 16 December 2009

The reliability of systematic reviews of the effects of health interventions is variable. Consequently, policymakers and others need to assess how much confidence can be placed in such evidence. The use of systematic and transparent processes to determine such decisions can help to prevent the introduction of errors and bias in these judgements. This article suggests five questions that can be considered when deciding how much confidence to place in the findings of a systematic review of the effects of an intervention: Did the review explicitly address an appropriate policy or management question? Were appropriate criteria used when considering studies for the review? Was the search for relevant studies detailed and reasonably comprehensive? Were assessments of the studies' relevance to the review topic and of their risk of bias reproducible? Were the results similar from study to study?

SUPPORT Tools for evidence-informed health Policymaking (STP) 9: Assessing the applicability of the findings of a systematic review
Lavis JN, Oxman AD, Souza NM, Lewin S, Gruen R and Fretheim A: Health Research Policy and Systems 7(Suppl 1), 16 December 2009

A key challenge that policymakers and those supporting them must face is the need to understand whether research evidence about an option can be applied to their setting. Systematic reviews make this task easier by summarising the evidence from studies conducted in a variety of different settings. Many systematic reviews, however, do not provide adequate descriptions of the features of the actual settings in which the original studies were conducted. This article suggests questions to guide those assessing the applicability of the findings of a systematic review to a specific setting: Were the studies included in a systematic review conducted in the same setting or were the findings consistent across settings or time periods? Are there important differences in on-the-ground realities and constraints that might substantially alter the feasibility and acceptability of an option? Are there important differences in health system arrangements that may mean an option could not work in the same way? Are there important differences in the baseline conditions that might yield different absolute effects even if the relative effectiveness was the same? What insights can be drawn about options, implementation, and monitoring and evaluation?

A hundred indicators of well-being?
Green D: Oxfam, 30 October 2009

The author notes that the key debate over what indicators to use to measure progress seems to centre on complexity. Hundreds of different indicators are already being used to measure progress and hundreds more have been proposed. The Stiglitz Commission proposes ‘dashboards’ of indicators, allowing different people and institutions to combine them in different ways to measure and track the things that matter most to them (mental health, carbon emissions, citizen participation or whatever). But decision makers and ordinary people can only keep a limited number of indicators in their heads. Composite indicators could rapidly become a political football, as member states argue for the combination that puts their own performance in the best light, and each successive government changes them, meaning comparability is lost both between countries and across time. The answer is to combine the merits of simplicity and complexity by picking three to five standardised indicators, each of which would be at the centre of a cluster of disaggregated numbers allowing policy makers and researchers to drill down into the relationships between different aspects of people’s lives (for example between income inequality and child well-being).

Key cold chain for medical research
Bodibe K: Health-e News, 5 November 2009

South Africa’s first bio-bank, a cold storage facility where samples from clinical trials on HIV and other diseases can be stored for years to support future medical research, was launched in Johannesburg. Bio-banking is a novel concept on the African continent and South Africa is the first country to introduce it. ‘Bio-banking ensures that the integrity of the samples is kept so that when you do run the test, you’re able to get sense out of the result’, explained Jessica Trusler, medical director of Bio-analytical Research Corporation (BRAC) South Africa. Peter Cole, chief executive officer of BRAC, noted: ‘The ability to store samples long term, including the RNA and DNA of these infectious pathogens means that we can do things like look at resistance patterns to drugs, we can use the DNA in the future for vaccine development, we can store TB DNA looking at resistance patterns against the various drugs and the role of what we call MOTTS, the non-tuberculous organisms in the immuno-compromised patients. So, there’s a lot of unique stuff that we can do here.’

Mental health research priorities in low- and middle-income countries of Africa, Asia, Latin America and the Caribbean
Sharan P, Gallo C, Gureje O, Lamberte E, Mari JJ, Mazzotti G, Patel V, Swartz L, Olifson S, Levav I, de Francisco A, Saxena S and the Mental Health Research Mapping Project Group: British Journal of Psychiatry 195: 354–363, 2009

The aim of this study was to investigate research priorities in mental health among researchers and other stakeholders in low- and middle-income (LAMI) countries. A two-stage design was used that included identification, through literature searches and snowball technique, of researchers and stakeholders in 114 countries of Africa, Asia, Latin America and the Caribbean; and a mail survey on priorities in research. The study identified broad agreement between researchers and stakeholders and across regions regarding research priorities. Epidemiology (burden and risk factors), health systems and social science ranked highest for type of research. Researchers’ and stakeholders’ priorities were consistent with burden of disease estimates. However, suicide was underprioritised, compared with its burden. Researchers’ and stakeholders’ priorities were also largely congruent with the researchers’ projects. The results of this first survey of researchers and stakeholders regarding research priorities in mental health suggest that it should be possible to develop consensus at regional and international levels regarding the research agenda that is necessary to support health system objectives in LAMI countries.