Monitoring equity and research policy

SUPPORT Tools for evidence-informed health Policymaking (STP) 3: Setting priorities for supporting evidence-informed policymaking
Lavis JN, Oxman AD, Lewin S and Fretheim A: Health Research Policy and Systems 7(Suppl 1), 16 December 2009

Regardless of whether support for evidence-informed policymaking is provided in-house or contracted out, and whether it is centralised or decentralised, resources always need to be used wisely in order to maximise their impact. Examples of undesirable practices in a priority-setting approach include negotiating timelines for support on a case-by-case basis (instead of having clear norms about the level of support that can be provided for each timeline), implicit (rather than explicit) criteria for setting priorities, ad hoc (rather than systematic and explicit) priority-setting processes, and the absence of both a communications plan and a monitoring and evaluation plan. This article suggests questions that can guide those setting priorities: Does the approach to prioritisation make clear the timelines that have been set for addressing high-priority issues in different ways? Does the approach incorporate explicit criteria for determining priorities? Does the approach incorporate an explicit process for determining priorities? Does the approach incorporate a communications strategy and a monitoring and evaluation plan?

SUPPORT Tools for evidence-informed health Policymaking (STP) 7: Finding systematic reviews
Lavis JN, Oxman AD, Grimshaw J, Johansen M, Boyko JA, Lewin S and Fretheim A: Health Research Policy and Systems 7(Suppl 1), 16 December 2009

A number of constraints have hindered the wider use of systematic reviews in policymaking, including a lack of awareness of their value and a mismatch between the terms employed by policymakers when attempting to retrieve systematic reviews and the terms used by the original authors of those reviews. A further problem is the mismatch between the types of information that policymakers seek and the way in which review authors present, or fail to highlight, such information within systematic reviews. This article suggests three questions that can be used to guide those searching for systematic reviews, particularly reviews about the impacts of options being considered: Is a systematic review really what is needed? What databases and search strategies can be used to find relevant systematic reviews? What alternatives are available when no relevant review can be found?

SUPPORT Tools for evidence-informed health Policymaking (STP) 8: Deciding how much confidence to place in a systematic review
Lewin S, Oxman AD, Lavis JN and Fretheim A: Health Research Policy and Systems 7(Suppl 1), 16 December 2009

The reliability of systematic reviews of the effects of health interventions is variable. Consequently, policymakers and others need to assess how much confidence can be placed in such evidence. The use of systematic and transparent processes to determine such decisions can help to prevent the introduction of errors and bias in these judgements. This article suggests five questions that can be considered when deciding how much confidence to place in the findings of a systematic review of the effects of an intervention: Did the review explicitly address an appropriate policy or management question? Were appropriate criteria used when considering studies for the review? Was the search for relevant studies detailed and reasonably comprehensive? Were assessments of the studies' relevance to the review topic and of their risk of bias reproducible? Were the results similar from study to study?

SUPPORT Tools for evidence-informed health Policymaking (STP) 9: Assessing the applicability of the findings of a systematic review
Lavis JN, Oxman AD, Souza NM, Lewin S, Gruen R and Fretheim A: Health Research Policy and Systems 7(Suppl 1), 16 December 2009

A key challenge that policymakers and those supporting them must face is the need to understand whether research evidence about an option can be applied to their setting. Systematic reviews make this task easier by summarising the evidence from studies conducted in a variety of different settings. Many systematic reviews, however, do not provide adequate descriptions of the features of the actual settings in which the original studies were conducted. This article suggests questions to guide those assessing the applicability of the findings of a systematic review to a specific setting: Were the studies included in a systematic review conducted in the same setting or were the findings consistent across settings or time periods? Are there important differences in on-the-ground realities and constraints that might substantially alter the feasibility and acceptability of an option? Are there important differences in health system arrangements that may mean an option could not work in the same way? Are there important differences in the baseline conditions that might yield different absolute effects even if the relative effectiveness was the same? What insights can be drawn about options, implementation, and monitoring and evaluation?

A hundred indicators of well-being?
Green D: Oxfam, 30 October 2009

The author notes that the key debate over which indicators to use to measure progress seems to centre on complexity. Hundreds of different indicators are already being used to measure progress and hundreds more have been proposed. The Stiglitz Commission proposes ‘dashboards’ of indicators, allowing different people and institutions to combine them in different ways to measure and track the things that matter most to them (mental health, carbon emissions, citizen participation or whatever). But decision makers and ordinary people can only keep a limited number of indicators in their heads. Composite indicators, meanwhile, could rapidly become a political football: member states would argue for the combination that puts their own performance in the best light, and each successive government would change them, so that comparability is lost both between countries and across time. The answer, the author suggests, is to combine the merits of simplicity and complexity by picking three to five standardised indicators, each of which would sit at the centre of a cluster of disaggregated numbers allowing policy makers and researchers to drill down into the relationships between different aspects of people’s lives (for example, between income inequality and child well-being).

Key cold chain for medical research
Bodibe K: Health-e News, 5 November 2009

South Africa’s first bio-bank, a cold storage facility where samples from clinical trials of HIV and other diseases can be stored for years to support future medical research, was launched in Johannesburg. Bio-banking is a novel concept on the African continent and South Africa is the first country to introduce it. ‘Bio-banking ensures that the integrity of the samples is kept so that when you do run the test, you’re able to get sense out of the result’, explained Jessica Trusler, medical director of Bio-analytical Research Corporation (BARC) South Africa. Peter Cole, chief executive officer of BARC, added: ‘The ability to store samples long term, including the RNA and DNA of these infectious pathogens, means that we can do things like look at resistance patterns to drugs, we can use the DNA in the future for vaccine development, we can store TB DNA looking at resistance patterns against the various drugs and the role of what we call MOTTS, the non-tuberculous organisms in the immuno-compromised patients. So, there’s a lot of unique stuff that we can do here.’

Mental health research priorities in low- and middle-income countries of Africa, Asia, Latin America and the Caribbean
Sharan P, Gallo C, Gureje O, Lamberte E, Mari JJ, Mazzotti G, Patel V, Swartz L, Olifson S, Levav I, de Francisco A, Saxena S and the Mental Health Research Mapping Project Group: British Journal of Psychiatry 195: 354–363, 2009

The aim of this study was to investigate research priorities in mental health among researchers and other stakeholders in low- and middle-income (LAMI) countries. A two-stage design was used: researchers and stakeholders in 114 countries of Africa, Asia, Latin America and the Caribbean were identified through literature searches and a snowball technique, followed by a mail survey on research priorities. The study identified broad agreement between researchers and stakeholders, and across regions, regarding research priorities. Epidemiology (burden and risk factors), health systems and social science ranked highest for type of research. Researchers’ and stakeholders’ priorities were consistent with burden of disease estimates, although suicide was underprioritised compared with its burden. Their priorities were also largely congruent with the researchers’ projects. The results of this first survey of researchers and stakeholders on mental health research priorities suggest that it should be possible to develop consensus at regional and international levels regarding the research agenda needed to support health system objectives in LAMI countries.

Priority setting for health policy and systems research
Alliance for Health Policy and Systems Research Briefing Note 3: September 2009

The main pattern of research funding is driven by the interests of research funders, who are often external rather than domestic actors. When priority-setting processes do occur, they are typically disease-driven and lack a broader, more integrated systems-level perspective (for example, determining how research might address one or more health-system building blocks). As a result, there is rarely consensus on national evidence needs, few national research priorities are set, and research in low- to middle-income countries (LMICs) continues to follow the fleeting and shifting priorities of global funders. This brief discusses the fundamental concepts of priority-setting exercises; explores the priority-setting dynamic between the national and global levels; describes priority-setting exercises specific to health policy and systems research; and details the work of the Alliance in driving global priorities based on the evidence needs of LMIC policy-makers through a three-step approach. It concludes with recommendations for how researchers, LMIC policy-makers and the global community might increasingly promote, fund and convene priority-setting exercises in health policy and systems research.

Rethinking the conceptual terrain of AIDS scholarship: Lessons from comparing 27 years of AIDS and climate change research
Chazan M, Brklacich M and Whiteside A: Globalization and Health 5(12), 6 October 2009

In this conceptual article, the authors compare and contrast the evolution of climate change and AIDS research. They demonstrate how scholarship and response in these two seemingly disparate areas share certain important similarities, such as the ‘globalisation’ of discourses and the associated masking of uneven vulnerabilities, the tendency toward techno-fixes, and the polarisation of debates within these fields. They also examine key divergences, noting in particular that climate change research has tended to be more forward-looking and longer-term in focus than AIDS scholarship. Suggesting that AIDS scholars can learn from these key parallels and divergences, the paper offers four directions for advancing AIDS research: focusing more on the differentiation of risk and responsibility within and among AIDS epidemics; taking social justice approaches back on board; moving beyond polarised debates; and shifting focus from reactive to forward-looking and proactive approaches.

South Africa Survey 2008/2009
South African Institute of Race Relations: 2009

International comparisons show that the average South African can expect to live no longer than about 50 years. South Africa was one of only six countries in a group of 37 developed and developing countries whose life expectancy decreased between 1990 and 2007, falling from 62 years in 1990 to 50 years in 2007; only Zimbabwe had a worse trend. The statistics in this report show that, in 2009, the average life expectancy at birth for South Africans was 51 years. Between 2001 and 2006 life expectancy at birth was 51 years for males and 55 years for females, and it is expected to decrease between 2006 and 2011 to 48 years for males and 51 years for females. KwaZulu-Natal had the lowest life expectancy at birth in 2009 at 43 years, followed by the Free State and Mpumalanga at 47 years each. These three provinces also had some of the highest HIV prevalence rates, at 16%, 14% and 14% respectively. International comparisons also show that, in 2007, only some 27% of males and 33% of females in South Africa were expected to survive to age 65; of the 37 developing and developed countries compared, only Mozambique and Zimbabwe had lower survival rates.
