In 2008, Makerere University began a radical institutional change to bring together four schools under one College of Health Sciences. This paper’s objective is to demonstrate how the University’s leadership in training, research and services has impacted health in Uganda. Data were collected through analysis of key documents; systematic review of MakCHS publications and grants; surveys of patients, students and faculty; and key informant interviews of the College’s major stakeholders. The researchers found that the University inputs to the health sector include more than 600 health professionals graduating per year, many of whom assume leadership positions. University contributions to processes include strengthened approaches to engaging communities, standardised clinical care procedures and evidence-informed policy development. Outputs include the largest number of out-patients and in-patient admissions in Uganda. Outcomes include an expanded knowledge pool, and contributions to coverage of health services and healthy behaviours. Pilot projects have applied innovative demand and supply incentives to create a rapid increase in safe deliveries (three-fold increase after three months), and increased quality and use of HIV services with positive collateral improvements on non-HIV health services at community clinics.
Monitoring equity and research policy
This reader aims to encourage and deepen health policy analysis work in low- and middle-income countries (LMICs). It presents the range of health policy analysis studies that have been conducted in LMICs, highlights relevant theory, and points to new directions for such work. It also includes methodological and analytical pointers, and considers how to use health policy analysis prospectively to support health policy change. The Reader’s primary audience includes all those with an interest in understanding and influencing health policy change, including researchers and educators, as well as policy advocates, managers, and policy-makers. The Reader will also be of interest to those who have specialist policy studies or public administration backgrounds, and also to those with limited prior engagement with relevant social science perspectives.
For this report, the authors conducted interviews with senior health systems researchers, high-level policy makers and policy brokers in 26 low- and middle-income countries (LMICs) in order to map health systems research capacity, the health systems research undertaken and policy uptake of this research. They found that health systems research depended on a cluster of enabling factors: charismatic, strategically thinking individuals with a talent for networking, technical competence and scientific credibility; appropriate international alliances and trends; emergent local knowledge translation structures and increasing national ownership of research agendas; more and better training courses for researchers, as well as workshops to attune decision makers and researchers to each other's worlds and constraints; increasing trust between decision makers and researchers; a critical mass of health systems researchers and competing institutions 'able to deliver'; an entry point for health systems research in decision-making circles; sufficient domestic and international funding; and even political transitions, shock events or other windows of opportunity. However, country contexts diverge widely. In most of the LMICs studied, health systems research appears to be gaining momentum, and its potential for informing policy is increasing.
This report presents a historical reflection on research evaluation studies, their recurrent themes and challenges, and their implications. It critically examines studies of how scientific research drives innovation and socioeconomic benefits. First, it provides a predominantly descriptive historical overview of some landmark studies in the research evaluation field, from the late 1950s until the present day, and highlights some of their key contributions. Then, it reflects on the historical overview analytically, in order to discuss some of the methodological developments and recurrent themes in research evaluation studies. The report concludes by discussing the enduring challenges in research evaluation studies and their implications. The authors emphasise that this report does not address all of the key studies in the research evaluation field. The evaluation literature today is so extensive that a selective approach is necessary to focus on those studies that they feel provide the most valuable insights in the context of biomedical and health research evaluation.
The author notes that the key debate over which indicators to use to measure progress centres on complexity. Hundreds of different indicators are already being used to measure progress, and hundreds more have been proposed. The Stiglitz Commission proposes 'dashboards' of indicators, allowing different people and institutions to combine them in different ways to measure and track the things that matter most to them (mental health, carbon emissions, citizen participation and so on). But decision makers and ordinary people can only keep a limited number of indicators in their heads. Composite indicators could rapidly become a political football, as member states argue for the combination that puts their own performance in the best light and each successive government changes them, losing comparability both between countries and over time. The author's proposed answer is to combine the merits of simplicity and complexity by picking three to five standardised indicators, each at the centre of a cluster of disaggregated numbers that allows policy makers and researchers to drill down into the relationships between different aspects of people's lives (for example, between income inequality and child well-being).
Namibia faces a daunting array of mental health problems. However, there is no Namibian screening instrument for psychological distress. The paper reports on work to develop a Namibian version of the 28-item General Health Questionnaire (GHQ-28) with a consecutive sample of 159 Oshiwambo-speaking patients attending rural health clinics in the north of Namibia. The Oshiwambo version of the GHQ-28 is presented as a valid screening instrument for psychological distress in clinic attendees.
Research funding agencies continue to grapple with assessing research impact. This narrative literature review synthesized evidence on processes and conceptual models used for assessing policy and practice impacts of public health research. The review involved keyword searches of electronic databases, including MEDLINE, CINAHL, PsycINFO, EBM Reviews, and Google Scholar, in July/August 2013. The review included theoretical and opinion pieces, case studies, descriptive studies, frameworks, and systematic reviews describing processes and conceptual models for assessing research impact. A total of 16 different impact assessment models were identified, with the 'payback model' the most frequently used conceptual framework. Typically, impacts were assessed across multiple dimensions using mixed methodologies, including publication and citation analysis, interviews with principal investigators, peer assessment, case studies, and document analysis. The vast majority of studies relied on principal investigator interviews and/or peer review to assess impacts, rather than interviewing policymakers and end-users of research.
The Consultative Expert Working Group on Research and Development: Financing and Coordination (CEWG), which advises the World Health Organisation (WHO), has recommended that the May 2012 World Health Assembly adopt an international convention on research and development (R&D) that will bind member states to action and catalyse new knowledge for diseases that primarily affect the global poor but for which patents provide insufficient market incentives. In this editorial, the chairpersons of the expert group summarise the recommendations and report of the CEWG, which they say constitute a transformative change for achieving access to medicines. They argue that financial contributions should be determined based on the principle that both the costs and benefits of R&D should be shared. They recommend a role for WHO in the stronger coordination of R&D and suggest pooling financial investments to secure efficient allocation to where demands and opportunities are identified through active participation of developing countries. An international convention, the authors argue, is a way to secure a systemic and sustainable solution, since it creates a formalised platform through which countries can be held accountable in future.
The World Health Organisation (WHO) has taken an important step to reform the global system for supporting medical research and development (R&D). The organisation's governing body has just passed a hotly debated resolution to set up a new intergovernmental working group that will immediately start work to "draw up a global strategy and plan of action." This will include a new framework to support sustainable, needs-driven, essential R&D work on diseases that disproportionately affect developing countries.
Despite increasing investment in health research capacity strengthening efforts in low- and middle-income countries, published evidence to guide the systematic design and monitoring of such interventions is very limited. Systematic processes are important to underpin capacity strengthening interventions because they provide stepwise guidance and allow for continual improvement. The authors aimed to use evidence to inform the design of a replicable but flexible process to guide health research capacity strengthening that could be customised for different contexts, and to provide a framework for planning, collecting information, making decisions, and improving performance. They used peer-reviewed and grey literature to develop a five-step pathway for designing and evaluating health research capacity strengthening programmes, tested in a variety of contexts in Africa. The five steps are: i) defining the goal of the capacity strengthening effort; ii) describing the optimal capacity needed to achieve the goal; iii) determining the existing capacity gaps compared to the optimum; iv) devising an action plan to fill the gaps, with associated indicators of change; and v) adapting the plan and indicators as the programme matures. The five-step pathway starts with a clear goal and objectives, making explicit the capacity required to achieve the goal. Strategies for promoting sustainability are agreed with partners and incorporated from the outset. The pathway focuses not only on technical, managerial, and financial processes within organisations, but also on the individuals within organisations and the wider system within which organisations are coordinated, financed, and managed.