Abstract
Background
Several countries attempt to improve health promotion by centrally rating the effectiveness of health promotion interventions. The Dutch Effectiveness Rating System (ERS) for health promotion interventions is an improvement-oriented approach in which multi-disciplinary expert committees rate available health promotion interventions as ‘theoretically sound’, ‘probably effective’ or ‘proven effective’. The aim of this study is to explore the functioning of the ERS and the perspectives of researchers, policy-makers and practitioners regarding its contribution to improvement.
Methods
We interviewed 53 selected key informants from research, policy and practice in the Netherlands and observed the assessment of 12 interventions.
Results
Between 2008 and 2012, a total of 94 interventions were submitted to the ERS, of which 23 were rejected, 58 were rated as ‘theoretically sound’, 10 were rated as ‘probably effective’ and 3 were rated as ‘proven effective’. According to participants, the ERS was intended to facilitate both the improvement of available interventions and the improvement of health promotion in practice. While participants expected that describing and rating interventions would promote learning and enhance the transferability of interventions, they were concerned that the ERS approach was not suitable for guiding intervention development and improving health promotion in practice. The expert committees that assessed the interventions struggled with a lack of norms for the relevance of effects and with questions about how effects should be studied and rated. Health promotion practitioners were concerned that the ERS neglected the local adaptation of interventions and did not encourage the improvement of aspects such as applicability and costs. Policy-makers and practitioners were worried that the lack of proven effectiveness legitimised cutbacks rather than supporting learning and the advancement of health promotion.
Conclusion
While measuring and centrally rating the effectiveness of interventions can be beneficial, the evidence-based-inspired ERS approach is too limited to guide both intervention development and the improvement of health promotion in practice. To better contribute to improving health promotion, a more reflexive and responsive guidance approach is required, one that stimulates the improvement of different intervention aspects, provides targeted recommendations to practitioners, and gives feedback to those who develop and rate interventions.
| Original language | English |
|---|---|
| Article number | 100 |
| Journal | Health Research Policy and Systems |
| Volume | 15 |
| Issue number | 100 |
| DOIs | |
| Publication status | Published - 2017 |
Research programs
- EMC NIHES-05-63-03 Competition