The Rise of Rapid Reviews
“Perfect is the enemy of good.” – Voltaire
Rapid reviews are increasingly being commissioned, used and written about. But why this relatively sudden interest? Putting it bluntly, it’s because the cornerstone of evidence synthesis, the systematic review, is becoming increasingly out of touch with the needs of today’s healthcare systems.
The journal Systematic Reviews recently featured an editorial, ‘All in the Family: systematic reviews, rapid reviews, scoping reviews, realist reviews, and more’. In it, the authors write: “It is our view that all of these new forms of reviews are related to systematic reviews, similar to the way that different biological Species within the same Family are related to each other.” Tantalizingly, they later raise the issue of the extinction of some methods. Is it a given that systematic reviews, as we know them, deserve to survive?
For many years, systematic reviews have been relatively free from evolutionary pressure. The majority of changes have been internally driven, frequently aimed at improving the methodological rigor of the product. But this focus on rigor has often been pursued in isolation from its effects on cost and timeliness. The reality is that systematic reviews are costly interventions, typically take over a year to produce and are, frankly, boring.
Rapid reviews fill an evolutionary gap long vacated by traditional systematic reviews, meeting the dual requirements of speed and low cost. If a review is needed in a month and on a modest budget, a systematic review isn’t worth considering. That scenario, the need for a rapid and low-cost evidence synthesis, is hardly atypical, and it is as a result of this that rapid reviews are on the rise.
But rapid reviews are an ungainly collection of different techniques, ranging from Trip’s 5-minute system to those taking 6 months or longer. There is no coherent language to describe them, leaving users to ask for ‘rapid reviews’ with little appreciation of the diverse methods available. Even more problematic is the lack of rigorous evaluation to underpin each separate method. Given the relative youth of the field, this is hardly surprising, and a number of groups are starting to put together research plans to better understand the promise and limits of the various rapid methods.
Assuming the research helps to qualify the usefulness of rapid review methods, they then need to be seen within the wider context of evidence synthesis. My own view is that there needs to be a more nuanced approach to reviews and syntheses, one that doesn’t start with the kneejerk ‘we need a systematic review’ with no questions asked. I’m increasingly drawn to the view that a rapid review should be the default opening method for those interested in a synthesis of the evidence. A rapid review allows for a quick overview of the existing evidence, and one would only proceed to a fuller review if certain criteria are met. For instance: are the outcomes useful, are the effect sizes likely to lead to change, and can a fuller review be produced within the timescale needed? Unless there is clear gain, it appears unethical to spend valuable resources on a product that is likely to be of little use.
Rapid versus systematic reviews is a battle for position, a battle for resources and a battle for influence. Some vested interests may rely on the status quo being maintained. Irrespective of this, the fight will ultimately be resolved by which method actually meets users’ needs. As Voltaire said, “perfect is the enemy of good”, and if the systematic review world unquestioningly pursues the notion of perfect methods, it’ll surely be the rapid methods that prevail.
Author: Jon Brassey
Jon Brassey is the founder and director of the EBM search engine the Trip Database. In addition, he is the lead for knowledge mobilization at Public Health Wales, an honorary fellow at the Centre for Evidence-Based Medicine, Oxford, and recently started the Rapid-Reviews.info website.