What is the best evidence and how to find it
Why is research evidence better than expert opinion alone?
In a broad sense, research evidence can be any systematic observation made in order to establish facts and reach conclusions. Anything that does not fulfil this definition is typically classified as “expert opinion”, which draws on experience with patients, an understanding of biology, and knowledge of preclinical research and of the results of studies. Using expert opinion as the sole basis for decisions has proved problematic: in practice, doctors often introduce new treatments too quickly, before they have been shown to work, or are too slow to adopt proven treatments.
However, clinical experience is key to interpreting research evidence, applying it in practice, and formulating recommendations, for instance in the context of clinical guidelines. In other words, research evidence is necessary but not sufficient to make good health decisions.
Which studies are more reliable?
Not all evidence is equally reliable.
Any study, qualitative or quantitative, in which data are collected from individuals or groups of people is usually called a primary study. There are many types of primary study design, but for each type of health question there is one that provides the most reliable information.
For treatment decisions, there is a consensus that the most reliable primary study is the randomized controlled trial (RCT). In this type of study, patients are randomly assigned to have either the treatment being tested or a comparison treatment (sometimes called the control treatment). Random really means random. The decision to put someone into one group or another is made like tossing a coin: heads they go into one group, tails they go into the other.
The control treatment might be a different type of treatment, or a dummy treatment that shouldn't have any effect (a placebo). Researchers then compare the effects of the different treatments.
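To make the allocation step concrete, here is a minimal sketch in Python (not from the article; the participant identifiers are invented, and real trials use more rigorous, concealed randomization procedures):

```python
# Illustrative sketch: randomly allocating hypothetical participants
# to a new treatment or a control arm, the "coin toss" at the heart
# of a randomized controlled trial.
import random

participants = [f"participant_{i}" for i in range(1, 11)]  # made-up identifiers
allocation = {p: random.choice(["new treatment", "control"]) for p in participants}

for person, arm in allocation.items():
    print(person, "->", arm)
```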
Large randomized trials are expensive and take time. In addition, it may sometimes be unethical to undertake a study in which some people are randomly assigned not to receive a treatment. For example, it wouldn't be right to give oxygen to some children having an asthma attack and not give it to others. In cases like this, other primary study designs may be the best choice.
Laboratory studies are another type of research. Newspapers often carry stories of studies showing how a drug cured cancer in mice. But just because a treatment works in animals in laboratory experiments doesn't mean it will work in humans. In fact, most drugs that have been shown to cure cancer in mice do not work in people.
Sometimes, though rarely, we cannot base our health decisions on the results of studies at all. Sometimes the research hasn't been done because doctors are used to treating a condition in a way that seems to work. This is often true of treatments for broken bones and of operations. But just because there's no research on a treatment doesn't mean it doesn't work. It just means that no one can say for sure.
Why we shouldn’t read studies
An enormous amount of effort is required to identify and summarise everything we know about any given health intervention. The amount of data has soared dramatically. A conservative estimate is that there are more than 35,000 medical journals and almost 20 million research articles published every year. What is more, up to half of the existing data might be unpublished.
How can anyone keep up with all this? And how can you tell if the research is good or not? Each primary study is only one piece of a jigsaw that may take years to finish. Rarely does any one piece of research answer either a doctor's or a patient's questions.
Even though reading large numbers of studies is impractical, high-quality primary studies, especially RCTs, constitute the foundation of what we know, and they are the best way of advancing that knowledge. Any effort to support or promote the conduct of sound, transparent, and independent trials that are fully and clearly published is worth endorsing. A prominent project in this regard is the AllTrials initiative.
Why we should read systematic reviews
Most of the time a single study doesn't tell us enough. The best answers are found by combining the results of many studies.
A systematic review is a type of research that looks at the results from all of the good-quality studies addressing a question. It puts the results of these individual studies together into one summary, giving an estimate of a treatment's risks and benefits. Sometimes these reviews include a statistical analysis, called a meta-analysis, which combines the results of several studies to give an overall estimate of the treatment effect.
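As a rough illustration of what a meta-analysis does, the following sketch pools three invented trial results with a simple fixed-effect, inverse-variance model; real reviews use dedicated software and often more sophisticated (e.g. random-effects) methods:

```python
# Illustrative sketch: fixed-effect, inverse-variance meta-analysis of
# hypothetical log risk ratios from three made-up trials.
import math

# (log risk ratio, standard error) for three hypothetical trials
studies = [(-0.22, 0.10), (-0.15, 0.08), (-0.30, 0.15)]

weights = [1 / se**2 for _, se in studies]  # larger, more precise trials weigh more
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled risk ratio: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

The pooled estimate is more precise than any single trial's result, which is exactly why combining studies usually tells us more than reading one study alone.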
Systematic reviews are increasingly being used for decision-making because they reduce the probability of being misled by looking at only one piece of the jigsaw. By being systematic, they are also more transparent, and they have become the gold standard approach to synthesizing the ever-expanding and conflicting biomedical literature.
Systematic reviews are not foolproof. Their findings are only as good as the studies that they include and the methods they employ. But the best reviews clearly state whether the studies they include are good quality or not.
Three reasons why we shouldn’t read (most) systematic reviews
First, systematic reviews have proliferated over time: from 11 per day in 2010 [1], they skyrocketed to 40 or more per day in 2015. [2] Some have described this output as having reached epidemic proportions, with the large majority of systematic reviews and meta-analyses being unnecessary, misleading, and/or conflicted. [3][4] So finding more than one systematic review for a question is the rule rather than the exception, and it is not unusual to find several dozen for the hottest questions.
Second, most systematic reviews address a narrow question, so it is difficult to put them in the context of all of the available alternatives for an individual case. Reading multiple reviews to assess all of the alternatives is impractical, even more so if we consider that they are typically difficult to read for the average clinician, who needs to answer several questions each day. [5]
Third, systematic reviews do not tell us what to do, or what is advisable for a given patient or situation. Indeed, good systematic reviews explicitly avoid making recommendations.
So, even though systematic reviews play a key role in any evidence-based decision-making process, most of them are of low quality or outdated, and they rarely provide all the information needed to make decisions in the real world.
How can we find the best available evidence?
Considering the massive amount of information available, we can quickly discard periodically reviewing our favorite journals as a means of sourcing the best available evidence.
The traditional approach to searching for evidence has been to use major databases such as PubMed or EMBASE. These are comprehensive sources containing millions of relevant, but also irrelevant, articles. Even though in the past they were the preferred way of searching for evidence, information overload has made them impractical, and most clinicians would fail to find the best available evidence in this way, however hard they tried.
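To get a feel for the scale of what such databases return, here is a minimal sketch that queries PubMed through the public NCBI E-utilities service (the search string is only an example, and network access is assumed); even a filtered query typically yields far more records than anyone could read:

```python
# Illustrative sketch: counting PubMed records for an example query
# using the public NCBI E-utilities "esearch" endpoint.
import json
import urllib.parse
import urllib.request

query = "childhood asthma AND randomized controlled trial[pt]"  # example query
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urllib.parse.urlencode({"db": "pubmed", "term": query,
                                 "retmode": "json", "retmax": 5}))

with urllib.request.urlopen(url) as response:
    result = json.load(response)["esearchresult"]

print("Total matching records:", result["count"])
print("First PubMed IDs:", result["idlist"])
```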
Another popular approach is simply searching in Google. Unfortunately, because of its lack of transparency, Google is not a reliable way to filter current best evidence from unsubstantiated or non-scientifically supervised sources. [6]
Three alternatives for accessing the best evidence
Alternative 1 - Pick the best systematic review
Mastering the art of identifying, appraising, and applying high-quality systematic reviews in practice can be very rewarding. It is not easy, but once mastered it gives a view of the bigger picture: of what is known, and what is not known.
The best single source of high-quality systematic reviews is the Cochrane Collaboration, an international organization named after the British epidemiologist Archie Cochrane.[4] Its reviews can be accessed at The Cochrane Library.
Unfortunately, Cochrane reviews do not cover every existing question, and they are not always up to date. Also, there may be non-Cochrane reviews that outperform the corresponding Cochrane reviews.
There are many resources that facilitate access to systematic reviews (and other types of evidence), such as the Trip database, PubMed Health, ACCESSSS, and Epistemonikos.
The Epistemonikos database is innovative both in searching multiple resources simultaneously and in indexing and interlinking relevant evidence. For example, Epistemonikos connects systematic reviews with their included studies, which allows systematic reviews to be clustered based on the primary studies they have in common. Epistemonikos is also unique in offering a multilingual user interface, multilingual search, and translation of abstracts into more than nine languages.[6] The database includes several tools for comparing systematic reviews, including the matrix of evidence, a dynamic table showing all of the systematic reviews on a question and the primary studies included in those reviews.
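The idea behind clustering reviews by their shared primary studies can be sketched as follows (this is illustrative only, not Epistemonikos code, and the review and trial identifiers are invented):

```python
# Illustrative sketch: grouping systematic reviews by the primary studies
# they share, the idea behind a "matrix of evidence".
from itertools import combinations

# Hypothetical mapping: systematic review -> primary studies it includes
included_studies = {
    "Review A": {"Trial 1", "Trial 2", "Trial 3"},
    "Review B": {"Trial 2", "Trial 3", "Trial 4"},
    "Review C": {"Trial 5"},
}

# Reviews sharing at least one primary study address overlapping questions
for r1, r2 in combinations(included_studies, 2):
    shared = included_studies[r1] & included_studies[r2]
    if shared:
        print(f"{r1} and {r2} share: {sorted(shared)}")

# A simple matrix view: which review ("x") includes which trial
trials = sorted(set().union(*included_studies.values()))
for review, studies in included_studies.items():
    row = ["x" if t in studies else "." for t in trials]
    print(review, " ".join(row))
```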
Additionally, Epistemonikos has partnered with Cochrane, and during 2017 a combined search of the Cochrane Library and Epistemonikos will be released.
Alternative 2 - Read trustworthy guidelines
Although systematic reviews can provide a synthesis of the benefits and harms of interventions, they do not integrate these factors with patients’ values and preferences, or with resource considerations, to provide a suggested course of action. Also, to fully address a question, clinicians would need to integrate the information from several systematic reviews covering all the relevant alternatives and outcomes. Most clinicians will likely prefer guidance rather than interpreting systematic reviews themselves.
Trustworthy guidelines, especially those developed with rigorous methods such as the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach, offer systematic and transparent guidance in moving from evidence to recommendations. [7]
Many online guideline websites promote themselves as “evidence-based,” but few have explicit links to research findings. [8] If a guideline does not have in-line references to relevant research findings, dismiss it. If it does, you can judge the strength of its commitment to evidence by checking whether its statements are based on high-quality or low-quality evidence, using alternative 1 described above.
Unfortunately, most guidelines have serious limitations or are outdated. [9][10] Locating and appraising the best guideline is also time-consuming, which is particularly challenging for generalists addressing questions across many different conditions and diseases.
Alternative 3 - Use point-of-care tools
Point-of-care tools, such as BMJ Best Practice, have been developed as a response to the genuine need to summarise the ever-expanding biomedical literature on an ever-increasing number of alternatives in order to make evidence-based decisions. In this competitive market, the more successful products have been those delivering innovative, user-friendly interfaces that improve the retrieval, synthesis, organization, and application of evidence-based content in many different areas of clinical practice.
However, the same difficulty of keeping up with new evidence without compromising quality that affects guidelines also affects point-of-care tools. Clinicians should become familiar with the point-of-care information resource they want to use or can access, and examine its in-line references to relevant research findings. They can then judge the strength of its commitment to evidence by checking whether its statements are based on high-quality or low-quality evidence, using alternative 1 described above. Comprehensiveness, use of the GRADE approach, and independence are other characteristics to bear in mind when selecting among point-of-care information summaries.
A comprehensive list of these resources can be found in a study by Kwag et al. [11]
The future
Finding the best available evidence is more challenging than it was at the dawn of the evidence-based medicine movement, and the main cause is the exponential growth of ‘evidence-based’ information in all of the flavors described above.
However, with a little patience and practice, the busy clinician will discover that evidence-based practice is far easier than it was 5 or 10 years ago. We are entering a stage where information flows between different systems, technology is being harnessed for good, and the different players are starting to form alliances.
Early adopters will surely enjoy the first experiments with living systematic reviews (high-quality, up-to-date online summaries of health research that are updated as new research becomes available), living guidelines, and rapid reviews tied to rapid recommendations, to mention just a few. [13][14][15]
It is unlikely that the picture of countless low-quality studies and reviews will change in the foreseeable future. However, it would not be a surprise if, in 3 to 5 years, separating the wheat from the chaff became trivial. Perhaps then the promise of evidence-based medicine, of more effective, safer medical interventions resulting in better health outcomes for patients, could be fulfilled.
Author: Gabriel Rada
Competing interests: Gabriel Rada is the co-founder and chairman of Epistemonikos database, part of the team that founded and maintains PDQ-Evidence, and an editor of the Cochrane Collaboration.
Related Blogs
Living Systematic Reviews: towards real-time evidence for health-care decision-making
References
- Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010 Sep 21;7(9):e1000326. doi: 10.1371/journal.pmed.1000326
- Epistemonikos database [filter= systematic review; year=2015]. A Free, Relational, Collaborative, Multilingual Database of Health Evidence. https://www.epistemonikos.org/en/search?&q=*&classification=systematic-review&year_start=2015&year_end=2015&fl=14542 Accessed 5 Jan 2017.
- Ioannidis JP. The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses. Milbank Q. 2016 Sep;94(3):485-514. doi: 10.1111/1468-0009.12210.
- Page MJ, Shamseer L, Altman DG, et al. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLoS Med. 2016;13(5):e1002028.
- Del Fiol G, Workman TE, Gorman PN. Clinical questions raised by clinicians at the point of care: a systematic review. JAMA Intern Med. 2014 May;174(5):710-8. doi: 10.1001/jamainternmed.2014.368.
- Agoritsas T, Vandvik P, Neumann I, Rochwerg B, Jaeschke R, Hayward R, et al. Chapter 5: finding current best evidence. In: Users' guides to the medical literature: a manual for evidence-based clinical practice. Chicago: McGraw-Hill, 2014.
- Guyatt GH, Oxman AD, Vist GE, et al. GRADE: An emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924-926. doi: 10.1136/bmj.39489.470347
- Neumann I, Santesso N, Akl EA, Rind DM, Vandvik PO, Alonso-Coello P, Agoritsas T, Mustafa RA, Alexander PE, Schünemann H, Guyatt GH. A guide for health professionals to interpret and use recommendations in guidelines developed with the GRADE approach. J Clin Epidemiol. 2016 Apr;72:45-55. doi: 10.1016/j.jclinepi.2015.11.017
- Alonso-Coello P, Irfan A, Solà I, Gich I, Delgado-Noguera M, Rigau D, Tort S, Bonfill X, Burgers J, Schunemann H. The quality of clinical practice guidelines over the last two decades: a systematic review of guideline appraisal studies. Qual Saf Health Care. 2010 Dec;19(6):e58. doi: 10.1136/qshc.2010.042077
- Martínez García L, Sanabria AJ, García Alvarez E, Trujillo-Martín MM, Etxeandia-Ikobaltzeta I, Kotzeva A, Rigau D, Louro-González A, Barajas-Nava L, Díaz Del Campo P, Estrada MD, Solà I, Gracia J, Salcedo-Fernandez F, Lawson J, Haynes RB, Alonso-Coello P; Updating Guidelines Working Group. The validity of recommendations from clinical guidelines: a survival analysis. CMAJ. 2014 Nov 4;186(16):1211-9. doi: 10.1503/cmaj.140547
- Kwag KH, González-Lorenzo M, Banzi R, Bonovas S, Moja L. Providing Doctors With High-Quality Information: An Updated Evaluation of Web-Based Point-of-Care Information Summaries. J Med Internet Res. 2016 Jan 19;18(1):e15. doi: 10.2196/jmir.5234
- Banzi R, Cinquini M, Liberati A, Moschetti I, Pecoraro V, Tagliabue L, Moja L. Speed of updating online evidence based point of care summaries: prospective cohort analysis. BMJ. 2011 Sep 23;343:d5856. doi: 10.1136/bmj.d5856
- Elliott JH, Turner T, Clavisi O, Thomas J, Higgins JP, Mavergames C, Gruen RL. Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap. PLoS Med. 2014 Feb 18;11(2):e1001603. doi: 10.1371/journal.pmed.1001603
- Vandvik PO, Brandt L, Alonso-Coello P, Treweek S, Akl EA, Kristiansen A, Fog-Heen A, Agoritsas T, Montori VM, Guyatt G. Creating clinical practice guidelines we can trust, use, and share: a new era is imminent. Chest. 2013 Aug;144(2):381-9. doi: 10.1378/chest.13-0746
- Vandvik PO, Otto CM, Siemieniuk RA, Bagur R, Guyatt GH, Lytvyn L, Whitlock R, Vartdal T, Brieger D, Aertgeerts B, Price S, Foroutan F, Shapiro M, Mertz R, Spencer FA. Transcatheter or surgical aortic valve replacement for patients with severe, symptomatic, aortic stenosis at low to intermediate surgical risk: a clinical practice guideline. BMJ. 2016 Sep 28;354:i5085. doi: 10.1136/bmj.i5085