Guyatt GH, Ebrahim S, Alonso-Coello P, Johnston BC, Mathioudakis AG, Briel M, Mustafa RA, Sun X, Walter SD, Heels-Ansdell D, et al. GRADE guidelines 17: assessing the risk of bias associated with missing participant outcome data in a body of evidence. J.Clin.Epidemiol. Epub 2017 May 18. PMID: 28529188.

OBJECTIVE:
To provide GRADE guidance for assessing the risk of bias arising from missing participant outcome data across an entire body of evidence, in systematic reviews of both binary and continuous outcomes.
STUDY DESIGN:
Systematic survey of published methodological research, iterative discussions, testing in systematic reviews, and feedback from the GRADE Working Group.
RESULTS:
Approaches begin with a primary meta-analysis using a complete case analysis, followed by sensitivity meta-analyses that impute data, in each study, for those with missing data and then pool across studies. For binary outcomes, we suggest use of a "plausible worst case" in which review authors assume that those with missing data in treatment arms have proportionally higher event rates than those followed successfully. For continuous outcomes, imputed mean values come from other studies within the systematic review, and the standard deviation from the median of the standard deviations of the control arms of all studies.
CONCLUSIONS:
If the results of the primary meta-analysis are robust to the most extreme assumptions viewed as plausible, one does not rate down certainty in the evidence for risk of bias due to missing participant outcome data. If the results prove not robust to plausible assumptions, one would rate down certainty in the evidence for risk of bias.
Copyright © 2017. Published by Elsevier Inc.

DOI: https://doi.org/10.1016/j.jclinepi.2017.05.005.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28529188.
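
A minimal sketch of the sensitivity analysis the authors describe for binary outcomes: a complete-case pooled estimate followed by a "plausible worst case" re-analysis. The trial counts and the 1.5x event-rate inflation for treatment-arm dropouts are invented assumptions for illustration, not values from the paper.

```python
# Sketch: complete-case meta-analysis followed by a "plausible worst
# case" sensitivity analysis for missing binary outcome data.
# Trial counts and the 1.5x inflation factor are invented.
import math

# Per trial: events (e) and numbers followed (n) per arm, plus
# participants with missing data (m); t = treatment, c = control.
trials = [
    dict(et=30, nt=100, mt=10, ec=45, nc=100, mc=12),
    dict(et=12, nt=60,  mt=5,  ec=20, nc=58,  mc=4),
]

def pooled_rr(data):
    """Fixed-effect (inverse-variance) pooled risk ratio."""
    num = den = 0.0
    for d in data:
        log_rr = math.log((d["et"] / d["nt"]) / (d["ec"] / d["nc"]))
        var = 1 / d["et"] - 1 / d["nt"] + 1 / d["ec"] - 1 / d["nc"]
        num += log_rr / var
        den += 1 / var
    return math.exp(num / den)

def plausible_worst_case(d, factor=1.5):
    """Impute dropouts: treatment-arm dropouts get `factor` times the
    event rate of those followed in the same arm; control dropouts get
    the control follow-up rate."""
    rate_t, rate_c = d["et"] / d["nt"], d["ec"] / d["nc"]
    return dict(
        et=d["et"] + round(min(1.0, factor * rate_t) * d["mt"]),
        nt=d["nt"] + d["mt"],
        ec=d["ec"] + round(rate_c * d["mc"]),
        nc=d["nc"] + d["mc"],
    )

print("complete-case RR:", round(pooled_rr(trials), 3))
print("worst-case RR:   ",
      round(pooled_rr([plausible_worst_case(d) for d in trials]), 3))
```

If the two pooled estimates lead to the same conclusion, the result is robust to the imputation assumption and, per the guidance above, one would not rate down for risk of bias due to missing data.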

Hultcrantz M, Rind D, Akl EA, Treweek S, Mustafa RA, Iorio A, Alper BS, Meerpohl JJ, Murad MH, Ansari MT, et al. The GRADE Working Group clarifies the construct of certainty of evidence. J.Clin.Epidemiol. Epub 2017 May 18. PMID: 28529184.

OBJECTIVE:
To clarify the GRADE (Grading of Recommendations Assessment, Development and Evaluation) definition of certainty of evidence and suggest possible approaches to rating certainty of the evidence for systematic reviews, health technology assessments and guidelines.
STUDY DESIGN AND SETTING:
This work was carried out by a project group within the GRADE Working Group, through brainstorming and iterative refinement of ideas, using input from workshops, presentations, and discussions at GRADE Working Group meetings to produce this document, which constitutes official GRADE guidance.
RESULTS:
Certainty of evidence is best considered as the certainty that a true effect lies on one side of a specified threshold, or within a chosen range. We define possible approaches for choosing a threshold or range. For guidelines, what we call a fully contextualized approach requires simultaneously considering all critical outcomes and their relative value. Less contextualized approaches, more appropriate for systematic reviews and health technology assessments, include using prespecified ranges of magnitude of effect, e.g., ranges of what we might consider no effect, or trivial, small, moderate, or large effects.
CONCLUSION:
It is desirable for systematic review authors, guideline panelists, and health technology assessors to specify the threshold or ranges they are using when rating the certainty in evidence.
Copyright © 2017. Published by Elsevier Inc.

FREE FULL TEXT: http://ac.els-cdn.com/S089543561630703X/1-s2.0-S089543561630703X-main.pdf
DOI: https://doi.org/10.1016/j.jclinepi.2017.05.006.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28529184.
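
A toy sketch of the "less contextualized" approach described above: classifying a pooled effect and its confidence limits against prespecified ranges. The labels and cutoffs are hypothetical, not GRADE-endorsed values; in practice they must be justified per outcome.

```python
# Sketch: classify a pooled risk difference against prespecified
# effect ranges. Cutoffs below are hypothetical illustrations.
def classify(effect, lo, hi, cuts=(0.02, 0.05, 0.10, 0.20)):
    """Return the range label for the point estimate, and whether the
    95% CI lies entirely within one labeled range."""
    labels = ["no effect", "trivial", "small", "moderate", "large"]
    def label(x):
        for c, name in zip(cuts, labels):
            if abs(x) < c:
                return name
        return labels[-1]
    return label(effect), label(lo) == label(hi)

# Point estimate 0.07 is "small", but the CI spans trivial-to-moderate,
# so certainty that the effect lies within the "small" range is limited.
print(classify(0.07, 0.04, 0.11))   # -> ('small', False)
```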

Mayo-Wilson E, Fusco N, Li T, Hong H, Canner J, Dickersin K, MUDS investigators. Multiple outcomes and analyses in clinical trials create challenges for interpretation and research synthesis. J.Clin.Epidemiol. Epub 2017 May 18. PMID: 28529187.

OBJECTIVE:
To identify variations in outcomes and results across reports of randomized clinical trials (RCTs).
STUDY DESIGN AND SETTING:
Eligible RCTs examined gabapentin for neuropathic pain and quetiapine for bipolar depression, reported in public (e.g., journal articles) and non-public (e.g., clinical study reports) sources by 2015. We pre-specified outcome domains. From each source, we collected "outcomes" (i.e., domain, measure, metric, method of aggregation, and time-point); "treatment effect" (i.e., outcome plus the methods of analysis [e.g., how missing data were handled]); and results (i.e., numerical contrasts of treatment and comparison groups). We assessed whether results included sufficient information for meta-analysis.
RESULTS:
We found 21 gabapentin RCTs (68 public, 6 non-public reports) and seven quetiapine RCTs (46 public, 4 non-public reports). For four (gabapentin) and seven (quetiapine) pre-specified outcome domains, RCTs reported 214 and 81 outcomes by varying four elements. RCTs assessed 605 and 188 treatment effects by varying the analysis of those outcomes. RCTs reported 1,230 and 661 meta-analyzable results, 305 (25%) and 109 (16%) in public reports.
CONCLUSION:
RCTs included hundreds of outcomes and results; a small proportion were in public reports. Trialists and meta-analysts may cherry-pick what they report from multiple sources of RCT information.
Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

FREE FULL TEXT: http://www.sciencedirect.com/science/article/pii/S089543561730121X
DOI: https://doi.org/10.1016/j.jclinepi.2017.05.007.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28529187.
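
The multiplicity documented above follows from the authors' definition of an "outcome" as a five-element combination. A small sketch with invented values shows how quickly the combinations, and the treatment effects derived from them, multiply:

```python
# Sketch: an "outcome" is a 5-tuple (domain, measure, metric,
# aggregation, time-point); a "treatment effect" adds the method of
# analysis. All values below are invented for illustration.
from itertools import product

domains = ["pain"]
measures = ["VAS", "McGill"]
metrics = ["change from baseline", "endpoint value"]
aggregations = ["mean", "proportion improved >=30%"]
timepoints = ["week 4", "week 8"]
analyses = ["completers", "LOCF imputation"]

outcomes = set(product(domains, measures, metrics, aggregations, timepoints))
effects = set(product(outcomes, analyses))
print(len(outcomes), "outcomes,", len(effects), "treatment effects")
# -> 16 outcomes, 32 treatment effects from a single outcome domain
```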

Hall AT, Belanger SE, Guiney PD, Galay-Burgos M, Maack G, Stubblefield W, Martin O. New approach to weight-of-evidence assessment of ecotoxicological effects in regulatory decision-making. Integr.Environ.Assess.Manag. Epub 2017 Apr 6. PMID: 28383801.

Ecological risk assessments and risk management decisions are only as sound as the underlying information and the processes used to integrate it. It is important to develop transparent and reproducible procedures a priori to integrate often-heterogeneous evidence. Current weight-of-evidence (WoE) approaches for effects or hazard assessment tend to conflate aspects of the assessment of the quality of the data with the strength of the body of evidence as a whole. We take forward recent developments in the critical appraisal of the reliability and relevance of individual ecotoxicological studies, as part of the effects or hazard assessment of prospective risk assessments, and propose a streamlined WoE approach. The aim is to avoid overlap and double counting of the criteria used for reliability and relevance with those used in current WoE methods. The protection goals, problem formulation, and evaluation process need to be clarified at the outset. The data are first integrated according to lines of evidence (LoEs), typically mechanistic insights (e.g., cellular, subcellular, genomic), in vivo experiments, and higher-tiered field or observational studies. Data are then plotted on the basis of both relevance and reliability scores or categories. This graphical approach provides a means to visually assess and communicate the credibility (reliability and relevance of available individual studies), quantity, diversity, and consistency of the evidence. In addition, the external coherence of the body of evidence needs to be considered. The final step in the process is to derive an expression of the confidence in the conclusions of integrating the information, considering these five aspects in the context of remaining uncertainties. We suggest that this streamlined approach to WoE for effects or hazard characterization should facilitate reproducible and transparent assessments of data across different regulatory requirements.
© 2017 The Authors. Integrated Environmental Assessment and Management Published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC).

DOI: http://dx.doi.org/10.1002/ieam.1936.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28383801.
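
A brief sketch of the graphical step described above, plotting studies by reliability and relevance scores grouped by line of evidence. The studies, scores, and 0-100 scale are hypothetical assumptions, not the authors' scoring scheme.

```python
# Sketch: plot each study by reliability and relevance, grouped by
# line of evidence (LoE). All scores and studies are invented.
import matplotlib.pyplot as plt

studies = {  # LoE -> list of (reliability, relevance) scores, 0-100
    "mechanistic":   [(55, 40), (70, 35)],
    "in vivo":       [(80, 60), (65, 75), (90, 70)],
    "field studies": [(60, 90)],
}
fig, ax = plt.subplots()
for loe, points in studies.items():
    xs, ys = zip(*points)
    ax.scatter(xs, ys, label=loe)
ax.set_xlabel("reliability score")
ax.set_ylabel("relevance score")
ax.legend(title="line of evidence")
ax.set_title("WoE: credibility of individual studies")
plt.show()
```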

Irving M, Eramudugolla R, Cherbuin N, Anstey KJ. A Critical Review of Grading Systems: Implications for Public Health Policy. Eval.Health Prof. Epub 2016 May 10. PMID: 27166012.

Grading instruments are an important part of evidence-based medicine and are used to inform health policy and the development of clinical practice guidelines. They are extensively used in the development of clinical guidelines and the assessment of research publications, having particular impact on the health care and policy sectors. The positive effects of using grading instruments are, however, potentially undermined by their misuse and by a number of shortcomings. This review found eight key concerns about grading instruments: (1) lack of information on validity and reliability, (2) poor concurrent validity, (3) may not account for external validity, (4) may not be inherently logical, (5) susceptibility to subjectivity, (6) complex systems with inadequate instructions, (7) may be biased toward randomized controlled trials (RCTs), and (8) may not adequately address the variety of non-RCT designs. This narrative review concludes that these criticisms and domain-specific limitations need to be taken into account, to enable the use and development of the most appropriate grading instruments. Grading systems need to be matched both to the research question being asked and to the type of evidence being used.

DOI: https://doi.org/10.1177/0163278716645161.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27166012.

Austvoll-Dahlgren A, Semakula D, Nsangi A, Oxman AD, Chalmers I, Rosenbaum S, Guttersrud O, IHC Group. Measuring ability to assess claims about treatment effects: the development of the 'Claim Evaluation Tools'. BMJ Open. 2017 May 17;7(5):e013184. PMID: 28515181.

OBJECTIVES:
To describe the development of the Claim Evaluation Tools, a set of flexible items to measure people's ability to assess claims about treatment effects.
SETTING:
Methodologists and members of the community (including children) in Uganda, Rwanda, Kenya, Norway, the UK and Australia.
PARTICIPANTS:
In the iterative development of the items, we used purposeful sampling of people with training in research methodology, such as teachers of evidence-based medicine, as well as patients and members of the public from low-income and high-income countries. Development consisted of 4 processes: (1) determining the scope of the Claim Evaluation Tools and development of items; (2) expert item review and feedback (n=63); (3) cognitive interviews with children and adult end-users (n=109); and (4) piloting and administrative tests (n=956).
RESULTS:
The Claim Evaluation Tools database currently includes a battery of multiple-choice items. Each item begins with a scenario that is intended to be relevant across contexts and that can be used for children (from age 10 and above), adult members of the public and health professionals. People with expertise in research methods judged the items to have face validity, and end-users judged them relevant and acceptable in their settings. In response to feedback from methodologists and end-users, we simplified some text, explained terms where needed, and redesigned formats and instructions.
CONCLUSIONS:
The Claim Evaluation Tools database is a flexible resource from which researchers, teachers and others can design measurement instruments to meet their own requirements. These evaluation tools are being managed and made freely available for non-commercial use (on request) through Testing Treatments interactive (testingtreatments.org).
TRIAL REGISTRATION NUMBERS:
PACTR201606001679337 and PACTR201606001676150; Pre-results.
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

FREE FULL TEXT: http://bmjopen.bmj.com/content/7/5/e013184.long
DOI: http://dx.doi.org/10.1136/bmjopen-2016-013184.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28515181.

Clayton GL, Smith IL, Higgins JPT, Mihaylova B, Thorpe B, Cicero R, Lokuge K, Forman JR, Tierney JF, White IR, et al. The INVEST project: investigating the use of evidence synthesis in the design and analysis of clinical trials. Trials. 2017 May 15;18(1):219. PMID: 28506284.

BACKGROUND:
When designing and analysing clinical trials, using previous relevant information, perhaps in the form of evidence syntheses, can reduce research waste. We conducted the INVEST (INVestigating the use of Evidence Synthesis in the design and analysis of clinical Trials) survey to summarise the current use of evidence synthesis in trial design and analysis, to capture opinions of trialists and methodologists on such use, and to understand any barriers.
METHODS:
Our sampling frame was all delegates attending the International Clinical Trials Methodology Conference in November 2015. Respondents were asked to indicate (1) their views on the use of evidence synthesis in trial design and analysis, (2) their own use during the past 10 years and (3) the three greatest barriers to use in practice.
RESULTS:
Of approximately 638 attendees of the conference, 106 (17%) completed the survey, half of whom were statisticians. Support was generally high for using a description of previous evidence, a systematic review or a meta-analysis in trial design. Generally, respondents did not seem to be using evidence syntheses as often as they felt they should. For example, only 50% (42/84 relevant respondents) had used a meta-analysis to inform whether a trial is needed compared with 74% (62/84) indicating that this is desirable. Only 6% (5/81 relevant respondents) had used a value of information analysis to inform sample size calculations versus 22% (18/81) indicating support for this. Surprisingly large numbers of participants indicated support for, and previous use of, evidence syntheses in trial analysis. For example, 79% (79/100) of respondents indicated that external information about the treatment effect should be used to inform aspects of the analysis. The greatest perceived barrier to using evidence synthesis methods in trial design or analysis was time constraints, followed by a belief that the new trial was the first in the area.
CONCLUSIONS:
Evidence syntheses can be resource-intensive, but their use in informing the design, conduct and analysis of clinical trials is widely considered desirable. We advocate additional research, training and investment in resources dedicated to ways in which evidence syntheses can be undertaken more efficiently, offering the potential for cost savings in the long term.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5433067/pdf/13063_2017_Article_1955.pdf
DOI: https://doi.org/10.1186/s13063-017-1955-y.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28506284.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5433067/.

Cole AP, Trinh QD. Secondary data analysis: techniques for comparing interventions and their limitations. Curr.Opin.Urol. 2017 Jul;27(4):354-9. PMID: 28570290.

PURPOSE OF REVIEW:
Secondary data analysis has become increasingly common in health services research, specifically comparative effectiveness research. While a comprehensive study of the techniques and methods for secondary data analysis is a wide-ranging topic, we sought to perform a descriptive study of some key methodological issues related to secondary data analyses and to provide a basic summary of techniques to address them.
RECENT FINDINGS:
In this study, we first address common issues seen in the analysis of secondary datasets and the limitations of such datasets with respect to bias. We cover some strategies for handling missing or incomplete data and give a basic summary of three statistical approaches that can be used to address the problem of bias.
SUMMARY:
While it is unrealistic for surgeon scientists to aspire to the depth of knowledge of professional statisticians or data scientists, it is important for researchers and clinicians to understand some of the common pitfalls and issues when using secondary data to investigate clinical questions. Ultimately, the choice of analytical technique and the particular datasets used should be dictated by the research question and hypothesis being tested. Transparency about data handling and statistical techniques is a vital element of secondary data analysis.

DOI: https://dx.doi.org/10.1097/MOU.0000000000000407.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28570290.
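
The abstract does not name the three statistical approaches it summarizes. As a generic illustration of one technique commonly used against confounding in secondary data, here is a minimal inverse-probability-of-treatment-weighting sketch on simulated data; it should not be read as the paper's method.

```python
# Sketch: inverse probability of treatment weighting (IPTW), one
# common approach to confounding in non-randomized data. Data are
# simulated; this is not necessarily one of the paper's approaches.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(60, 10, n)                          # confounder
treated = rng.random(n) < 1 / (1 + np.exp(-(age - 60) / 10))
outcome = rng.random(n) < 0.1 + 0.003 * (age - 60) + 0.05 * treated

# Propensity model: P(treatment | confounders)
ps = LogisticRegression().fit(age.reshape(-1, 1), treated)
p = ps.predict_proba(age.reshape(-1, 1))[:, 1]
w = np.where(treated, 1 / p, 1 / (1 - p))            # IPT weights

# Weighted risk difference approximates the confounder-adjusted effect
rd = (np.average(outcome[treated], weights=w[treated])
      - np.average(outcome[~treated], weights=w[~treated]))
print(f"IPTW-adjusted risk difference: {rd:.3f}  (simulated truth 0.05)")
```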

Cooney L, Loke YK, Golder S, Kirkham J, Jorgensen A, Sinha I, Hawcutt D. Overview of systematic reviews of therapeutic ranges: methodologies and recommendations for practice. BMC Med.Res.Methodol. 2017 Jun 2;17(1):84. PMID: 28577540.

BACKGROUND:
Many medicines are dosed to achieve a particular therapeutic range, and monitored using therapeutic drug monitoring (TDM). The evidence base for a therapeutic range can be evaluated using systematic reviews, to ensure it continues to reflect current indications, doses, routes and formulations, as well as updated adverse effect data. There is no consensus on the optimal methodology for systematic reviews of therapeutic ranges.
METHODS:
An overview of systematic reviews of therapeutic ranges was undertaken. The following databases were used: the Cochrane Database of Systematic Reviews (CDSR), the Database of Abstracts of Reviews of Effects (DARE) and MEDLINE. The published methodologies used when systematically reviewing the therapeutic range of a drug were analyzed. Step-by-step recommendations to optimize such systematic reviews are proposed.
RESULTS:
Ten systematic reviews investigating the correlation between serum concentrations and clinical outcomes, encompassing a variety of medicines and indications, were assessed. There were significant variations in the methodologies used, including the search terms, data extraction methods, assessments of bias, and statistical analyses undertaken. Therapeutic ranges should be population and indication specific and based on clinically relevant outcomes. Recommendations for future systematic reviews have been developed based on these findings.
CONCLUSION:
Evidence-based therapeutic ranges have the potential to improve TDM practice. Current systematic reviews investigating therapeutic ranges have highly variable methodologies, and there is no consensus on best practice for undertaking systematic reviews in this field. These recommendations meet a need not addressed by standard protocols.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5455119/pdf/12874_2017_Article_363.pdf
DOI: https://dx.doi.org/10.1186/s12874-017-0363-z.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28577540.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5455119.

Ebell MH, Sokol R, Lee A, Simons C, Early J. How good is the evidence to support primary care practice? Evid Based.Med. 2017 Jun;22(3):88-92. PMID: 28554944.

Our goal was to determine the extent to which recommendations for primary care practice are informed by high-quality research-based evidence, and the extent to which they are based on evidence of improved health outcomes (patient-oriented evidence). As a substrate for study, we used Essential Evidence, an online, evidence-based medical reference for generalists. Each of the 721 chapters makes overall recommendations for practice that are graded A, B or C using the Strength of Recommendations Taxonomy (SORT). SORT A represents consistent and good-quality patient-oriented evidence; SORT B is inconsistent or limited-quality patient-oriented evidence; and SORT C is expert opinion, usual practice or recommendations relying on surrogate or intermediate outcomes. Pairs of researchers abstracted the evidence ratings for each chapter in tandem, with discrepancies resolved by the lead author. Of 3251 overall recommendations, 18% were graded 'A', 34% were 'B' and 49% were 'C'. Clinical categories with the most 'A' recommendations were pregnancy and childbirth, cardiovascular, and psychiatric; those with the fewest were haematological, musculoskeletal and rheumatological, and poisoning and toxicity. 'A'-level recommendations were most common for therapy and least common for diagnosis. Only 51% of recommendations are based on studies reporting patient-oriented outcomes, such as morbidity, mortality, quality of life or symptom reduction. In conclusion, approximately half of the recommendations for primary care practice are based on patient-oriented evidence, but only 18% are based on patient-oriented evidence from consistent, high-quality studies.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

DOI: https://dx.doi.org/10.1136/ebmed-2017-110704.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28554944.

Fava GA. Evidence-based medicine was bound to fail: a report to Alvan Feinstein. J.Clin.Epidemiol. 2017 Apr;84:3-7. PMID: 28532614.

John Ioannidis has provided a lucid account, in the form of a report to David Sackett, of how evidence-based medicine (EBM) was hijacked to serve vested interests: major randomized controlled trials are largely done by and for the benefit of industry; meta-analyses and guidelines are flooded with conflicts of interest; and national and federal research funds are unable to address basic clinical questions. Nonetheless, EBM would remain a worthwhile goal. In this paper, in the form of a report to Alvan Feinstein, it is argued that current developments were largely predictable. EBM certainly made an important contribution to questioning unsubstantiated therapeutic claims. The time has come, however, to become aware of its considerable limitations, including overall reductionism and insufficient consideration of problems related to financial conflicts of interest. EBM does not represent the scientific approach to medicine: it is only a restrictive interpretation of the scientific approach to clinical practice. EBM drives the prescribing clinician to an overestimated consideration of potential benefits, paying little attention to the likelihood of responsiveness and to potential vulnerabilities in relation to the adverse effects of treatment. It is time to replace the fashionable popularity of a strategy developed outside of clinical medicine with models and research based on the insights of clinical judgment and the patient-doctor interaction, as Feinstein had outlined.

DOI: https://doi.org/10.1016/j.jclinepi.2017.01.012.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28532614.

Gómez-García F, Ruano J, Aguilar-Luque M, Gay-Mimbrera J, Maestre-Lopez B, Sanz-Cabanillas JL, Carmona-Fernández PJ, González-Padilla M, Vélez García-Nieto A, Isla-Tejera B. Systematic reviews and meta-analyses on psoriasis: role of funding sources, conflict of interest and bibliometric indices as predictors of methodological quality. Br.J.Dermatol. 2017 Jun;176(6):1633-44. PMID: 28192600.

BACKGROUND:
The quality of systematic reviews and meta-analyses on psoriasis, a chronic inflammatory skin disease that severely impairs quality of life and is associated with high costs, remains unknown.
OBJECTIVES:
To assess the methodological quality of systematic reviews published on psoriasis.
METHODS:
After a comprehensive search in MEDLINE, Embase and the Cochrane Database (PROSPERO: CRD42016041611), the quality of studies was assessed by two raters using the Assessment of Multiple Systematic Reviews (AMSTAR) tool. Article metadata and journal-related bibliometric indices were also obtained. Systematic reviews were classified as low (0-4), moderate (5-8) or high (9-11) quality. A prediction model for methodological quality was fitted using principal component and multivariate ordinal logistic regression analyses.
RESULTS:
We classified 220 studies as high (17·2%), moderate (55·0%) or low (27·8%) quality. Lower compliance rates were found for AMSTAR question (Q)5 (list of studies provided, 11·4%), Q10 (publication bias assessed, 27·7%), Q4 (status of publication included, 39·5%) and Q1 (a priori design provided, 40·9%). Factors such as meta-analysis inclusion [odds ratio (OR) 6·22; 95% confidence interval (CI) 2·78-14·86], funding by academic institutions (OR 2·90, 95% CI 1·11-7·89), Article Influence score (OR 2·14, 95% CI 1·05-6·67), 5-year impact factor (OR 1·34, 95% CI 1·02-1·40) and article page count (OR 1·08, 95% CI 1·02-1·15) significantly predicted higher quality. A high number of authors with a conflict of interest (OR 0·90, 95% CI 0·82-0·99) was significantly associated with lower quality.
CONCLUSIONS:
The methodological quality of systematic reviews published about psoriasis remains suboptimal. The type of funding sources and author conflicts may compromise study quality, increasing the risk of bias.
© 2017 British Association of Dermatologists.

DOI: http://dx.doi.org/10.1111/bjd.15380.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28192600.
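
A sketch in the spirit of the authors' prediction model: an ordinal logistic regression of quality category (low/moderate/high) on study-level predictors. The data are simulated and statsmodels' OrderedModel is just one way to fit such a model; the predictors and coefficients are illustrative assumptions.

```python
# Sketch: ordinal logistic regression of review quality category on
# two study-level predictors. All data below are simulated.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 220
df = pd.DataFrame({
    "meta_analysis": rng.integers(0, 2, n),   # includes a MA (0/1)
    "pages": rng.integers(4, 30, n),          # article page count
})
# Simulate quality as a latent score cut into ordered categories
latent = 1.8 * df["meta_analysis"] + 0.08 * df["pages"] + rng.logistic(size=n)
df["quality"] = pd.cut(latent, [-np.inf, 1, 3, np.inf],
                       labels=["low", "moderate", "high"], ordered=True)

model = OrderedModel(df["quality"], df[["meta_analysis", "pages"]],
                     distr="logit")
res = model.fit(method="bfgs", disp=False)
print(np.exp(res.params[:2]))   # odds ratios for the two predictors
```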

Guyatt G. EBM has not only called out the problems but offered solutions. J.Clin.Epidemiol. 2017 Apr;84:8-10. PMID: 28532615.

[First paragraph]

As with many critiques of evidence-based medicine (EBM), Dr. Fava has been selective and superficial in his reading of the EBM literature. I will point out a number of examples of unfortunate oversights and misinterpretations.

DOI: https://doi.org/10.1016/j.jclinepi.2017.02.004.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28532615.

Heneghan C, Goldacre B, Mahtani KR. Why clinical trial outcomes fail to translate into benefits for patients. Trials. 2017 Mar 14;18(1):122. PMID: 28288676.

Clinical research should ultimately improve patient care. For this to be possible, trials must evaluate outcomes that genuinely reflect real-world settings and concerns. However, many trials continue to measure and report outcomes that fall short of this clear requirement. We highlight problems with trial outcomes that make evidence difficult or impossible to interpret and that undermine the translation of research into practice and policy. These complex issues include the use of surrogate, composite and subjective endpoints; a failure to take account of patients' perspectives when designing research outcomes; publication and other outcome reporting biases, including the under-reporting of adverse events; the reporting of relative measures at the expense of more informative absolute outcomes; misleading reporting; multiplicity of outcomes; and a lack of core outcome sets. Trial outcomes can be developed with patients in mind, however, and can be reported completely, transparently and competently. Clinicians, patients, researchers and those who pay for health services are entitled to demand reliable evidence demonstrating whether interventions improve patient-relevant clinical outcomes.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5348914/pdf/13063_2017_Article_1870.pdf
DOI: http://dx.doi.org/10.1186/s13063-017-1870-2.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28288676.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5348914.
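
A short worked example of one point raised above, that relative measures can mislead without the corresponding absolute outcomes. All numbers are hypothetical: the same 25% relative risk reduction implies very different absolute benefit depending on baseline risk.

```python
# Worked example: relative vs. absolute effect measures.
# Baseline risks and the relative risk are invented for illustration.
def absolute_view(baseline_risk, relative_risk):
    treated_risk = baseline_risk * relative_risk
    arr = baseline_risk - treated_risk        # absolute risk reduction
    nnt = 1 / arr                             # number needed to treat
    return arr, nnt

for baseline in (0.20, 0.01):                 # high- vs low-risk groups
    arr, nnt = absolute_view(baseline, relative_risk=0.75)
    print(f"baseline {baseline:.0%}: RRR 25%, ARR {arr:.2%}, NNT {nnt:.0f}")
# baseline 20%: ARR 5.00%, NNT 20
# baseline 1%:  ARR 0.25%, NNT 400
```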

Horwitz RI, Singer BH. Why evidence-based medicine failed in patient care and medicine-based evidence will succeed. J.Clin.Epidemiol. 2017 Apr;84:14-7. PMID: 28532612.

Evidence-based medicine (EBM) has succeeded in strengthening the evidence base for population medicine. Where EBM has failed is in answering the practicing doctor's question of what a likely outcome would be when a given treatment is administered to a particular patient with her own distinctive biological and biographical (life experience) profile. We propose Medicine-based evidence (MBE), based on the profiles of individual patients, as the evidence base for individualized or personalized medicine. MBE will build an archive of patient profiles using data from all study types and data sources, and will include both clinical and socio-behavioral information. The clinician seeking guidance for the management of an individual patient will start with the patient's longitudinal profile and find approximate matches in the archive that describe how similar patients responded to a contemplated treatment and to alternative treatments.
Copyright © 2017 Elsevier Inc. All rights reserved.

DOI: https://doi.org/10.1016/j.jclinepi.2017.02.003.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28532612.
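
A minimal sketch of the "approximate matches" idea above, using a k-nearest-neighbour search over a simulated archive. The features, scaling, and neighbourhood size are illustrative assumptions, not the authors' specification.

```python
# Sketch: find archive patients most similar to a new patient and
# summarize their observed responses. All data are simulated.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Archive profiles: [age, systolic BP, symptom score] + observed response
archive = rng.normal([65, 140, 5], [10, 15, 2], size=(500, 3))
response = rng.random(500) < 0.4

scaler = StandardScaler().fit(archive)
nn = NearestNeighbors(n_neighbors=20).fit(scaler.transform(archive))

new_patient = np.array([[72, 155, 7]])
_, idx = nn.kneighbors(scaler.transform(new_patient))
print(f"response rate among 20 most similar patients: "
      f"{response[idx[0]].mean():.0%}")
```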

Knottnerus JA, Tugwell P. Evidence-based medicine: achievements and prospects. J.Clin.Epidemiol. 2017 Apr;84:1-2. PMID: 28532610.
DOI: https://doi.org/10.1016/j.jclinepi.2017.02.006.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28532610.

Lapkin S, Stephenson M. Not all measurement instruments are created equal. JBI Database System Rev.Implement Rep. 2017 May;15(5):1220-1. PMID: 28498160.

[First paragraph]

Systematic reviews synthesize and summarize existing research and are considered to be the highest level of research evidence (1). Rigorous research design and the use of validated measurement instruments are fundamental requirements for establishing strong evidence. Current efforts related to the appraisal of evidence of effectiveness lean towards assessment of research design, with seemingly little focus on the evaluation of the validity and reliability of the measurement instruments used to gather the data. In fact, a significant number of relevant and well-conducted systematic reviews report contradictory or inconclusive findings (2). While these reviews may make a contribution to the body of evidence, the lack of strong conclusions can limit the translation of quality evidence into practice and policy, with potentially negative consequences for health care consumers. This is in part due to the fact that studies that fulfil eligibility criteria and are included in systematic reviews often utilize different, incomparable or non-validated measurement instruments.

DOI: https://doi.org/10.11124/JBISRIR-2017-003408.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28498160.

Mantziari S, Demartines N. Poor outcome reporting in medical research; building practice on spoilt grounds. Ann.Transl.Med. 2017 May;5(Suppl 1):S15. PMID: 28567397.

[First paragraph, reference html links removed]

Published medical literature forms an ever-expanding web of information, available to patients, clinicians and all healthcare providers in everyday practice. Data from some of these publications, especially from high-impact journals, may be used to guide therapeutic decisions. Moreover, these data will be used to design future studies, for example in sample size calculations based on previously published effect sizes (1), and to conduct systematic reviews and meta-analyses, which represent the highest level of evidence in medical research (2). Unfortunately, a growing volume of evidence suggests high rates of poor selection and reporting of research outcomes. The reasons are either inadequate formulation of the research question or deliberate selection of the outcomes presented depending on their observed results (outcome reporting bias); in either case, this is a worrisome phenomenon that needs to be properly addressed in order to protect the transparency and reliability of health care research.

Comment on:

Matthews JH, Bhanderi S, Chapman SJ, Nepogodiev D, Pinkney T, Bhangu A. Underreporting of Secondary Endpoints in Randomized Trials: Cross-sectional, Observational Study. Ann Surg. 2016 Dec;264(6):982-986. PubMed PMID: 26756751.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5440299/pdf/atm-05-S1-S15.pdf
DOI: http://dx.doi.org/10.21037/atm.2017.03.75.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28567397.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5440299.

Miller E. Connecting the Quality of Evidence to Clinical Decision-Making. Pain Manag.Nurs. 2017 Jun;18(3):121-2. PMID: 28528933.
DOI: http://dx.doi.org/10.1016/j.pmn.2017.04.008.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28528933.

Peters JPM, Stegeman I, Grolman W, Hooft L. The risk of bias in randomized controlled trials in otorhinolaryngology: hardly any improvement since 1950. BMC Ear Nose Throat Disord. 2017 Apr 18;17:3. PMID: 28428729.

BACKGROUND:
Randomized Controlled Trials (RCTs) represent the most valuable study design for evaluating the effectiveness of therapeutic interventions. However, flaws in the design, conduct, analysis, and reporting of RCTs can cause the effect of an intervention to be under- or overestimated. These biased RCTs may be included in literature reviews. To make the assessment of Risk of Bias (RoB) consistent and transparent, Cochrane published an RoB tool, with which RoB is assessed per item as "low", "unclear" or "high". Our objective was to provide an overview of RoB assessments of RCTs in otorhinolaryngology over time, and to identify items where improvement is still warranted.
METHODS:
We retrieved Cochrane reviews in the otorhinolaryngologic research field published in 2012 and 2013. We used all judgments per item as assessed by the review authors of the included RCTs. We evaluated the association between "low RoB" vs. "unclear and high RoB" and the year of publication (time strata: '<1990', '1990-1995', '1996-2000', '2001-2005', '2006-2012') per item using binary logistic regression.
RESULTS:
We extracted the RoB assessments from 42 Cochrane reviews that had included 402 RCTs (median number of RCTs per review: 7, range 1-40). In total, 2,356 items were assessed (mean number of assessed items per RCT: 5.9, standard deviation 1.8). On binary logistic regression, RCTs published in 2006-2012, compared with those published before 1990, were more likely to have a low RoB for two items: random sequence generation (odds ratio 6.09 [95% confidence interval: 3.11-11.95]) and allocation concealment (3.59 [1.87-6.90]). On all other items, there was no significant increase in the proportion of low RoB when comparing RCTs published in 2006-2012 with RCTs published before 1990.
CONCLUSION:
Although there were some positive developments in the RoB assessments in otorhinolaryngology, a further decrease in RoB is still warranted on several items. Currently, biased RCTs are included in Cochrane reviews and effects of therapeutic interventions can be under- or overestimated, with implications for clinical patient care.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5395869/pdf/12901_2017_Article_36.pdf
DOI: http://dx.doi.org/10.1186/s12901-017-0036-x.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28428729.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5395869.
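
A sketch of the analysis described in the methods: binary logistic regression of "low RoB" versus "unclear/high RoB" on publication-period strata, with pre-1990 as the reference category. The per-period counts are invented, so the odds ratios will not match those reported above.

```python
# Sketch: logistic regression of low RoB on publication period.
# Counts per period are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rows = []
for period, n, n_low in [("<1990", 80, 10), ("1990-1995", 60, 12),
                         ("1996-2000", 70, 18), ("2001-2005", 90, 30),
                         ("2006-2012", 102, 52)]:
    rows += [dict(period=period, low_rob=1)] * n_low
    rows += [dict(period=period, low_rob=0)] * (n - n_low)
df = pd.DataFrame(rows)

# '<1990' as the reference level for the period factor
fit = smf.logit("low_rob ~ C(period, Treatment('<1990'))", data=df).fit(disp=0)
odds_ratios = np.exp(fit.params)
print(odds_ratios.round(2))   # OR per period vs. pre-1990 reference
```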

Wieland LS, Berman BM, Altman DG, Barth J, Bouter LM, D'Adamo CR, Linde K, Moher D, Mullins CD, Treweek S, et al. Rating of Included Trials on the Efficacy-Effectiveness Spectrum: development of a new tool for systematic reviews. J.Clin.Epidemiol. 2017 Apr;84:95-104. PMID: 28188898.

BACKGROUND AND OBJECTIVE:
Randomized trials may be designed to provide evidence more strongly related to efficacy or effectiveness of an intervention. When systematic reviews are used to inform clinical or policy decisions, it is important to know the efficacy-effectiveness nature of the included trials. The objective of this study was to develop a tool to characterize randomized trials included in a systematic review on an efficacy-effectiveness continuum.
METHODS:
We extracted rating domains and descriptors from existing tools and used a modified Delphi procedure to condense the domains and develop a new tool. The feasibility and interrater reliability of the tool were tested on trials from four systematic reviews.
RESULTS:
The Rating of Included Trials on the Efficacy-Effectiveness Spectrum (RITES) tool rates clinical trials on a five-point Likert scale in four domains: (1) participant characteristics, (2) trial setting, (3) flexibility of interventions, and (4) clinical relevance of interventions. When RITES was piloted on trials from three reviews by unaffiliated raters, ratings were variable (intraclass correlation coefficient [ICC] 0.25-0.66 for the four domains), but when RITES was used on one review by the review authors with expertise on the topic, the ratings were consistent (ICCs > 0.80).
CONCLUSION:
RITES may help to characterize the efficacy-effectiveness nature of trials included in systematic reviews.
Copyright © 2017 Elsevier Inc. All rights reserved.

DOI: https://doi.org/10.1016/j.jclinepi.2017.01.010.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28188898.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5441969/.
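
A sketch of the interrater-reliability computation behind the ICCs reported above, on simulated five-point Likert ratings; pingouin's intraclass_corr is one convenient implementation, and the rating data and rater counts are invented.

```python
# Sketch: intraclass correlation for RITES-style domain ratings.
# Ratings are simulated; pingouin reports several ICC variants.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
n_trials, n_raters = 12, 3
true_scores = rng.integers(1, 6, n_trials)          # 5-point Likert
data = [dict(trial=t, rater=r,
             rating=int(np.clip(true_scores[t] + rng.integers(-1, 2), 1, 5)))
        for t in range(n_trials) for r in range(n_raters)]
df = pd.DataFrame(data)

icc = pg.intraclass_corr(data=df, targets="trial", raters="rater",
                         ratings="rating")
print(icc[["Type", "ICC"]])   # e.g., ICC2 for two-way random effects
```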