Forbes G, Loudon K, Treweek S, Taylor SJ, Eldridge S. Understanding the applicability of results from primary care trials: lessons learned from applying PRECIS-2. J.Clin.Epidemiol. Epub 2017 Jun 16. PMID: 28629699.

OBJECTIVE:
To compare two approaches for trial teams to apply PRECIS-2 to pragmatic trials: independent scoring and scoring following a group discussion.
STUDY DESIGN AND SETTING:
We recruited multidisciplinary teams who were conducting or had conducted trials in primary care in collaboration with the Pragmatic Clinical Trials Unit, Queen Mary University of London. Each team carried out two rounds of scoring on the 9 PRECIS-2 domains: first independently, using an online version of PRECIS-2, and second following a group discussion.
RESULTS:
Seven teams took part in the study. Prior to the discussion, within-team agreement in scores was generally poor, and not all raters were able to score all domains; agreement improved following the discussion. The PRECIS-2 wheels suggested that the trials were pragmatic, though some domains were more pragmatic than others.
CONCLUSION:
PRECIS-2 can facilitate information exchange within trial teams. To apply PRECIS-2 successfully, we recommend a discussion between those with detailed understanding of what usual care is for the intervention; of the trial's design, including operational and technical aspects; and of the PRECIS-2 domains. For some cluster randomised trials, greater insight may be gained by plotting two PRECIS-2 wheels: one at the individual participant level and one at the cluster level.
Copyright © 2017. Published by Elsevier Inc.

DOI: http://dx.doi.org/10.1016/j.jclinepi.2017.06.007.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28629699.

Rice M, Ali MU, Fitzpatrick-Lewis D, Kenny M, Raina P, Sherifali D. Updating Systematic Reviews with Simplified Search Strategies. J.Clin.Epidemiol. Epub 2017 Jun 15. PMID: 28625563.

OBJECTIVE:
To test the overall effectiveness of a simplified search strategy (SSS) for updating systematic reviews.
METHODS:
We identified nine systematic reviews undertaken by our research group for which both comprehensive and SSS updates were performed. Three performance measures were estimated: sensitivity, precision, and number needed to read (NNR).
RESULTS:
The update reference searches for all nine included systematic reviews identified a total of 55,099 citations that were screened, resulting in final inclusion of 163 RCTs. Compared with the reference searches, the SSS yielded 8,239 hits and had a median sensitivity of 83.3%, while precision and NNR were 4.5 times better. The SSS performed better for clinically focused topics, with a median sensitivity of 100% and precision and NNR 6 times better than the reference searches. For broader topics, the sensitivity of the SSS was 80%, while precision and NNR were 5.4 times better than the reference searches.
CONCLUSION:
The SSS performed well for clinically focused topics and, with a median sensitivity of 100%, could be a viable alternative to a conventional comprehensive search strategy for updating this type of systematic review, particularly given budget constraints and the volume of new literature being published. For broader topics, 80% sensitivity is likely to be considered too low for a systematic review update in most cases, although it might be acceptable when updating a scoping or rapid review.
Copyright © 2017 Elsevier Inc. All rights reserved.

DOI: http://dx.doi.org/10.1016/j.jclinepi.2017.06.005.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28625563.
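
The three performance measures above are simple ratios; a minimal sketch, with all counts invented rather than taken from the paper, might look like:

```python
# Hypothetical illustration of the three performance measures; the counts
# below are invented, not taken from the paper.

def search_metrics(hits, relevant_found, relevant_total):
    """Sensitivity, precision, and number needed to read (NNR) for a search."""
    sensitivity = relevant_found / relevant_total    # share of known includes retrieved
    precision = relevant_found / hits                # share of hits that are relevant
    nnr = 1 / precision                              # citations screened per relevant hit
    return sensitivity, precision, nnr

# Invented example: a search returns 1,000 hits, 25 of which are among the
# 30 RCTs eventually included in the review update.
sens, prec, nnr = search_metrics(hits=1000, relevant_found=25, relevant_total=30)
print(f"sensitivity={sens:.1%}, precision={prec:.1%}, NNR={nnr:.0f}")
# → sensitivity=83.3%, precision=2.5%, NNR=40
```

A lower NNR means less screening effort per relevant study, which is why the authors report precision and NNR together.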

Suter G, Cormier S, Barron M. A Weight of Evidence Framework for Environmental Assessments: Inferring Qualities. Integr.Environ.Assess.Manag. Epub 2017 Jun 14. PMID: 28613433.

The weighing of heterogeneous evidence such as conventional laboratory toxicity tests, field tests, biomarkers, and community surveys is essential to environmental assessments. Evidence synthesis and weighing are needed to determine causes of observed effects, hazards posed by chemicals or other agents, the completeness of remediation, and other environmental qualities. As part of its guidelines for weight of evidence (WoE) in ecological assessments, the U.S. Environmental Protection Agency has developed a generally applicable framework. Its basic steps are: assemble evidence, weight the evidence, and weigh the body of evidence. Use of the framework can increase the consistency and rigor of WoE practices and provide greater transparency than ad hoc and narrative-based approaches.

DOI: http://dx.doi.org/10.1002/ieam.1954.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28613433.

Wasiak J, Shen AY, Ware R, O'Donohoe TJ, Faggion CM Jr. Methodological quality and reporting of systematic reviews in hand and wrist pathology. J.Hand Surg.Eur.Vol. Epub 2017 Jun 1. PMID: 28610464.

The objective of this study was to assess methodological and reporting quality of systematic reviews in hand and wrist pathology. MEDLINE, EMBASE and Cochrane Library were searched from inception to November 2016 for relevant studies. Reporting quality was evaluated using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and methodological quality using A MeaSurement Tool to Assess systematic Reviews (AMSTAR). Descriptive statistics and linear regression were used to identify features associated with improved methodological quality. A total of 91 studies were included in the analysis. Most reviews inadequately reported PRISMA items regarding study protocol, search strategy and bias and AMSTAR items regarding protocol, publication bias and funding. Systematic reviews published in a plastics journal, or which included more authors, were associated with higher AMSTAR scores. A large proportion of systematic reviews within hand and wrist pathology literature score poorly with validated methodological assessment tools, which may affect the reliability of their conclusions. LEVEL OF EVIDENCE: I.

DOI: http://dx.doi.org/10.1177/1753193417712660.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28610464.

Bero L. Addressing Bias and Conflict of Interest Among Biomedical Researchers. JAMA. 2017 May 2;317(17):1723-4. PMID: 28464166.

[First paragraph]

Bias in research is ubiquitous and the goal of every researcher should be to reduce bias and make its potential sources transparent. Discussions of research bias are often limited to the statistical definition of bias, ie, a systematic error or deviation from the truth in results or inferences. This means biased studies will have systematically different (either larger or smaller) effect estimates than studies that are not biased. This type of bias can be reduced through methodological rigor. For example, selection bias or baseline confounding can be reduced by randomization; performance and detection biases can be reduced by blinding. Common tools for measuring bias, such as the Cochrane risk of bias tools, focus on identifying problems with the study design and execution.

DOI: http://dx.doi.org/10.1001/jama.2017.3854.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28464166.

Chang CH. Applications of the propensity score weighting method in psychogeriatric research: correcting selection bias and adjusting for confounders. Int.Psychogeriatr. 2017 May;29(5):703-6. PMID: 28095944.

The propensity score (PS) weighting method is an analytic technique that has been applied in multiple fields for a number of purposes. Here, we discuss two common applications, which are (1) to correct for selection bias and (2) to adjust for confounding variables when estimating the effect of an exposure variable on the outcome of interest.

DOI: http://dx.doi.org/10.1017/S1041610216002490.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28095944.
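
The confounder-adjustment use of propensity scores can be sketched with inverse-probability-of-treatment weighting (IPTW); the variables, true effect, and assignment model below are invented for illustration, not taken from the paper:

```python
# Toy illustration of inverse-probability-of-treatment weighting (IPTW), one
# way to use propensity scores to adjust for confounding. All data and the
# true effect (2.0) are invented. In practice the propensity score would be
# estimated (e.g., by logistic regression); for brevity we weight by the
# known treatment-assignment probability.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(75, 6, n)                       # confounder
ps = 1 / (1 + np.exp(-(age - 75) / 5))           # P(treated | age): older treated more often
treated = rng.binomial(1, ps)
outcome = 10 + 0.3 * (age - 75) + 2.0 * treated + rng.normal(0, 1, n)

# weight each patient by the inverse probability of the treatment received
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()
iptw = (np.average(outcome[treated == 1], weights=w[treated == 1])
        - np.average(outcome[treated == 0], weights=w[treated == 0]))
print(f"naive={naive:.2f}, IPTW={iptw:.2f}")     # naive is inflated by age; IPTW is near 2.0
```

Weighting creates a pseudo-population in which treatment is independent of the measured confounder, so the weighted difference in means recovers the treatment effect.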

Easley TJ. Medical Journals, Publishers, and Conflict of Interest. JAMA. 2017 May 2;317(17):1759-60. PMID: 28464164.

[First paragraph]

Publishers of scientific journals must be as invested as editors in defining and managing conflict of interest policies and practices, because the credibility and integrity of their publications constitute their most important asset. How and why a publisher engages in this effort are increasingly complex, given the exponential changes in biomedical journalism over the last 15 years. In addition, the world of medical journalism is affected by wider cultural trends in the media; in general, issues around conflict of interest have arguably never been more prominent. The veracity, independence, and objectivity of the media are under siege, with public confidence in the “fourth estate” reaching dangerous lows. At the same time, the public has become increasingly cynical, and public disillusionment centered on conflict of interest is on the rise.

DOI: http://dx.doi.org/10.1001/jama.2017.3421.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28464164.

Fox DM. Evidence and Health Policy: Using and Regulating Systematic Reviews. Am.J.Public Health. 2017 Jan;107(1):88-92. PMID: 27854522.

Systematic reviews have increasingly informed policy for almost 3 decades. In many countries, systematic reviews have informed policy for public and population health, paying for health care, increasing the quality and efficiency of interventions, and improving the effectiveness of health sector professionals and the organizations in which they work. Systematic reviews also inform other policy areas: criminal justice, education, social welfare, and the regulation of toxins in the environment. Although the production and use of systematic reviews have steadily increased, many clinicians, public health officials, representatives of commercial organizations, and, consequently, policymakers who are responsive to them, have been reluctant to use these reviews to inform policy; others have actively opposed using them. Systematic reviews could inform policy more effectively with changes to current practices and the assumptions that sustain these practices: assumptions made by researchers and the organizations that employ them, by public and private funders of systematic reviews, and by organizations that finance, set priorities and standards for, and publish them.

DOI: http://dx.doi.org/10.2105/AJPH.2016.303485.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27854522.

Liu Z, Saldanha IJ, Margolis D, Dumville JC, Cullum NA. Outcomes in Cochrane systematic reviews related to wound care: An investigation into prespecification. Wound Repair Regen. 2017 Apr;25(2):292-308. PMID: 28370877.

The choice of outcomes in systematic reviews of the effects of interventions is crucial, dictating which data are included and analyzed. Full prespecification of outcomes in systematic reviews can reduce the risk of outcome reporting bias, but this issue has not been widely investigated. This study is the first to analyze the nature and specification of outcomes used in Cochrane Wounds (CW) systematic reviews. Adequacy of outcome specification was assessed using a five-element framework of key outcome components: outcome domain, specific measurement, specific metric, method of aggregation, and time points. We identified all CW review titles associated with a protocol published on or before October 1, 2014. We categorized all reported outcome domains and recorded whether they were primary or secondary outcomes. We explored outcome specification for outcome domains reported in 25% or more of the eligible protocols. We included 106 protocols and 126 outcome domains; 24.6% (31/126) of domains were used as primary outcomes at least once. Eight domains were reported in ≥25% of protocols: wound healing, quality of life, costs, adverse events, resource use, pain, wound infection, and mortality. Wound healing was the most completely specified outcome domain (median 3; interquartile range [IQR]: 1-5), along with resource use (median 3; IQR: 2-4). Quality of life (median 1; IQR: 1-3), pain (median 1; IQR: 1-3), and costs (median 1; IQR: 1-4) were the least completely specified outcome domains. Outcomes are frequently poorly prespecified, and the elements of metric, aggregation, and time point are rarely adequately specified. We strongly recommend that reviewers be more vigilant about prespecifying outcomes, using the five-element framework.
Better prespecification is likely to improve review quality by reducing bias in data abstraction and analysis, and by reducing subjectivity in the decision of which outcomes to extract; it may also improve outcome specification in clinical trial design and reporting.

DOI: http://dx.doi.org/10.1111/wrr.12519.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28370877.

McKinney RE Jr, Pierce HH. Strategies for Addressing a Broader Definition of Conflicts of Interest. JAMA. 2017 May 2;317(17):1727-8. PMID: 28464167.

[First paragraph, reference html link removed]

In the pursuit of truth in scientific evaluation, objectivity of the researcher is assumed, but in reality, interpreting research data requires judgment and interpretation. That 2 scientists can analyze the same data set and come to different conclusions is a reflection of the subjective elements that illustrate both the complexity of science and the potential for bias to influence reported outcomes. As one potential source of bias, a conflict of interest exists when professional judgment concerning some primary interest or responsibility is affected by a secondary interest or responsibility.1 Because all human beings have interests beyond their primary duties (eg, self-interest), conflicts of interest are ubiquitous and the evaluation of their potential effect is a matter of assessing the probable degree to which bias might have influenced the conduct of research or interpretation of data. The policies, rules, investigations, and scholarship on conflicts of interest have largely focused on financial conflicts of interest, with ongoing debates as to whether the definition of conflict of interest appropriately or disproportionately relies on financial relationships as a proxy indicator of bias. More comprehensive strategies for mitigating bias, regardless of whether it has been observed, can effectively supplement targeted responses to identified financial conflicts of interest.

DOI: http://dx.doi.org/10.1001/jama.2017.3857.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28464167.

Quincy B, Ragan P. Critical Appraisal of Observational Designs. J.Physician.Assist.Educ. 2017 Mar;28(1):49-52. PMID: 28207583.

[First paragraph, reference html links removed]

Although the randomized controlled trial (RCT) is generally identified as the most rigorous of the study designs for an individual research study,1 it is often impractical, impossible, or unethical to randomly assign subjects to exposure groups for study purposes.2 Useful alternatives to the experimental design of the RCT include the various observational designs. With observational designs, investigators observe subjects in their naturally occurring groups rather than assigning them to groups. Observational designs can be categorized as prospective or retrospective in nature. Among observational designs, the concurrent cohort study is a classic example of a prospective approach, whereas the case-control and historical cohort designs take a retrospective approach. The cohort and case-control study designs are less rigorous than the RCT but may offer more “real world” results,2,3 and therefore have their place in health research. In addition, these studies often provide the initial evidence to substantiate conducting an RCT. For that reason, it is important for physician assistants (PAs) to recognize the inherent strengths and limitations of observational studies. The aim of this article is to provide an overview of cohort and case-control designs and to discuss the inherent strengths and limitations of each design that need to be considered in their critical appraisal.

DOI: http://dx.doi.org/10.1097/JPA.0000000000000111.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28207583.

Spencer-Bonilla G, Singh Ospina N, Rodriguez-Gutierrez R, Brito JP, Iñiguez-Ariza N, Tamhane S, Erwin PJ, Murad MH, Montori VM. Systematic reviews of diagnostic tests in endocrinology: an audit of methods, reporting, and performance. Endocrine. 2017 Jul;57(1):18-34. PMID: 28585154.

BACKGROUND:
Systematic reviews provide clinicians and policymakers estimates of diagnostic test accuracy and their usefulness in clinical practice. We identified all available systematic reviews of diagnosis in endocrinology, summarized the diagnostic accuracy of the tests included, and assessed the credibility and clinical usefulness of the methods and reporting.
METHODS:
We searched Ovid MEDLINE, EMBASE, and Cochrane CENTRAL from inception to December 2015 for systematic reviews and meta-analyses reporting accuracy measures of diagnostic tests in endocrinology. Experienced reviewers independently screened for eligible studies and collected data. We summarized the results, methods, and reporting of the reviews. We performed subgroup analyses to categorize diagnostic tests as most useful based on their accuracy.
RESULTS:
We identified 84 systematic reviews; half of the tests included were classified as helpful when positive, one-fourth as helpful when negative. Most authors adequately reported how studies were identified and selected and how their trustworthiness (risk of bias) was judged. Only one in three reviews, however, reported an overall judgment about trustworthiness and one in five reported using adequate meta-analytic methods. One in four reported contacting authors for further information and about half included only patients with diagnostic uncertainty.
CONCLUSION:
Up to half of the diagnostic endocrine tests in which the likelihood ratio was calculated or provided are likely to be helpful in practice when positive, as are one-quarter when negative. Most diagnostic systematic reviews in endocrinology lack methodological rigor and protection against bias, and offer limited credibility. Substantial efforts, therefore, seem necessary to improve the quality of diagnostic systematic reviews in endocrinology.

DOI: http://dx.doi.org/10.1007/s12020-017-1298-1.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28585154.

Stensrud MJ, Valberg M, Røysland K, Aalen OO. Exploring selection bias by causal frailty models: The magnitude matters. Epidemiology. 2017 May;28(3):379-86. PMID: 28244888.

Counter-intuitive associations appear frequently in epidemiology, and these results are often debated. In particular, several scenarios are characterized by a general risk factor that appears protective in particular subpopulations, for example, individuals suffering from a specific disease. However, these associations do not necessarily represent causal effects. Selection bias due to conditioning on a collider may often be involved, and causal graphs are widely used to highlight such biases. These graphs, however, are qualitative, and they do not provide information on the real-life relevance of a spurious association. Quantitative estimates of such associations can be obtained from simple statistical models. In this study, we present several paradoxical associations that occur in epidemiology, and we explore them in a causal frailty framework. By using frailty models, we are able to put numbers on spurious effects that are often neglected in epidemiology. We discuss several counter-intuitive findings that have been reported in real-life analyses, and we present calculations that may expand the understanding of these associations. In particular, we derive novel expressions to explain the magnitude of bias in index-event studies.

DOI: http://dx.doi.org/10.1097/EDE.0000000000000621.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28244888.
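
The collider mechanism the authors quantify can be reproduced in a toy simulation; the risk-factor names, prevalences, and risks below are all invented:

```python
# Toy simulation of collider (index-event) bias: two independent causes of a
# disease become negatively associated once we condition on having the
# disease, so a genuine risk factor can look protective. All numbers invented.
import random

random.seed(1)
population = []
for _ in range(200_000):
    smoker = random.random() < 0.3               # risk factor of interest
    frail = random.random() < 0.3                # independent second cause
    disease = random.random() < 0.02 + 0.25 * smoker + 0.25 * frail
    population.append((smoker, frail, disease))

def frail_rate(rows):
    return sum(f for _, f, _ in rows) / len(rows)

cases = [r for r in population if r[2]]
# In the whole population the two causes are (nearly) independent...
print(frail_rate([r for r in population if r[0]]),
      frail_rate([r for r in population if not r[0]]))   # both ~0.30
# ...but among cases, smokers are much less often frail: a spurious association
print(frail_rate([r for r in cases if r[0]]),
      frail_rate([r for r in cases if not r[0]]))        # ~0.45 vs ~0.85
```

Conditioning on the disease (the collider) induces the negative dependence; the simulation puts a number on the bias that a causal graph alone only flags qualitatively.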

Taichman DB, Sahni P, Pinborg A, Peiperl L, Laine C, James A, Hong ST, Haileamlak A, Gollogly L, Godlee F, et al. Data Sharing Statements for Clinical Trials: A Requirement of the International Committee of Medical Journal Editors. PLoS Med. 2017 Jun 5;14(6):e1002315. PMID: 28582414.

The International Committee of Medical Journal Editors announces requirements that a data sharing plan be prospectively registered, and a data sharing statement be included in submitted manuscripts, for clinical trials to be published in ICMJE journals.

Also published as (same authors and title):

Taichman DB, et al. N Z Med J. 2017 Jun 16;130(1457):7-10. PMID: 28617782.
Taichman DB, et al. JAMA. 2017 Jun 27;317(24):2491-2492. doi: 10.1001/jama.2017.6514. PMID: 28586895.
Taichman DB, et al. Lancet. 2017 Jun 10;389(10086):e12-e14. doi: 10.1016/S0140-6736(17)31282-5. Epub 2017 Jun 6. PMID: 28596041.
Taichman DB, et al. Ann Intern Med. 2017 Jun 6. doi: 10.7326/M17-1028. [Epub ahead of print] PMID: 28586790.
Taichman DB, et al. N Engl J Med. 2017 Jun 8;376(23):2277-2279. doi: 10.1056/NEJMe1705439. Epub 2017 Jun 5. PMID: 28581902.
Taichman DB, et al. J Korean Med Sci. 2017 Jul;32(7):1051-1053. doi: 10.3346/jkms.2017.32.7.1051. PMID: 28581257; PMCID: PMC5461304.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5459581/pdf/pmed.1002315.pdf
DOI: http://dx.doi.org/10.1371/journal.pmed.1002315.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28582414.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5459581.

Thornton JP. Conflict of Interest and Legal Issues for Investigators and Authors. JAMA. 2017 May 2;317(17):1761-2. PMID: 28464169.

[First paragraph]

Investigators have legal duties and responsibilities that span the long arc from study design and patient enrollment to well beyond publication of results of clinical trials and other types of studies in peer-reviewed journals. Duties extend to human research participants, institutional review boards (IRBs), data and safety monitoring boards, funders and research sponsors, employers, and finally, journals and their readers.

DOI: http://dx.doi.org/10.1001/jama.2017.4235.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28464169.

van Walraven C. Bootstrap imputation with a disease probability model minimized bias from misclassification due to administrative database codes. J.Clin.Epidemiol. 2017 Apr;84:114-20. PMID: 28167144.

OBJECTIVE:
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias.
STUDY DESIGN AND SETTING:
Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results in which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates.
RESULTS:
Biases in estimates of severe renal failure prevalence and its association with covariates were minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using diagnostic codes or methods in which the model-based condition probability was categorized.
CONCLUSION:
Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status using multivariable model-derived probability estimates.
Copyright © 2017 Elsevier Inc. All rights reserved.

DOI: http://dx.doi.org/10.1016/j.jclinepi.2017.01.007.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28167144.
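
The contrast between categorizing model-based probabilities and imputing status from them can be illustrated with a toy sketch; the probabilities below are invented, not from the study:

```python
# Toy sketch contrasting (2) categorizing model-based probabilities at a
# cut-point with (3) imputing disease status from those probabilities over
# repeated draws. The probabilities are invented, not from the study.
import random

random.seed(2)
# model-derived probabilities of severe renal failure for ten patients (invented)
probs = [0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.60, 0.70, 0.85, 0.95]

# (2) dichotomizing at 0.5 discards the calibration of the model
threshold_prev = sum(p >= 0.5 for p in probs) / len(probs)        # 0.40

# (3) imputation: draw each patient's status from their probability, repeatedly
B = 10_000
draws = [sum(random.random() < p for p in probs) / len(probs) for _ in range(B)]
imputed_prev = sum(draws) / B                                     # ~0.43

print(threshold_prev, round(imputed_prev, 2))
# the imputed prevalence matches the model-based expectation sum(probs)/10 = 0.43
```

Repeated draws preserve each patient's uncertainty rather than forcing a yes/no call, which is the intuition behind the bootstrap-imputation approach the study evaluates.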

Waldstreicher J, Johns ME. Managing Conflicts of Interest in Industry-Sponsored Clinical Research: More Physician Engagement Is Required. JAMA. 2017 May 2;317(17):1751-2. PMID: 28464168.

[First paragraph, reference html links removed]

Companies engaged in health care research and development share, with health professionals, academic health centers, patient advocacy organizations, and other medical and health-related institutions, the mission to improve human health. Companies play indispensable roles in advancing almost all aspects of this mission, including sponsoring clinical research and generating clinical data that serve as the basis for drug and device approvals, guidelines, and prescribing information. Clinical research must be conducted with the utmost scientific rigor, integrity, and compassion. It has long been recognized that the dual obligations companies have to both patients and shareholders create real and perceived conflicts of interest (COIs). If not properly managed, COIs can undermine physician and patient trust in industry-supported development of drugs and devices.1,2

DOI: http://dx.doi.org/10.1001/jama.2017.4160.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28464168.

Walter SD, Han H, Briel M, Guyatt GH. Quantifying the bias in the estimated treatment effect in randomized trials having interim analyses and a rule for early stopping for futility. Stat.Med. 2017 Apr 30;36(9):1506-18. PMID: 28183155.

In this paper, we consider the potential bias in the estimated treatment effect obtained from clinical trials, the protocols of which include the possibility of interim analyses and an early termination of the study for reasons of futility. In particular, by considering the conditional power at an interim analysis, we derive analytic expressions for various parameters of interest: (i) the underestimation or overestimation of the treatment effect in studies that stop for futility; (ii) the impact of the interim analyses on the estimation of treatment effect in studies that are completed, i.e. that do not stop for futility; (iii) the overall estimation bias in the estimated treatment effect in a single study with such a stopping rule; and (iv) the probability of stopping at an interim analysis. We evaluate these general expressions numerically for typical trial scenarios. Results show that the parameters of interest depend on a number of factors, including the true underlying treatment effect, the difference that the trial is designed to detect, the study power, the number of planned interim analyses and what assumption is made about future data to be observed after an interim analysis. Because the probability of stopping early is small for many practical situations, the overall bias is often small, but a more serious issue is the potential for substantial underestimation of the treatment effect in studies that actually stop for futility. We also consider these ideas using data from an illustrative trial that did stop for futility at an interim analysis.

DOI: http://dx.doi.org/10.1002/sim.7242.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28183155.
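
The conditional behaviour the authors derive analytically can be mimicked in a toy simulation of trials with a single interim look; the true effect, sample sizes, and futility rule below are invented for illustration:

```python
# Toy simulation of a trial with one interim look and a futility stopping
# rule (stop if the interim estimate is below zero). The true effect, sample
# sizes, and rule are invented; they echo points (i)-(iv) above.
import random
import statistics

random.seed(3)
delta, n_half, n_trials = 0.2, 100, 10_000       # true effect, patients per stage
stopped, completed = [], []
for _ in range(n_trials):
    first = [random.gauss(delta, 1.0) for _ in range(n_half)]
    interim = statistics.mean(first)
    if interim < 0:                              # invented futility rule: stop early
        stopped.append(interim)
    else:
        second = [random.gauss(delta, 1.0) for _ in range(n_half)]
        completed.append(statistics.mean(first + second))

print(len(stopped) / n_trials)                   # probability of stopping early (small)
print(statistics.mean(stopped))                  # stopped trials badly underestimate 0.2
print(statistics.mean(completed))                # completed trials slightly overestimate 0.2
```

The simulation reproduces the paper's qualitative message: overall bias is small because early stopping is rare, but the estimate in trials that do stop for futility is substantially too low.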

Wolkewitz M, Schumacher M. Survival biases lead to flawed conclusions in observational treatment studies of influenza patients. J.Clin.Epidemiol. 2017 Apr;84:121-9. PMID: 28188897.

BACKGROUND AND OBJECTIVE:
Several observational studies reported that Oseltamivir (Tamiflu) reduced mortality in infected and hospitalized patients. Because of the restriction of observation to hospital stay and time-dependent treatment assignment, such findings were prone to common types of survival bias (length, time-dependent and competing risk bias).
METHODS:
British hospital data from the Influenza Clinical Information Network (FLU-CIN) study group were used, which included 1,391 patients with confirmed pandemic influenza A/H1N1 2009 infection. We used a multistate model approach with the following states: hospital admission, Oseltamivir treatment, discharge, and death. The time origin was influenza onset. We displayed individual data, risk sets, hazards, and probabilities from multistate models to study the impact of these three common survival biases.
RESULTS:
The correct hazard ratio of Oseltamivir for death was 1.03 (95% confidence interval [CI]: 0.64-1.66) and for discharge 1.89 (95% CI: 1.65-2.16). Length bias increased both hazard ratios (HRs): HR (death) = 1.82 (95% CI: 1.12-2.98) and HR (discharge) = 4.44 (95% CI: 3.90-5.05), whereas the time-dependent bias reduced them: HR (death) = 0.62 (95% CI: 0.39-1.00) and HR (discharge) = 0.85 (95% CI: 0.75-0.97). Length and time-dependent bias were less pronounced in terms of probabilities. Ignoring discharge as a competing event for hospital death led to a remarkable overestimation of hospital mortality and failed to detect the reducing effect of Oseltamivir on hospital stay.
CONCLUSIONS:
The impact of each of the three survival biases was remarkable; each could make neuraminidase inhibitors appear more effective, or even harmful. Incorrect and misclassified risk sets were the primary sources of biased hazard rates.
Copyright © 2017 Elsevier Inc. All rights reserved.

DOI: http://dx.doi.org/10.1016/j.jclinepi.2017.01.008.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28188897.
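The time-dependent (immortal time) bias the Wolkewitz and Schumacher abstract describes can be illustrated with a short sketch. The cohort values below are made up for illustration, not the FLU-CIN data: a naive analysis that labels a patient's entire hospital stay as "treated" misattributes the pre-treatment days, during which the patient had to survive in order to ever receive treatment, to the treated group.

```python
# Illustrative sketch of time-dependent (immortal time) bias.
# The cohort below is hypothetical example data, not the FLU-CIN data set.

def split_person_time(admission, treatment_start, exit_day):
    """Split a hospital stay into (untreated, treated) person-time in days.

    treatment_start is None if the patient was never treated.
    """
    if treatment_start is None:
        return exit_day - admission, 0
    untreated = treatment_start - admission
    treated = exit_day - treatment_start
    return untreated, treated

# Toy cohort: (admission day, treatment start day or None, exit day)
cohort = [(0, 3, 10), (0, None, 5), (0, 1, 8), (0, 6, 7)]

# Naive classification: every day of a treated patient's stay counts as treated.
naive_treated_days = sum(exit - adm for adm, tx, exit in cohort if tx is not None)

# Correct classification: split each stay at the treatment start.
correct_untreated = sum(split_person_time(*p)[0] for p in cohort)
correct_treated = sum(split_person_time(*p)[1] for p in cohort)

print(naive_treated_days)                   # 25 days labeled "treated"
print(correct_untreated, correct_treated)   # 15 untreated, 15 treated
```

In this toy cohort the naive analysis credits 25 person-days to treatment, but only 15 were actually spent on treatment; the 10 misclassified pre-treatment days are the "immortal" time that inflates apparent treatment benefit, the same mechanism the abstract reports as distorting the hazard ratios.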

Büchter RB, Pieper D. Most overviews of Cochrane reviews neglected potential biases from dual authorship. J.Clin.Epidemiol. 2016 Sep;77:91-4. PMID: 27131430.

OBJECTIVE:
Some authors of Cochrane overviews have also (co-)authored one or more of the underlying reviews. We examined the extent of dual (co-)authorship in Cochrane overviews, how it is dealt with, and whether the issue is raised in protocols.
STUDY DESIGN:
The Cochrane Library was searched for overviews and protocols for overviews in September 2015. Data on dual (co-)authorship were extracted for each review into standard spreadsheets by one author and checked for accuracy by a second.
RESULTS:
Twenty overviews and 25 protocols were identified. The overviews included a median of 10 reviews (interquartile range [IQR]: 6-18.5). In 18/20 overviews (90%), at least one of the included reviews was affected by dual (co-)authorship. A median of 5 (IQR, 2.5-7) reviews per overview was affected by dual (co-)authorship. In 8/18 (44%) overviews with dual (co-)authorship, quality assessment was conducted independently. In 7/25 (28%) protocols, dual (co-)authorship was mentioned.
CONCLUSION:
Potential biases arising from dual (co-)authorship are often neglected in Cochrane overviews. We argue that authors of Cochrane overviews and Review Groups should pay more attention to this issue, to avoid bias and preserve the good reputation that Cochrane overviews typically deserve.
Copyright © 2016 Elsevier Inc. All rights reserved.

DOI: http://dx.doi.org/10.1016/j.jclinepi.2016.04.008.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=27131430.

Jesus TS. Systematic Reviews and Clinical Trials in Rehabilitation: Comprehensive Analyses of Publication Trends. Arch.Phys.Med.Rehabil. 2016 Nov;97(11):1853-1862.e2. PMID: 27424809.

OBJECTIVE:
To analyze publication trends of clinical trials (CTs) and systematic reviews (SRs) in rehabilitation.
DESIGN:
PubMed searches were performed with appropriate combinations of Medical Subject Headings. All entries until December 2013, and their yearly distributions since 1981 (when the first rehabilitation SR was identified), were retrieved. After the initial data visualization, data analyses were narrowed to specific periods. Linear regression techniques analyzed the growth of publications and their relative percentages over time.
SETTING:
Not applicable.
PARTICIPANTS:
Not applicable.
INTERVENTIONS:
Not applicable.
MAIN OUTCOME MEASURES:
Not applicable.
RESULTS:
Although this was not observed for SRs, CTs have grown at a much higher rate in rehabilitation than in the broader health/medical field, with more than twice the difference for both periods analyzed (1989-2001, 2001-2013). Rehabilitation journals published about 20% or less of the rehabilitation SRs and CTs, and no significant increases were observed over time (P>.05; 2001-2013). Neurologic conditions, particularly cerebrovascular, were those most addressed by rehabilitation SRs and CTs, and differences between neurologic and other groups of conditions typically widened over time (eg, the gap between neurologic and musculoskeletal conditions more than doubled over 15 years).
CONCLUSIONS:
While publications of CTs are increasing at a much higher rate within rehabilitation than within broader health care, further research is warranted to explain why this trend is not being followed by SRs, particularly those with meta-analysis. Similarly, research might determine whether the (growing) differences in the publications of rehabilitation SRs and CTs across groups of conditions are justified by clinical or population need.
Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

DOI: http://dx.doi.org/10.1016/j.apmr.2016.06.017.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27424809.
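The Jesus abstract describes fitting linear regressions to yearly publication counts to estimate growth. A minimal sketch of that kind of trend fit, using made-up illustrative counts rather than the paper's data, is:

```python
# Illustrative sketch of a linear publication-trend fit.
# The year/count values are hypothetical, not taken from the paper.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

years = [2001, 2004, 2007, 2010, 2013]   # illustrative sample of years
counts = [120, 180, 250, 310, 380]       # illustrative yearly CT counts

slope, intercept = linear_fit(years, counts)
print(round(slope, 1))  # 21.7 additional publications per year
```

Comparing such slopes between rehabilitation-specific and field-wide counts, or between groups of conditions, is the kind of analysis the abstract's "linear regression techniques" refer to.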

Perrier L, Lightfoot D, Kealey MR, Straus SE, Tricco AC. Knowledge synthesis research: a bibliometric analysis. J.Clin.Epidemiol. 2016 May;73:50-7. PMID: 26912123.

OBJECTIVES:
The purpose of this article is to describe the volume and attributes of original research available in PubMed on emerging knowledge synthesis methods (excluding traditional systematic reviews) published by researchers.
STUDY DESIGN AND SETTING:
Bibliometric analysis.
RESULTS:
Six hundred and eight studies related to knowledge synthesis methods were analyzed. Although there has been a steady increase in publications on knowledge synthesis methods since 2003, these studies are dispersed among a large number of journals. Similarly, a large number of authors are publishing on these methods, but each in limited numbers. The relevant Medical Subject Headings applied most often to these studies included qualitative research, research design, meta-analysis as topic, and review literature as topic.
CONCLUSION:
There is no prevailing journal or author leading the reporting of knowledge synthesis methods. Relevant Medical Subject Headings were either not applied to most records or not available for the synthesis method being examined. This may lead to inconsistencies and variations in methods, making it challenging for researchers and research users to locate and appraise these articles.
Copyright © 2016 Elsevier Inc. All rights reserved.

DOI: http://dx.doi.org/10.1016/j.jclinepi.2015.02.019.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26912123.