Martínez García L, Pardo-Hernandez H, Superchi C, Niño de Guzman E, Ballesteros M, Ibargoyen Roteta N, McFarlane E, Posso M, Roqué I Figuls M, Rotaeche Del Campo R, et al. Methodological systematic review identifies major limitations in prioritisation processes for updating. J.Clin.Epidemiol. Epub 2017 May 23. PMID: 28549931.

OBJECTIVE:
To identify and describe strategies to prioritise the updating of systematic reviews (SRs), health technology assessments (HTAs), or clinical guidelines (CGs).
STUDY DESIGN AND SETTING:
We conducted a SR of studies describing one or more methods to prioritise SRs, HTAs, or CGs for updating. We searched MEDLINE (PubMed, from 1966 to August 2016) and The Cochrane Methodology Register (The Cochrane Library, Issue 8 2016). We handsearched abstract books, reviewed reference lists, and contacted experts. Two reviewers independently screened the references and extracted data.
RESULTS:
We included 14 studies. Six studies were classified as descriptive (6/14, 42.9%) and eight as implementation studies (8/14, 57.1%). Six studies reported an updating strategy (6/14, 42.9%), six a prioritisation process (6/14, 42.9%), and two a prioritisation criterion (2/14, 14.2%). Eight studies focused on SRs (8/14, 57.1%), six on CGs (6/14, 42.9%), and none were about HTAs. We identified 76 prioritisation criteria that can be applied when prioritising documents for updating. The most frequently cited criteria were: available evidence (19/76, 25.0%), clinical relevance (10/76; 13.2%), and users' interest (10/76; 13.2%).
CONCLUSIONS:
There is wide variability and suboptimal reporting of the methods used to develop and implement processes to prioritise updating of SRs, HTAs, and CGs.
Copyright © 2017 Elsevier Inc. All rights reserved.

DOI: https://doi.org/10.1016/j.jclinepi.2017.05.008.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28549931.

Rockers PC, Tugwell P, Grimshaw J, Oliver S, Atun R, Røttingen JA, Fretheim A, Ranson MK, Daniels K, Luiza VL, et al. Quasi-experimental Study Designs Series - Paper 12: Strengthening Global Capacity for Evidence Synthesis of Quasi-experimental Health Systems Research. J.Clin.Epidemiol. Epub 2017 Mar 28. PMID: 28363733.

Evidence from quasi-experimental studies is often excluded from systematic reviews of health systems research despite the fact that such studies can provide strong causal evidence when well-conducted. This article discusses global coordination of efforts to institutionalize the inclusion of causal evidence from quasi-experiments in systematic reviews of health systems research. In particular, we are concerned with identifying opportunities for strengthening capacity at the global and local levels for implementing protocols necessary to ensure that reviews that include quasi-experiments are consistently of the highest quality. We first describe the current state of the global infrastructure that facilitates the production of systematic reviews of health systems research. We identify five important types of actors operating within this infrastructure: review authors; synthesis collaborations that facilitate the review process; synthesis interest groups that supplement the work of the larger collaborations; review funders; and end users, including policymakers. Then, we examine opportunities for intervening to build the capacity of each type of actor to support the inclusion of quasi-experiments in reviews. Lastly, we suggest practical next steps for proceeding with capacity building efforts. Due to the complexity and relative nascence of the field, we recommend a carefully planned and executed approach to strengthening global capacity for the inclusion of quasi-experimental studies in systematic reviews.

DOI: https://doi.org/10.1016/j.jclinepi.2016.03.034.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28363733.

Reeves BC, Wells GA, Waddington H. Quasi-experimental study designs series - Paper 5: Classifying studies evaluating effects of health interventions - a taxonomy without labels. J.Clin.Epidemiol. Epub 2017 Mar 27. PMID: 28351692.

OBJECTIVES:
The aim of the study was to extend a previously published checklist of study design features to include study designs often used by health systems researchers and economists. Our intention is to help review authors in any field to set eligibility criteria for studies to include in a systematic review that relate directly to the intrinsic strength of the studies in inferring causality. We also seek to clarify key equivalences and differences in terminology used by different research communities.
STUDY DESIGN AND SETTING:
Expert consensus meeting.
RESULTS:
The checklist comprises seven questions, each with a list of response items, addressing: clustering of an intervention as an aspect of allocation or due to the intrinsic nature of the delivery of the intervention; for whom, and when, outcome data are available; how the intervention effect was estimated; the principle underlying control for confounding; how groups were formed; the features of a study carried out after it was designed; and the variables measured before intervention.
CONCLUSION:
The checklist clarifies the basis of credible quasi-experimental studies, reconciling different terminology used in different fields of investigation and facilitating communications across research communities. By applying the checklist, review authors' attention is also directed to the assumptions underpinning the methods for inferring causality.
Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

DOI: https://doi.org/10.1016/j.jclinepi.2017.02.016.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28351692.

Waddington H, Aloe A, Becker BJ, Djimeu EW, Hombrados JG, Tugwell P, Wells G, Reeves B. Quasi-experimental study designs series - Paper 6: Risk of bias assessment. J.Clin.Epidemiol. Epub 2017 Mar 25. PMID: 28351693.

OBJECTIVES:
Rigorous and transparent bias assessment is a core component of high-quality systematic reviews. We assess modifications to existing risk of bias approaches to incorporate rigorous quasi-experimental approaches with selection on unobservables. These are nonrandomized studies using design-based approaches to control for unobservable sources of confounding such as difference studies, instrumental variables, interrupted time series, natural experiments, and regression-discontinuity designs.
STUDY DESIGN AND SETTING:
We review existing risk of bias tools. Drawing on these tools, we present domains of bias and suggest directions for evaluation questions.
RESULTS:
The review suggests that existing risk of bias tools provide, to different degrees, incomplete transparent criteria to assess the validity of these designs. The paper then presents an approach to evaluating the internal validity of quasi-experiments with selection on unobservables.
CONCLUSION:
We conclude that tools for nonrandomized studies of interventions need to be further developed to incorporate evaluation questions for quasi-experiments with selection on unobservables.
Copyright © 2017 Elsevier Inc. All rights reserved.

DOI: https://doi.org/10.1016/j.jclinepi.2017.02.015.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28351693.

Hannes K, Petry K, Heyvaert M. The meta-aggregative approach to qualitative evidence synthesis: a worked example on experiences of pupils with special educational needs in inclusive education. International Journal of Research & Method in Education. Epub 2017 Mar 20.

The present article focuses on the meta-aggregative approach to qualitative evidence synthesis. Originally developed in Australia by the Joanna Briggs Institute, it mirrors the review process for reviews of effectiveness outlined by the international Cochrane and Campbell Collaboration, while remaining sensitive to the specific characteristics of qualitative research. Meta-aggregation is largely inspired by American pragmatism, hence its most distinct feature is that it produces synthesized statements in the form of ‘lines of action’ to be undertaken by practitioners and policy-makers. After a decade of implementing the meta-aggregative approach, we consider its challenges and outline how these can be dealt with in practice. We illustrate this by means of a worked example on experiences of pupils with special educational needs in inclusive education.

DOI: https://doi.org/10.1080/1743727X.2017.1299124.

Gómez-García F, Ruano J, Aguilar-Luque M, Gay-Mimbrera J, Maestre-Lopez B, Sanz-Cabanillas JL, Carmona-Fernández PJ, González-Padilla M, Vélez García-Nieto A, Isla-Tejera B. Systematic reviews and meta-analyses on psoriasis: role of funding sources, conflict of interest and bibliometric indices as predictors of methodological quality. Br.J.Dermatol. Epub 2017 Feb 13. PMID: 28192600.

BACKGROUND:
The quality of systematic reviews and meta-analyses on psoriasis, a chronic inflammatory skin disease that severely impairs quality of life and is associated with high costs, remains unknown.
OBJECTIVES:
To assess the methodological quality of systematic reviews published on psoriasis.
METHODS:
After a comprehensive search in MEDLINE, Embase and the Cochrane Database (PROSPERO: CRD42016041611), the quality of studies was assessed by two raters using the Assessment of Multiple Systematic Reviews (AMSTAR) tool. Article metadata and journal-related bibliometric indices were also obtained. Systematic reviews were classified as low (0-4), moderate (5-8) or high (9-11) quality. A prediction model for methodological quality was fitted using principal component and multivariate ordinal logistic regression analyses.
RESULTS:
We classified 220 studies as high (17·2%), moderate (55·0%) or low (27·8%) quality. Lower compliance rates were found for AMSTAR question (Q)5 (list of studies provided, 11·4%), Q10 (publication bias assessed, 27·7%), Q4 (status of publication included, 39·5%) and Q1 (a priori design provided, 40·9%). Factors such as meta-analysis inclusion [odds ratio (OR) 6·22; 95% confidence interval (CI) 2·78-14·86], funding by academic institutions (OR 2·90, 95% CI 1·11-7·89), Article Influence score (OR 2·14, 95% CI 1·05-6·67), 5-year impact factor (OR 1·34, 95% CI 1·02-1·40) and article page count (OR 1·08, 95% CI 1·02-1·15) significantly predicted higher quality. A high number of authors with a conflict of interest (OR 0·90, 95% CI 0·82-0·99) was significantly associated with lower quality.
CONCLUSIONS:
The methodological quality of systematic reviews published about psoriasis remains suboptimal. The type of funding sources and author conflicts may compromise study quality, increasing the risk of bias.
© 2017 British Association of Dermatologists.

DOI: https://doi.org/10.1111/bjd.15380.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28192600.

Rockers PC, Tugwell P, Røttingen JA, Bärnighausen T. Quasi-Experimental Study Designs Series - Paper 13: Realizing the Full Potential of Quasi-Experiments for Health Research. J.Clin.Epidemiol. Epub 2017 Apr 5. PMID: 28390896.

While the number of quasi-experiments conducted by health researchers has increased in recent years, there clearly remains unrealized potential for using these methods for causal evaluation of health policies and programs globally. This article proposes five prescriptions for capturing the full value of quasi-experiments for health research. First, new funding opportunities targeting proposals that use quasi-experimental methods should be made available to a broad pool of health researchers. Second, administrative data from health programs, often amenable to quasi-experimental analysis, should be made more accessible to researchers. Third, training in quasi-experimental methods should be integrated into existing health science graduate programs to increase global capacity to use these methods. Fourth, clear guidelines for primary research and synthesis of evidence from quasi-experiments should be developed. Fifth, strategic investments should be made to continue to develop new innovations in quasi-experimental methodologies. Tremendous opportunities exist to expand the use of quasi-experimental methods to increase our understanding of which health programs and policies work and which do not. Health researchers should continue to expand their commitment to rigorous causal evaluation with quasi-experimental methods, and international institutions should increase their support for these efforts.

DOI: https://doi.org/10.1016/j.jclinepi.2017.03.016.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28390896.

Boden C, Bidonde J, Busch A. Analysis of Current Guidance on the Use of Randomized Controlled Trial Study Protocols in Systematic Reviews. J.Clin.Epidemiol. Epub 2017 Apr 25. PMID: 28455185.

OBJECTIVE:
Use of trial registry records and randomized controlled trial (RCT) study protocols can assist systematic reviewers in evaluating and, possibly, minimizing publication and selective reporting biases. This study examined current guidance on the use of registry records and RCT study protocols from key systematic review organizations, institutes and collaborations.
STUDY DESIGN AND SETTING:
Handbooks, guidelines and standards documents from key systematic review organizations and the EQUATOR network database were identified. Textual excerpts providing guidance on the use of trial registry records, RCT protocols and ongoing/unpublished studies were extracted independently by two reviewers and coded into a systematic review framework.
RESULTS:
Eleven documents published in English between 2009 and 2016 were included. Guidance for using RCT protocols and trial registry records was provided for 7 of the 16 framework categories, and guidance for using unpublished and ongoing studies was available for 8 of the 16 categories.
CONCLUSION:
This study identified gaps and ambiguities in the language of guidance on the use of RCT protocols and trial registry records. To encourage and assist reviewers to employ trial registry records and RCT study protocols in systematic reviews, current guidance should be expanded and clarified.
Copyright © 2017 Elsevier Inc. All rights reserved.

DOI: https://doi.org/10.1016/j.jclinepi.2017.04.021.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28455185.

Backmann M. What's in a gold standard? In defence of randomised controlled trials. Med.Health Care Philos. Epub 2017 Apr 21. PMID: 28432483.

The standardised randomised clinical trial (RCT) has been exceedingly popular in medical research, economics, and practical policy making. Recently, RCTs have faced criticism. First, it has been argued by John Worrall that we cannot be certain that our sample is not atypical with regard to possible confounding factors. I will argue that, at least in the case of medical research, we know enough about the relevant causal mechanisms to be justified in ignoring a number of factors we have good reason not to expect to be disruptive. I will also argue against Nancy Cartwright and Eileen Munro's contention that RCTs license probabilistic causal claims only ampliatively rather than deductively. The paper ends with a discussion of evidence hierarchies and a defence of the evidence-based medicine stance that RCTs are the best available method to assess a treatment's efficacy.

DOI: https://doi.org/10.1007/s11019-017-9773-2.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28432483.

van Grootel L, van Wesel F, O'Mara-Eves A, Thomas J, Hox J, Boeije H. Using the realist perspective to link theory from qualitative evidence synthesis to quantitative studies: Broadening the matrix approach. Res.Synth.Methods. Epub 2017 Apr 21. PMID: 28429447.

BACKGROUND:
This study describes an approach for the use of a specific type of qualitative evidence synthesis in the matrix approach, a mixed studies reviewing method. The matrix approach compares quantitative and qualitative data on the review level by juxtaposing concrete recommendations from the qualitative evidence synthesis against interventions in primary quantitative studies. However, types of qualitative evidence syntheses that are associated with theory building generate theoretical models instead of recommendations. Therefore, the output from these types of qualitative evidence syntheses cannot directly be used for the matrix approach but requires transformation. This approach allows for the transformation of these types of output.
METHOD:
The approach enables the inference of moderation effects instead of direct effects from the theoretical model developed in a qualitative evidence synthesis. Recommendations for practice are formulated on the basis of interactional relations inferred from the qualitative evidence synthesis. In doing so, we apply the realist perspective to model variables from the qualitative evidence synthesis according to the context-mechanism-outcome configuration.
FINDINGS:
A worked example shows that it is possible to identify recommendations from a theory-building qualitative evidence synthesis using the realist perspective. We created subsets of the interventions from primary quantitative studies based on whether they matched the recommendations or not and compared the weighted mean effect sizes of the subsets. The comparison shows a slight difference in effect sizes between the groups of studies. The study concludes that the approach enhances the applicability of the matrix approach.
Copyright © 2017 John Wiley & Sons, Ltd.

DOI: https://doi.org/10.1002/jrsm.1241.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28429447.

Toews I, Booth A, Berg RC, Lewin S, Glenton C, Munthe-Kaas HM, Noyes J, Schroter S, Meerpohl JJ. Further exploration of dissemination bias in qualitative research required to facilitate assessment within qualitative evidence syntheses. J.Clin.Epidemiol. Epub 2017 Apr 19. PMID: 28433676.

OBJECTIVES:
To conceptualise and discuss dissemination bias in qualitative research.
RESULTS:
It is likely that the mechanisms leading to dissemination bias in quantitative research, including time lag, language, gray literature, and truncation bias also contribute to dissemination bias in qualitative research. These conceptual considerations have informed the development of a research agenda.
CONCLUSION:
Further exploration of dissemination bias in qualitative research is needed, including the extent of non-dissemination and related dissemination bias, and how to assess dissemination bias within qualitative evidence syntheses. We also need to consider the mechanisms through which dissemination bias in qualitative research could occur to explore approaches for reducing it.
Copyright © 2017 Elsevier Inc. All rights reserved.

DOI: https://doi.org/10.1016/j.jclinepi.2017.04.010.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28433676.

Baron R, Kennes LN, Elling C. Retrospective analyses versus RCTs: comparing like with like? J.Pain Res. 2017 Mar 31;10:783-6. PMID: 28435315.

[First paragraph, reference html links removed]

In their recent retrospective analysis assessing oxycodone/naloxone (OXN) vs. tapentadol (TAP) treatment for chronic low-back pain with a neuropathic component, Ueberall and Mueller-Schwefe1 compare their results to the findings of an earlier phase 3b/4 study.2 In our opinion, a proper comparison to the prospective, randomized, controlled, open-label study by Baron and colleagues is scientifically not appropriate. Although Ueberall and Mueller-Schwefe use the terms “prospective,” “randomly,” and “blinded” and refer to the PROBE design (prospective, randomized, open-label, blinded endpoint),3 their database study is retrospective, nonrandomized, and nonblinded, with the treatment choice left to the discretion of the physicians. In this context, the use of the term “intention-to-treat (ITT) population” is inappropriate because ITT is unambiguously defined as including all randomized subjects and thus inseparable from true randomization (ICH E9).4

Comment on:

Ueberall MA, Mueller-Schwefe GH. Efficacy and tolerability balance of oxycodone/naloxone and tapentadol in chronic low back pain with a neuropathic component: a blinded end point analysis of randomly selected routine data from 12-week prospective open-label observations. J Pain Res. 2016 Nov 11;9:1001-1020. eCollection 2016. PubMed PMID: 27881925; PubMed Central PMCID: PMC5115682.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5386600/pdf/jpr-10-783.pdf
DOI: https://doi.org/10.2147/JPR.S133369.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28435315.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5386600/.

Brøsen K, Funck-Brentano C, Kroemer HK, Pirmohamed M, Schwab M. Open letter on access to the BIA 10-2474 clinical trial data. Lancet. 2017 Jan 14;389(10065):156. PMID: 27955829.

[First paragraph]

In January 2016, the first-in-human (FIH) study of BIA 10-2474, an experimental fatty acid amide hydrolase inhibitor, led to the death of a healthy volunteer and to serious adverse events in another four of the five participants who received the drug in the fifth cohort of the multiple ascending dose part of the study. 84 participants had previously received the drug—48 during the single ascending dose part of the study, 12 during a food interaction part of the study, and 24 during four multiple ascending dose cohorts—apparently without severe adverse events, with the unexpected dramatic events occurring abruptly in the subsequent fifth cohort. This is the first time such a sudden appearance of extremely severe adverse events has been seen at such a late stage of an FIH study. These outcomes challenge the current methodology of early drug development and maximum tolerated dose finding in humans.

DOI: https://doi.org/10.1016/S0140-6736(16)32515-6.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=27955829.

Dadkhah M, Lagzian M, Borchardt G. Questionable papers in citation databases as an issue for literature review. J.Cell.Commun.Signal. 2017 Jun;11(2):181-5. PMID: 28215004.

In recent years, the academic world has been faced with much academic misconduct. Examples involve plagiarizing papers, manipulating data, and launching predatory or hijacked journals. The literature exposing these activities is growing exponentially, and so is the presentation of criteria or guidelines for counteracting the problem. Most of the research is focused on predatory or hijacked journal detection and providing suitable warnings. Overlooked in all this is the fact that papers published in these journals are questionable, but nevertheless show up in standard citation databases. We need some way to flag them so future researchers will be aware of their questionable nature and prevent their use in literature reviews.

DOI: https://doi.org/10.1007/s12079-016-0370-6.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28215004.

Dalton J, Booth A, Noyes J, Sowden AJ. Potential value of systematic reviews of qualitative evidence in informing user-centred health and social care: findings from a descriptive overview. J.Clin.Epidemiol. 2017 Apr 24. PMID: 28450254.

OBJECTIVES:
Systematic reviews of quantitative evidence are well established in health and social care. Systematic reviews of qualitative evidence are increasingly available, but volume, topics covered, methods used, and reporting quality are largely unknown. We provide a descriptive overview of systematic reviews of qualitative evidence assessing health and social care interventions included on the Database of Abstracts of Reviews of Effects (DARE).
STUDY DESIGN AND SETTING:
We searched DARE for reviews published between January 1, 2009, and December 31, 2014. We extracted data on review content and methods, summarized narratively, and explored patterns over time.
RESULTS:
We identified 145 systematic reviews conducted worldwide (64 in the UK). Interventions varied but largely covered treatment or service delivery in community and hospital settings. There were no discernible patterns over time. Critical appraisal of primary studies was conducted routinely. Most reviews were poorly reported.
CONCLUSION:
Potential exists to use systematic reviews of qualitative evidence when driving forward user-centered health and social care. We identify where more research is needed and propose ways to improve review methodology and reporting.
Copyright © 2017 Elsevier Inc. All rights reserved.

DOI: https://doi.org/10.1016/j.jclinepi.2017.04.020.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28450254.

Herrmann D, Sinnett P, Holmes J, Khan S, Koller C, Vassar M. Statistical controversies in clinical research: publication bias evaluations are not routinely conducted in clinical oncology systematic reviews. Ann.Oncol. 2017 May 1;28(5):931-7. PMID: 28039176.

Background:
Publication bias is an over-representation of statistically significant results in the published literature and may exaggerate summary effect estimates in oncology systematic reviews. Omitting non-significant results in systematic reviews may therefore affect clinical decision-making. We investigate ways that systematic reviewers attempted to limit publication bias during the search process as well as the statistical methods used to evaluate it. For a subset of reviews not reporting publication bias evaluations, we carried out our own assessments for publication bias to determine its likelihood among these reviews.
Design:
We examined systematic reviews from the top five highest impact factor oncology journals published between 2007 and 2015. Systematic reviews were screened for eligibility and qualifying reviews (n = 182) were coded for relevant publication bias study characteristics by two authors. A re-analysis of reviews not initially evaluating for publication bias was carried out using Egger's regression, trim-and-fill, and selection models.
Results:
Of the 182 systematic reviews, roughly half carried out a hand search to locate additional studies. Conference abstracts were the most commonly reported form of gray literature, followed by clinical trials registries. Fifty-one reviews reported publication bias evaluations. The most common method was the funnel plot (80%, 41/51) followed by Egger's regression (59%, 30/51) and Begg's test (43%, 22/51). Our publication bias evaluations on non-reporting reviews suggest that the degree of publication bias depends on the method employed.
Conclusion:
Our study shows publication bias assessments are not frequently used in oncology systematic reviews. Furthermore, evidence of publication bias was found in a subset of non-reporting reviews. Systematic reviewers in oncology are encouraged to conduct such analyses when appropriate and to employ more robust methods for both mitigating and evaluating publication bias.

DOI: https://doi.org/10.1093/annonc/mdw691.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28039176.
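
The abstract above names Egger's regression as the most common statistical test after the funnel plot. The test regresses each study's standardized effect (effect divided by its standard error) on its precision (one over the standard error); an intercept far from zero indicates funnel-plot asymmetry, a possible sign of publication bias. A minimal sketch, using invented effect sizes rather than any data from the review:

```python
def egger_intercept(effects, std_errors):
    """Intercept of Egger's regression: standardized effect ~ precision.

    An intercept well away from zero suggests funnel-plot asymmetry,
    one possible signal of publication bias.
    """
    y = [e / s for e, s in zip(effects, std_errors)]  # standardized effects
    x = [1 / s for s in std_errors]                   # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx  # ordinary least-squares intercept

# Symmetric toy data: every study estimates the same effect, so the
# regression line passes through the origin and the intercept is ~0.
print(egger_intercept([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.3, 0.4]))

# Asymmetric toy data: smaller studies (larger SEs) report larger
# effects, pulling the intercept away from zero.
print(egger_intercept([0.9, 0.7, 0.5, 0.4], [0.4, 0.3, 0.2, 0.1]))
```

A full implementation would also report a p-value for the intercept (a t-test on the regression coefficient), which is how the test is applied in meta-analysis software.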

Rohwer A, Pfadenhauer L, Burns J, Brereton L, Gerhardus A, Booth A, Oortwijn W, Rehfuess E. Series: Clinical Epidemiology in South Africa. Paper 3: Logic models help make sense of complexity in systematic reviews and health technology assessments. J.Clin.Epidemiol. 2017 Mar;83:37-47. PMID: 27498377.

OBJECTIVE:
To describe the development and application of logic model templates for systematic reviews and health technology assessments (HTAs) of complex interventions.
STUDY DESIGN AND SETTING:
This study demonstrates the development of a method to conceptualize complexity and make underlying assumptions transparent. Examples from systematic reviews with specific relevance to Sub-Saharan Africa (SSA) and other low- and middle-income countries (LMICs) illustrate its usefulness.
RESULTS:
Two distinct templates are presented: the system-based logic model, describing the system in which the interaction between participants, intervention, and context takes place; and the process-orientated logic model, which displays the processes and causal pathways that lead from the intervention to multiple outcomes.
CONCLUSION:
Logic models can help authors of systematic reviews and HTAs to explicitly address and make sense of complexity, adding value by achieving a better understanding of the interactions between the intervention, its implementation, and its multiple outcomes among a given population and context. They thus have the potential to help build systematic review capacity in SSA and other LMICs: at an individual level, by equipping authors with a tool that facilitates the review process; and at a system level, by improving communication between producers and potential users of research evidence.
Copyright © 2016 Elsevier Inc. All rights reserved.

DOI: https://doi.org/10.1016/j.jclinepi.2016.06.012.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=27498377.

Sing DC, Metz LN, Dudli S. Machine Learning-Based Classification of 38 Years of Spine-Related Literature Into 100 Research Topics. Spine. 2017 Jun 1;42(11):863-70. PMID: 28125523.

STUDY DESIGN:
Retrospective review.
OBJECTIVE:
To identify the top 100 spine research topics.
SUMMARY OF BACKGROUND DATA:
Recent advances in "machine learning," or computers learning without explicit instructions, have yielded broad technological advances. Topic modeling algorithms can be applied to large volumes of text to discover quantifiable themes and trends.
METHODS:
Abstracts were extracted from the National Library of Medicine PubMed database from five prominent peer-reviewed spine journals (European Spine Journal [ESJ], The Spine Journal [SpineJ], Spine, Journal of Spinal Disorders and Techniques [JSDT], Journal of Neurosurgery: Spine [JNS]). Each abstract was entered into a latent Dirichlet allocation model specified to discover 100 topics, resulting in each abstract being assigned a probability of belonging in a topic. Topics were named using the five most frequently appearing terms within that topic. Significance of increasing ("hot") or decreasing ("cold") topic popularity over time was evaluated with simple linear regression.
RESULTS:
From 1978 to 2015, 25,805 spine-related research articles were extracted and classified into 100 topics. Top two most published topics included "clinical, surgeons, guidelines, information, care" (n = 496 articles) and "pain, back, low, treatment, chronic" (424). Top two hot trends included "disc, cervical, replacement, level, arthroplasty" (+0.05%/yr, P < 0.001), and "minimally, invasive, approach, technique" (+0.05%/yr, P < 0.001). By journal, the most published topics were ESJ-"operative, surgery, postoperative, underwent, preoperative"; SpineJ-"clinical, surgeons, guidelines, information, care"; Spine-"pain, back, low, treatment, chronic"; JNS-"tumor, lesions, rare, present, diagnosis"; JSDT-"cervical, anterior, plate, fusion, ACDF."
CONCLUSION:
Topics discovered through latent Dirichlet allocation modeling represent unbiased meaningful themes relevant to spine care. Topic dynamics can provide historical context and direction for future research for aspiring investigators and trainees interested in spine careers. Please explore https://singdc.shinyapps.io/spinetopics.
LEVEL OF EVIDENCE:
N/A.

DOI: https://doi.org/10.1097/BRS.0000000000002079.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28125523.
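
The workflow in this abstract, fitting a latent Dirichlet allocation model and then naming each topic by its most frequent terms, can be illustrated with a minimal collapsed Gibbs sampler. The mini-corpus, topic count, and hyperparameters below are invented for illustration; they are not the authors' implementation, which processed 25,805 abstracts into 100 topics:

```python
import random
from collections import defaultdict

def lda_topics(docs, n_topics, n_iters=200, alpha=0.1, beta=0.01, seed=0):
    """Tiny collapsed-Gibbs LDA; names each topic by its top five terms."""
    rng = random.Random(seed)
    vocab_size = len({w for doc in docs for w in doc})
    doc_topic = [[0] * n_topics for _ in docs]                # tokens per (doc, topic)
    topic_word = [defaultdict(int) for _ in range(n_topics)]  # tokens per (topic, word)
    topic_total = [0] * n_topics
    assign = []                                               # current topic of each token
    for d, doc in enumerate(docs):
        row = []
        for w in doc:
            k = rng.randrange(n_topics)  # random initial assignment
            row.append(k)
            doc_topic[d][k] += 1; topic_word[k][w] += 1; topic_total[k] += 1
        assign.append(row)
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = assign[d][i]         # withdraw the token's current counts
                doc_topic[d][k] -= 1; topic_word[k][w] -= 1; topic_total[k] -= 1
                weights = [(doc_topic[d][t] + alpha)
                           * (topic_word[t][w] + beta) / (topic_total[t] + vocab_size * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]  # resample topic
                assign[d][i] = k
                doc_topic[d][k] += 1; topic_word[k][w] += 1; topic_total[k] += 1
    # Name topics by their most frequent terms, as the abstract describes.
    return [", ".join(sorted(tw, key=lambda w: (-tw[w], w))[:5]) for tw in topic_word]

# Hypothetical mini-corpus of tokenized "abstracts".
docs = [["pain", "back", "chronic", "pain", "treatment"],
        ["cervical", "fusion", "plate", "anterior", "cervical"],
        ["pain", "low", "back", "chronic"],
        ["fusion", "cervical", "anterior", "plate"]]
topic_names = lda_topics(docs, n_topics=2)
print(topic_names)
```

The "hot"/"cold" trend analysis in the abstract is then a simple linear regression of each topic's yearly share of publications against year.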

Singh S. How to Conduct and Interpret Systematic Reviews and Meta-Analyses. Clin.Transl.Gastroenterol. 2017 May 18;8(5):e93. PMID: 28518130.

Systematic reviews with or without meta-analyses serve a key purpose in critically and objectively synthesizing all available evidence regarding a focused clinical question and can inform clinical practice and clinical guidelines. Performing a rigorous systematic review is a multi-step process, which includes (a) identifying a well-defined, focused, clinically relevant question, (b) developing a detailed review protocol with strict inclusion and exclusion criteria, (c) systematic literature search of multiple databases and unpublished data, in consultation with a medical librarian, (d) meticulous study identification and (e) systematic data abstraction, by at least two sets of investigators independently, (f) risk of bias assessment, and (g) thoughtful quantitative synthesis through meta-analysis where relevant. Besides informing guidelines, credible systematic reviews and quality of evidence assessment can help identify key knowledge gaps for future studies.

DOI: https://doi.org/10.1038/ctg.2017.20.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28518130.

Stead WW. The Complex and Multifaceted Aspects of Conflicts of Interest. JAMA. 2017 May 2;317(17):1765-7. PMID: 28464123.
Comment on:

Conflict of Interest: Why Does It Matter? [JAMA. 2017]
Why There Are No "Potential" Conflicts of Interest. [JAMA. 2017]
Medical Journals, Publishers, and Conflict of Interest. [JAMA. 2017]
Payments to Physicians: Does the Amount of Money Make a Difference? [JAMA. 2017]
Conflict of Interest in Practice Guidelines Panels. [JAMA. 2017]
How Should Journals Handle the Conflict of Interest of Their Editors?: Who Watches the "Watchers"? [JAMA. 2017]
Role of Leaders in Fostering Meaningful Collaborations Between Academic Medical Centers and Industry While Also Managing Individual and Institutional Conflicts of Interest. [JAMA. 2017]
Conflicts of Interest and Professional Medical Associations: Progress and Remaining Challenges. [JAMA. 2017]
Funding, Institutional Conflicts of Interest, and Schools of Public Health: Realities and Solutions. [JAMA. 2017]
Teaching Medical Students About Conflicts of Interest. [JAMA. 2017]
Financial Conflicts of Interest in Continuing Medical Education: Implications and Accountability. [JAMA. 2017]
Conflict of Interest and Legal Issues for Investigators and Authors. [JAMA. 2017]
Addressing Bias and Conflict of Interest Among Biomedical Researchers. [JAMA. 2017]
Strategies for Addressing a Broader Definition of Conflicts of Interest. [JAMA. 2017]
Conflict of Interest and the Role of the Food Industry in Nutrition Research. [JAMA. 2017]
Managing Conflicts of Interest in Industry-Sponsored Clinical Research: More Physician Engagement Is Required. [JAMA. 2017]
Business Model-Related Conflict of Interests in Medicine: Problems and Potential Solutions. [JAMA. 2017]
What Do Patients Think About Physicians' Conflicts of Interest?: Watching Transparency Evolve. [JAMA. 2017]
Challenges and Opportunities in Disclosing Financial Interests to Patients. [JAMA. 2017]
Public Disclosure of Payments to Physicians From Industry. [JAMA. 2017]
Physicians, Industry Payments for Food and Beverages, and Drug Prescribing. [JAMA. 2017]
Conflict of Interest and the Integrity of the Medical Profession. [JAMA. 2017]

DOI: https://doi.org/10.1001/jama.2017.3435.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28464123.