Pardo-Hernandez H, Alonso-Coello P. Patient-important outcomes in decision-making: A point of no return. J.Clin.Epidemiol. Epub 2017 May 18. PMID: 28529183.

[First paragraph, reference html links removed]

The Patient-Centered Outcomes Research Institute (PCORI) defines patient-centred research as the assessment of outcomes that are important to patients, comparing individual-level patient treatment options or system-level care options [1, 2]. Consideration of patient-important outcomes in clinical research is crucial to foster health care focused not only on findings that are clinically pertinent but also on findings that are relevant to the patient [3].

DOI: https://doi.org/10.1016/j.jclinepi.2017.05.014.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28529183.

Stansfield C, O'Mara-Eves A, Thomas J. Text mining for search term development in systematic reviewing: A discussion of some methods and challenges. Res.Synth.Methods. Epub 2017 Jun 29. PMID: 28660680.

Using text mining to aid the development of database search strings for topics described by diverse terminology has potential benefits for systematic reviews; however, methods and tools for accomplishing this are poorly covered in the research methods literature. We briefly review the literature on applications of text mining for search term development for systematic reviewing. We found that the tools can be used in 5 overarching ways: improving the precision of searches; identifying search terms to improve search sensitivity; aiding the translation of search strategies across databases; searching and screening within an integrated system; and developing objectively derived search strategies. Using a case study and selected examples, we then reflect on the utility of certain technologies (term frequency-inverse document frequency and Termine, term frequency, and clustering) in improving the precision and sensitivity of searches. Challenges in using these tools are discussed. The utility of these tools is influenced by the different capabilities of the tools, the way the tools are used, and the text that is analysed. Increased awareness of how the tools perform facilitates the further development of methods for their use in systematic reviews.

DOI: http://dx.doi.org/10.1002/jrsm.1250.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28660680.
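
The term-frequency ideas surveyed above can be illustrated with a minimal Python sketch (an illustration under stated assumptions, not one of the tools evaluated in the article): given a handful of records already known to be relevant, terms are ranked by their summed TF-IDF weight, and the highest-ranking terms become candidates for the search strategy. It assumes scikit-learn is installed; the example titles are hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer

# Records already judged relevant to the review topic (hypothetical examples).
known_relevant = [
    "Home-based exercise programmes to prevent falls in older adults.",
    "Community falls prevention interventions for elderly people.",
    "Balance training reduces fall risk in the elderly.",
]

vectoriser = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectoriser.fit_transform(known_relevant)

# Rank terms by summed TF-IDF weight across the relevant records; the top
# terms are candidate additions to the database search string.
scores = tfidf.sum(axis=0).A1
terms = vectoriser.get_feature_names_out()
for term, score in sorted(zip(terms, scores), key=lambda pair: -pair[1])[:10]:
    print(f"{term}\t{score:.3f}")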

Shenkin SD, Harrison JK, Wilkinson T, Dodds RM, Ioannidis JPA. Systematic reviews: guidance relevant for studies of older people. Age Ageing. Epub 2017 Jun 24. PMID: 28655142.

Systematic reviews and meta-analyses are increasingly common. This article aims to provide guidance for people conducting systematic reviews relevant to the healthcare of older people. An awareness of these issues will also help people reading systematic reviews to determine whether the results will influence their clinical practice. It is essential that systematic reviews are performed by a team which includes the required technical and clinical expertise. Those performing reviews for the first time should ensure they have appropriate training and support. They must be planned and performed in a transparent and methodologically robust way: guidelines are available. The protocol should be written, and if possible published, before starting the review. Geriatricians will be interested in a table of baseline characteristics, which will help to determine if the studied samples or populations are similar to their patients. Reviews of studies of older people should consider how they will manage issues such as different age cut-offs; non-specific presentations; multiple predictors and outcomes; potential biases and confounders. Systematic reviews and meta-analyses may provide evidence to improve older people's care, or determine where new evidence is required. Newer methodologies, such as meta-analyses of individual level data, network meta-analyses and umbrella reviews, and realist synthesis, may improve the reliability and clinical utility of systematic reviews.

DOI: http://dx.doi.org/10.1093/ageing/afx105.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28655142.

Kontonatsios G, Brockmeier AJ, Przybyła P, McNaught J, Mu T, Goulermas JY, Ananiadou S. A semi-supervised approach using label propagation to support citation screening. J.Biomed.Inform. Epub 2017 Jun 22. PMID: 28648605.

Citation screening, an integral process within systematic reviews that identifies citations relevant to the underlying research question, is a time-consuming and resource-intensive task. During the screening task, analysts manually assign a label to each citation, to designate whether a citation is eligible for inclusion in the review. Recently, several studies have explored the use of active learning in text classification to reduce the human workload involved in the screening task. However, existing approaches require a significant amount of manually labelled citations for the text classification to achieve a robust performance. In this paper, we propose a semi-supervised method that identifies relevant citations as early as possible in the screening process by exploiting the pairwise similarities between labelled and unlabelled citations to improve the classification performance without additional manual labelling effort. Our approach is based on the hypothesis that similar citations share the same label (e.g., if one citation should be included, then other similar citations should be included also). To calculate the similarity between labelled and unlabelled citations we investigate two different feature spaces, namely a bag-of-words and a spectral embedding based on the bag-of-words. The semi-supervised method propagates the classification codes of manually labelled citations to neighbouring unlabelled citations in the feature space. The automatically labelled citations are combined with the manually labelled citations to form an augmented training set. For evaluation purposes, we apply our method to reviews from the clinical and public health domains. The results show that our semi-supervised method with label propagation achieves statistically significant improvements over two state-of-the-art active learning approaches across both clinical and public health reviews.

FREE FULL TEXT: http://www.sciencedirect.com/science/article/pii/S1532046417301454
DOI: http://dx.doi.org/10.1016/j.jbi.2017.06.018.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28648605.
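
The label propagation idea described in the abstract above can be sketched, under stated assumptions, with scikit-learn's generic LabelSpreading estimator over a bag-of-words feature space: include/exclude labels assigned by reviewers spread to similar, as-yet-unlabelled citations. This is a toy illustration of the general technique, not the authors' method or data; the citation titles and labels are hypothetical.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

citations = [
    "Statin therapy for primary prevention of cardiovascular disease",
    "Effect of statins on low density lipoprotein cholesterol",
    "Cognitive behavioural therapy for insomnia in adolescents",
    "Lipid lowering drugs and cardiovascular outcomes",
    "Sleep interventions for teenagers with insomnia",
]
labels = np.array([1, 1, 0, -1, -1])  # 1 = include, 0 = exclude, -1 = not yet screened

# Bag-of-words (TF-IDF) feature space; similar citations sit close together.
features = TfidfVectorizer(stop_words="english").fit_transform(citations).toarray()

# Propagate the manually assigned labels to neighbouring unlabelled citations.
model = LabelSpreading(kernel="knn", n_neighbors=2)
model.fit(features, labels)
print(model.transduction_)  # include/exclude decisions for every citation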

Mortensen ML, Adam GP, Trikalinos TA, Kraska T, Wallace BC. An exploration of crowdsourcing citation screening for systematic reviews. Res.Synth.Methods. Epub 2017 Jul 4. PMID: 28677322.

Systematic reviews are increasingly used to inform health care decisions, but are expensive to produce. We explore the use of crowdsourcing (distributing tasks to untrained workers via the web) to reduce the cost of screening citations. We used Amazon Mechanical Turk as our platform and 4 previously conducted systematic reviews as examples. For each citation, workers answered 4 or 5 questions that were equivalent to the eligibility criteria. We aggregated responses from multiple workers into an overall decision to include or exclude the citation using 1 of 9 algorithms and compared the performance of these algorithms to the corresponding decisions of trained experts. The most inclusive algorithm (designating a citation as relevant if any worker did) identified 95% to 99% of the citations that were ultimately included in the reviews while excluding 68% to 82% of irrelevant citations. Other algorithms increased the fraction of irrelevant articles excluded at some cost to the inclusion of relevant studies. Crowdworkers completed screening in 4 to 17 days, costing $460 to $2220, a cost reduction of up to 88% compared to trained experts. Crowdsourcing may represent a useful approach to reducing the cost of identifying literature for systematic reviews.

DOI: http://dx.doi.org/10.1002/jrsm.1252.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28677322.
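
The "most inclusive" aggregation rule reported above (keep a citation if any worker would include it) is simple enough to show as a short sketch; the worker responses here are hypothetical and the snippet is only an illustration of the rule, not the authors' pipeline.

def aggregate_most_inclusive(votes):
    """Keep a citation if any crowdworker judged it potentially relevant."""
    return {citation_id: any(worker_votes) for citation_id, worker_votes in votes.items()}

worker_votes = {
    "PMID:11111111": [True, False, False],   # one include vote -> keep for full-text review
    "PMID:22222222": [False, False, False],  # unanimous exclude -> drop
    "PMID:33333333": [True, True, False],
}

print(aggregate_most_inclusive(worker_votes))
# {'PMID:11111111': True, 'PMID:22222222': False, 'PMID:33333333': True}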

Dalton J, Booth A, Noyes J, Sowden AJ. Potential value of systematic reviews of qualitative evidence in informing user-centered health and social care: findings from a descriptive overview. J.Clin.Epidemiol. Epub 2017 Apr 24. PMID: 28450254.

OBJECTIVES:
Systematic reviews of quantitative evidence are well established in health and social care. Systematic reviews of qualitative evidence are increasingly available, but volume, topics covered, methods used, and reporting quality are largely unknown. We provide a descriptive overview of systematic reviews of qualitative evidence assessing health and social care interventions included on the Database of Abstracts of Reviews of Effects (DARE).
STUDY DESIGN AND SETTING:
We searched DARE for reviews published between January 1, 2009, and December 31, 2014. We extracted data on review content and methods, summarized narratively, and explored patterns over time.
RESULTS:
We identified 145 systematic reviews conducted worldwide (64 in the UK). Interventions varied but largely covered treatment or service delivery in community and hospital settings. There were no discernible patterns over time. Critical appraisal of primary studies was conducted routinely. Most reviews were poorly reported.
CONCLUSION:
Potential exists to use systematic reviews of qualitative evidence when driving forward user-centered health and social care. We identify where more research is needed and propose ways to improve review methodology and reporting.

DOI: http://dx.doi.org/10.1016/j.jclinepi.2017.04.020.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28450254.

Al-Jundi A, Sakka S. Critical Appraisal of Clinical Research. J.Clin.Diagn.Res. 2017 May;11(5):JE01-5. PMID: 28658805.

Evidence-based practice is the integration of individual clinical expertise with the best available external clinical evidence from systematic research and patients' values and expectations into the decision making process for patient care. It is a fundamental skill to be able to identify and appraise the best available evidence in order to integrate it with your own clinical experience and patients' values. The aim of this article is to provide a robust and simple process for assessing the credibility of articles and their value to your clinical practice.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5483707/pdf/jcdr-11-JE01.pdf
DOI: http://dx.doi.org/10.7860/JCDR/2017/26047.9942.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28658805.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5483707.

Bramer W, Bain P. Updating search strategies for systematic reviews using EndNote. J.Med.Libr.Assoc. 2017 Jul;105(3):285-9. PMID: 28670219.

[First paragraph, reference html links removed]

Performing, writing, and publishing a systematic review take a long time. In a cohort of journal-published systematic reviews, Cochrane reviews, and health technology assessment reports, the median time lag between the stated last search date and publication was 61 weeks (interquartile range, 33–87 weeks) [1]. In the same cohort of reviews, 7% were out of date at the time of publication [2]. More recently, an examination of 182 systematic reviews performed at Erasmus Medical Centre showed that the median time between the first search and the appearance of the resulting review in PubMed was 89 weeks (interquartile range, 63–126 weeks).

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5490709/pdf/jmla-105-285.pdf
DOI: http://dx.doi.org/10.5195/jmla.2017.183.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28670219.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5490709.

Dechartres A, Trinquart L, Atal I, Moher D, Dickersin K, Boutron I, Perrodeau E, Altman DG, Ravaud P. Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews: research on research study. BMJ. 2017 Jun 8;357:j2490. PMID: 28596181.

Objective
To examine how poor reporting and inadequate methods for key methodological features in randomised controlled trials (RCTs) have changed over the past three decades.
Design
Mapping of trials included in Cochrane reviews.
Data sources
Data from RCTs included in all Cochrane reviews published between March 2011 and September 2014 reporting an evaluation of the Cochrane risk of bias items: sequence generation, allocation concealment, blinding, and incomplete outcome data.
Data extraction
For each RCT, we extracted consensus on risk of bias made by the review authors and identified the primary reference to extract publication year and journal. We matched journal names with Journal Citation Reports to get 2014 impact factors.
Main outcome measures
We considered the proportions of trials rated by review authors at unclear and high risk of bias as surrogates for poor reporting and inadequate methods, respectively.
Results
We analysed 20 920 RCTs (from 2001 reviews) published in 3136 journals. The proportion of trials with unclear risk of bias was 48.7% for sequence generation and 57.5% for allocation concealment; the proportion of those with high risk of bias was 4.0% and 7.2%, respectively. For blinding and incomplete outcome data, 30.6% and 24.7% of trials were at unclear risk and 33.1% and 17.1% were at high risk, respectively. Higher journal impact factor was associated with a lower proportion of trials at unclear or high risk of bias. The proportion of trials at unclear risk of bias decreased over time, especially for sequence generation, which fell from 69.1% in 1986-1990 to 31.2% in 2011-2014, and for allocation concealment (70.1% to 44.6%). After excluding trials at unclear risk of bias, use of inadequate methods also decreased over time: from 14.8% to 4.6% for sequence generation and from 32.7% to 11.6% for allocation concealment.
Conclusions
Poor reporting and inadequate methods have decreased over time, especially for sequence generation and allocation concealment. But more could be done, especially in lower impact factor journals.

FREE FULL TEXT: http://www.bmj.com/content/357/bmj.j2490.long
DOI: http://dx.doi.org/10.1136/bmj.j2490.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28596181.

Griffiths PG, Taylor RH, Henderson LM, Barrett BT. Letter to the Editor concerning "A systematic review of controlled trials on visual stress using intuitive overlays or colorimeter". J.Optom. 2017 Jul-Sep;10(3):199-200. PMID: 28063870.

[First paragraph, reference html links removed]

We read with interest the review written by Evans and Allen, and published in the Journal of Optometry, in July 2016 [1]. Systematic reviews are considered the ‘gold-standard’ form of evidence for assessing the effectiveness of therapeutic interventions. A systematic review comprises a focussed question, a comprehensive search strategy to identify all potentially relevant studies, predefined selection criteria to minimise bias from ‘cherry-picking’ studies and an assessment of the risk of bias (RoB) of individual studies in a way that can be evaluated and reproduced. Because studies at high RoB often overestimate treatment effects [2], the aim is to either exclude studies at high RoB, or at least prioritise those studies at the lowest RoB. For this reason the RoB table is the key feature of any systematic review because it needs to inform the subsequent discussion.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5484786/pdf/main.pdf
DOI: https://doi.org/10.1016/j.optom.2016.11.004.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28063870.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5484786/.

Heneghan C, Mahtani KR, Goldacre B, Godlee F, Macdonald H, Jarvies D. Evidence based medicine manifesto for better healthcare. BMJ. 2017 Jun 20;357:j2973. PMID: 28634227.

[First paragraph]

A response to systematic bias, wastage, error, and fraud in research underpinning patient care.

Informed decision making requires clinicians and patients to identify and integrate relevant evidence. But with the questionable integrity of much of today’s evidence, the lack of research answering questions that matter to patients, and the lack of evidence to inform shared decision making, how are they expected to do this?

DOI: http://dx.doi.org/10.1136/bmj.j2973.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28634227.

Jonsson B, Martinalbo J, Pignatti F. European Medicines Agency Perspective on Oncology Study Design for Marketing Authorization and Beyond. Clin.Pharmacol.Ther. 2017 May;101(5):577-9. PMID: 28073148.

In the development of highly active anticancer drugs, the European situation may be viewed as paradoxical. Limited data may support marketing authorization, but may be insufficient for the health economic appraisal needed for reimbursement and market uptake. To achieve this, conventional confirmatory studies may be needed. For products of special interest, studies aimed at optimizing cost-effectiveness may be warranted. Efficient designs of studies to meet these objectives constitute challenges to all stakeholders.

DOI: http://dx.doi.org/10.1002/cpt.612.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28073148.

Pussegoda K, Turner L, Garritty C, Mayhew A, Skidmore B, Stevens A, Boutron I, Sarkis-Onofre R, Bjerre LM, Hróbjartsson A, et al. Identifying approaches for assessing methodological and reporting quality of systematic reviews: a descriptive study. Syst.Rev. 2017 Jun 19;6(1):117. PMID: 28629396.

BACKGROUND:
The methodological quality and completeness of reporting of the systematic reviews (SRs) is fundamental to optimal implementation of evidence-based health care and the reduction of research waste. Methods exist to appraise SRs, yet little is known about how they are used in SRs or where there are potential gaps in research best-practice guidance materials. The aims of this study are to identify reports assessing the methodological quality (MQ) and/or reporting quality (RQ) of a cohort of SRs and to assess their number, general characteristics, and approaches to 'quality' assessment over time.
METHODS:
The Cochrane Library, MEDLINE®, and EMBASE® were searched from January 1990 to October 16, 2014, for reports assessing MQ and/or RQ of SRs. Title, abstract, and full-text screening of all reports were conducted independently by two reviewers. Reports assessing the MQ and/or RQ of a cohort of ten or more SRs of interventions were included. All results are reported as frequencies and percentages of reports.
RESULTS:
Of 20,765 unique records retrieved, 1189 were assessed at full-text review, of which 76 reports were included. Eight previously published approaches to assessing MQ or reporting guidelines used as proxy to assess RQ were used in 80% (61/76) of identified reports. These included two reporting guidelines (PRISMA and QUOROM), five quality assessment tools (AMSTAR, R-AMSTAR, OQAQ, Mulrow, Sacks), and GRADE criteria. The remaining 24% (18/76) of reports developed their own criteria. PRISMA, OQAQ, and AMSTAR were the most commonly used published tools to assess MQ or RQ. In conjunction with other approaches, published tools were used in 29% (22/76) of reports, with 36% (8/22) assessing adherence to both PRISMA and AMSTAR criteria and 26% (6/22) using QUOROM and OQAQ.
CONCLUSIONS:
The methods used to assess quality of SRs are diverse, and none has become universally accepted. The most commonly used quality assessment tools are AMSTAR, OQAQ, and PRISMA. As new tools and guidelines are developed to improve both the MQ and RQ of SRs, authors of methodological studies are encouraged to put thoughtful consideration into the use of appropriate tools to assess quality and reporting.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5477124/pdf/13643_2017_Article_507.pdf
DOI: http://dx.doi.org/10.1186/s13643-017-0507-6.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28629396.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5477124/.

Richardson WS. The practice of evidence-based medicine involves the care of whole persons. J.Clin.Epidemiol. 2017 Apr;84:18-21. PMID: 28532613.

In this issue of the Journal, Dr. Fava posits that evidence-based medicine (EBM) was bound to fail. I share some of the concerns he expresses, yet I see more reasons for optimism. Having been on rounds with both Drs. Engel and Sackett, I reckon they would have agreed more than they disagreed. Their central teaching was the compassionate and well-informed care of sick persons. The model that emerged from these rounds was that patient care could be both person-centered and evidence-based, that clinical judgment was essential to both, and the decisions could and should be shared. Both clinicians and patients can bring knowledge from several sources into the shared decision making process in the clinical encounter, including evidence from clinical care research. I thank Dr. Fava for expressing legitimate doubts and providing useful criticism, yet I am cautiously optimistic that the model of EBM described here is robust enough to meet the challenges and is not doomed to fail.

DOI: https://doi.org/10.1016/j.jclinepi.2017.02.002.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28532613.

Stone SP, Wilcox M, Hawkey P. Is AGREE II a counsel of perfection? A letter commenting on Lytvyn et al. Infect.Control Hosp.Epidemiol. 2017 May;38(5):636-8. PMID: 28367786.

[First paragraph]

To the Editor—We read the systematic survey (review) of Clostridium difficile (CD) guidelines (August 2016) with interest. We suggest that Lytvyn et al are proposing a counsel of perfection, ignoring the realities of producing practical guidelines to address rising infection levels. In particular, we question their data extraction from the UK guidelines and their views that (1) a systematic review is a pre-requisite of guideline writing; (2) relatively weak evidence should not result in strong recommendations; (3) ecological studies are grade 5 evidence; and (4) probiotics have the highest level of evidence for any CD prevention intervention.

DOI: http://dx.doi.org/10.1017/ice.2017.5.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28367786.

Toth PP, Stevens W, Chou JW. Why published studies of the cost-effectiveness of PCSK-9 inhibitors yielded such markedly different results. J.Med.Econ. 2017 Jul;20(7):749-51. PMID: 28471246.

[First paragraph, reference html links removed]

Cost-effectiveness (CE) models are being developed and employed with increasing frequency to help determine the relative value of healthcare treatments and services, and to inform efforts to improve care value. Often, different published models yield a wide range of CE estimates for the same technology or service. For example, six CE analyses of PCSK9 inhibitors published between November 2015 and January 2017 [1–6], which included patients with varying risk profiles, have yielded widely varying estimates of the CE of PCSK9 inhibitors, ranging from $120,000 to $350,000 per quality-adjusted life year (QALY).

DOI: http://dx.doi.org/10.1080/13696998.2017.1327440.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28471246.
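
The cost-per-QALY metric behind the estimates quoted above is an incremental cost-effectiveness ratio, and a toy calculation shows why modelling populations with different baseline risk can move the figure so much: the same incremental cost divided by a smaller QALY gain yields a much higher ratio. All numbers below are hypothetical and are not taken from the cited analyses.

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Same incremental cost; the modelled QALY gain depends on baseline risk.
high_risk = icer(cost_new=130_000, cost_old=20_000, qaly_new=9.0, qaly_old=8.2)   # gain of 0.8 QALY
lower_risk = icer(cost_new=130_000, cost_old=20_000, qaly_new=9.0, qaly_old=8.7)  # gain of 0.3 QALY

print(f"Higher-risk population: ${high_risk:,.0f} per QALY")   # $137,500
print(f"Lower-risk population:  ${lower_risk:,.0f} per QALY")  # $366,667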

Townsend WA, Anderson PF, Ginier EC, MacEachern MP, Saylor KM, Shipman BL, Smith JE. A competency framework for librarians involved in systematic reviews. J.Med.Libr.Assoc. 2017 Jul;105(3):268-75. PMID: 28670216.

OBJECTIVE:
The project identified a set of core competencies for librarians who are involved in systematic reviews.
METHODS:
A team of seven informationists with broad systematic review experience examined existing systematic review standards, conducted a literature search, and used their own expertise to identify core competencies and skills that are necessary to undertake various roles in systematic review projects.
RESULTS:
The team identified a total of six competencies for librarian involvement in systematic reviews: "Systematic review foundations," "Process management and communication," "Research methodology," "Comprehensive searching," "Data management," and "Reporting." Within each competency are the associated skills and knowledge pieces (indicators). Competence can be measured using an adaptation of Miller's Pyramid for Clinical Assessment, either through self-assessment or identification of formal assessment instruments.
CONCLUSIONS:
The Systematic Review Competencies Framework provides a standards-based, flexible way for librarians and organizations to identify areas of competence and areas in need of development to build capacity for systematic review integration. The framework can be used to identify or develop appropriate assessment tools and to target skill development opportunities.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5490706/pdf/jmla-105-268.pdf
DOI: http://dx.doi.org/10.5195/jmla.2017.189.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28670216.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5490706/.

Yepes Nunez JJ, Zhang Y, Xie F, Alonso-Coello P, Selva A, Schunemann H, Guyatt G. 42 systematic reviews generated 23 items for assessing the risk of bias in Values and Preferences studies. J.Clin.Epidemiol. 2017 May;85:21-31. PMID: 28478082.

OBJECTIVES:
The objective of this study was to summarize the items and domains that authors of systematic reviews of patients' values and preferences studies have identified when considering the risk of bias (RoB) associated with primary studies.
STUDY DESIGN AND SETTING:
We conducted a systematic survey of systematic reviews of patients' values and preference studies. Our search included three databases (MEDLINE, EMBASE, and PsycINFO) from their inception to August 2015. We conducted duplicate data extraction, focusing on items that authors used to address RoB in the primary studies included in their reviews and the associated underlying domains, and summarized criteria in descriptive tables.
RESULTS:
We identified 42 eligible systematic reviews that addressed 23 items relevant to RoB and grouped the items into 7 domains: appropriate administration of instrument; instrument choice; instrument-described health state presentation; choice of participants group; description, analysis, and presentation of methods and results; patient understanding; and subgroup analysis.
CONCLUSION:
The items and domains identified provide insight into issues of RoB in patients' values and preference studies and establish the basis for an instrument to assess RoB in such studies.

DOI: http://dx.doi.org/10.1016/j.jclinepi.2017.04.019.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28478082.