Hollman C, Paulden M, Pechlivanoglou P, McCabe C. A Comparison of Four Software Programs for Implementing Decision Analytic Cost-Effectiveness Models. Pharmacoeconomics. Epub 2017 May 9. PMID: 28488257.

The volume and technical complexity of both academic and commercial research using decision analytic modelling has increased rapidly over the last two decades. The range of software programs used for their implementation has also increased, but it remains true that a small number of programs account for the vast majority of cost-effectiveness modelling work. We report a comparison of four software programs: TreeAge Pro, Microsoft Excel, R and MATLAB. Our focus is on software commonly used for building Markov models and decision trees to conduct cohort simulations, given their predominance in the published literature around cost-effectiveness modelling. Our comparison uses three qualitative criteria as proposed by Eddy et al.: "transparency and validation", "learning curve" and "capability". In addition, we introduce the quantitative criterion of processing speed. We also consider the cost of each program to academic users and commercial users. We rank the programs based on each of these criteria. We find that, whilst Microsoft Excel and TreeAge Pro are good programs for educational purposes and for producing the types of analyses typically required by health technology assessment agencies, the efficiency and transparency advantages of programming languages such as MATLAB and R become increasingly valuable when more complex analyses are required.
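For readers unfamiliar with the kind of model being benchmarked, the sketch below shows a minimal three-state Markov cohort simulation in R; the states, transition probabilities, costs and utilities are hypothetical and purely illustrative, not taken from the paper.

# Minimal three-state Markov cohort model (illustrative values only)
states <- c("Well", "Sick", "Dead")
P <- matrix(c(0.85, 0.10, 0.05,   # from Well
              0.00, 0.70, 0.30,   # from Sick
              0.00, 0.00, 1.00),  # from Dead (absorbing)
            nrow = 3, byrow = TRUE, dimnames = list(states, states))
n_cycles <- 40
trace <- matrix(NA, n_cycles + 1, 3, dimnames = list(NULL, states))
trace[1, ] <- c(1, 0, 0)                              # whole cohort starts in "Well"
for (t in seq_len(n_cycles)) trace[t + 1, ] <- trace[t, ] %*% P
costs     <- c(Well = 100,  Sick = 2500, Dead = 0)    # per-cycle costs
utilities <- c(Well = 0.90, Sick = 0.60, Dead = 0)    # per-cycle utilities
total_cost  <- sum(trace %*% costs)                   # undiscounted, for brevity
total_qalys <- sum(trace %*% utilities)

The same calculation is what a TreeAge tree, an Excel trace table or a MATLAB script would implement; the paper's speed and transparency comparisons concern models of this type run repeatedly, for example within probabilistic sensitivity analysis.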

DOI: https://doi.org/10.1007/s40273-017-0510-8.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28488257.

Toth PP, Stevens W, Chou JW. Why have published studies of the cost effectiveness of PCSK-9 inhibitors yielded such markedly different results? J.Med.Econ. Epub 2017 May 4. PMID: 28471246.

[First paragraph, reference html links removed]

Cost-effectiveness (CE) models are being developed and employed with increasing frequency to help determine the relative value of healthcare treatments and services, and to inform efforts to improve care value. Often, different published models yield a wide range of CE estimates for the same technology or service. For example, six CE analyses of PCSK9 inhibitors published between November 2015 and January 2017 (1-6), which included patients with varying risk profiles, have yielded widely varying estimates of the CE of PCSK9 inhibitors, ranging from $120,000 to $350,000 per quality-adjusted life-year (QALY).

DOI: https://doi.org/10.1080/13696998.2017.1327440.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28471246.

Bojke L, Manca A, Asaria M, Mahon R, Ren S, Palmer S. How to Appropriately Extrapolate Costs and Utilities in Cost-Effectiveness Analysis. Pharmacoeconomics. Epub 2017 May 3. PMID: 28470594.

Costs and utilities are key inputs into any cost-effectiveness analysis. Their estimates are typically derived from individual patient-level data collected as part of clinical studies, the follow-up duration of which is often too short to allow a robust quantification of the likely costs and benefits a technology will yield over the patient's entire lifetime. In the absence of long-term data, some form of temporal extrapolation (projecting short-term evidence over a longer time horizon) is required. Temporal extrapolation inevitably involves assumptions regarding the behaviour of the quantities of interest beyond the time horizon supported by the clinical evidence. Unfortunately, the implications for decisions made on the basis of evidence derived following this practice and the degree of uncertainty surrounding the validity of any assumptions made are often not fully appreciated. The issue is compounded by the absence of methodological guidance concerning the extrapolation of non-time-to-event outcomes such as costs and utilities. This paper considers current approaches to predict long-term costs and utilities, highlights some of the challenges with the existing methods, and provides recommendations for future applications. It finds that, typically, economic evaluation models employ a simplistic approach to temporal extrapolation of costs and utilities. For instance, their parameters (e.g. mean) are typically assumed to be homogeneous with respect to both time and patients' characteristics. Furthermore, costs and utilities have often been modelled to follow the dynamics of the associated time-to-event outcomes. However, cost and utility estimates may be more nuanced, and it is important to ensure extrapolation is carried out appropriately for these parameters.

DOI: https://doi.org/10.1007/s40273-017-0512-6.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28470594.

Dakin H, Gray A. Economic evaluation of factorial randomised controlled trials: challenges, methods and recommendations. Stat.Med. Epub 2017 May 3. PMID: 28470760.

Increasing numbers of economic evaluations are conducted alongside randomised controlled trials. Such studies include factorial trials, which randomise patients to different levels of two or more factors and can therefore evaluate the effect of multiple treatments alone and in combination. Factorial trials can provide increased statistical power or assess interactions between treatments, but raise additional challenges for trial-based economic evaluations: interactions may occur more commonly for costs and quality-adjusted life-years (QALYs) than for clinical endpoints; economic endpoints raise challenges for transformation and regression analysis; and both factors must be considered simultaneously to assess which treatment combination represents best value for money. This article aims to examine issues associated with factorial trials that include assessment of costs and/or cost-effectiveness, describe the methods that can be used to analyse such studies and make recommendations for health economists, statisticians and trialists. A hypothetical worked example is used to illustrate the challenges and demonstrate ways in which economic evaluations of factorial trials may be conducted, and how these methods affect the results and conclusions. Ignoring interactions introduces bias that could result in adopting a treatment that does not make best use of healthcare resources, while considering all interactions avoids bias but reduces statistical power. We also introduce the concept of the opportunity cost of ignoring interactions as a measure of the bias introduced by not taking account of all interactions. We conclude by offering recommendations for planning, analysing and reporting economic evaluations based on factorial trials, taking increased analysis costs into account. (c) 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
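As a hedged illustration of the interaction problem described above (entirely simulated data, not the paper's worked example), the R sketch below contrasts an "at the margins" cost regression that ignores the interaction with a full factorial model:

set.seed(1)
n <- 400
A <- rbinom(n, 1, 0.5)                    # factor A (0 = control, 1 = treatment)
B <- rbinom(n, 1, 0.5)                    # factor B (0 = control, 1 = treatment)
cost <- 1000 + 300 * A + 200 * B - 250 * A * B + rnorm(n, sd = 400)
fit_main <- lm(cost ~ A + B)              # interaction ignored ("at the margins")
fit_int  <- lm(cost ~ A * B)              # full factorial model
coef(fit_main)["A"]                       # one incremental cost of A, whatever B is
coef(fit_int)["A"]                        # incremental cost of A when B = 0
coef(fit_int)["A"] + coef(fit_int)["A:B"] # incremental cost of A when B = 1

When the true interaction is non-zero, the first model is biased for both combinations, whereas the second is unbiased but estimates the interaction term with less precision; the same trade-off applies to QALY regressions and hence to incremental net benefit.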

DOI: https://doi.org/10.1002/sim.7322.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28470760.

Xie X, Sinclair A, Dendukuri N. Evaluating the accuracy and economic value of a new test in the absence of a perfect reference test. Res.Synth.Methods. Epub 2017 May 23. PMID: 28544646.

Streptococcus pneumoniae (SP) pneumonia is often treated empirically as diagnosis is challenging because of the lack of a perfect test. Using BinaxNOW-SP, a urinary antigen test, as an add-on to standard cultures may not only increase diagnostic yield but also increase costs.
To estimate the sensitivity and specificity of BinaxNOW-SP and subsequently estimate the cost-effectiveness of adding BinaxNOW-SP to the diagnostic work-up.
We fit a Bayesian latent-class meta-analysis model to obtain estimates of BinaxNOW-SP accuracy that adjust for the imperfect accuracy of culture. Meta-analysis results were combined with information on prevalence of SP pneumonia to estimate the number of patients who are correctly classified under competing diagnostic strategies. Taking into consideration the cost of antibiotics, we determined the incremental cost of adding BinaxNOW-SP to the work-up per case correctly diagnosed.
The BinaxNOW-SP test had a pooled sensitivity of 0.74 (95% credible interval [CrI], 0.67-0.83) and a pooled specificity of 0.96 (95% CrI, 0.92-0.99). An overall increase in diagnostic accuracy of 6.2% due to the addition of BinaxNOW-SP corresponded to an incremental cost per case correctly classified of $582 Canadian dollars.
The methods we have described allow us to evaluate the accuracy and economic value of a new test in the absence of a perfect reference test using an evidence-based approach.
Copyright © 2017 John Wiley & Sons, Ltd.
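The headline figure of $582 per case correctly classified follows the usual incremental ratio logic; in our notation (a plausible reading rather than the authors' exact construction):

\[
\text{Incremental cost per correct diagnosis} \;=\;
\frac{\bar{C}_{\text{culture+BinaxNOW}} - \bar{C}_{\text{culture}}}
     {\Pr(\text{correct} \mid \text{culture+BinaxNOW}) - \Pr(\text{correct} \mid \text{culture})}
\]

with the denominator corresponding to the 6.2 percentage-point gain in diagnostic accuracy reported above, and the numerator covering the cost of the add-on test plus any resulting change in antibiotic costs.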

DOI: https://doi.org/10.1002/jrsm.1243.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28544646.

Kim DD, Wilkinson CL, Pope EF, Chambers JD, Cohen JT, Neumann PJ. The Influence of Time Horizon on Results of Cost-effectiveness Analyses. Expert Rev.Pharmacoecon Outcomes Res. Epub 2017 May 14. PMID: 28504026.

Debates persist on the appropriate time horizon from a payer's perspective and how the time horizon in cost-effectiveness analysis (CEA) influences the value assessment.
We systematically reviewed the Tufts Medical Center CEA Registry and identified US-based studies that used a payer perspective from 2005-2014. We classified the identified CEAs as short-term (time horizon ≤ 5 years) and long-term (> 5 years), and examined associations between study characteristics and the specified time horizon. We also developed case studies with selected interventions to further explore the relationship between time horizon and projected costs, benefits, and incremental cost-effectiveness ratios (ICERs).
Among 782 identified studies that met our inclusion criteria, 552 studies (71%) utilized a long-term time horizon while 198 studies (25%) used a short-term horizon. Among studies that employed multiple time horizons, the extension of the time horizon yielded more favorable ICERs in 19 cases and less favorable ICERs in 4 cases. Case studies showed the use of a longer time horizon also yielded more favorable ICERs.
The assumed time horizon in CEAs can substantially influence the value assessment of medical interventions. To capture all consequences, we encourage the use of time horizons that extend sufficiently into the future.

DOI: https://doi.org/10.1080/14737167.2017.1331432.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28504026.

Nord E. Beyond QALYs: Multi-criteria based estimation of maximum willingness to pay for health technologies. Eur.J.Health.Econ. Epub 2017 Mar 3. PMID: 28258399.

The QALY is a useful outcome measure in cost-effectiveness analysis. But in determining the overall value of and societal willingness to pay for health technologies, gains in quality of life and length of life are prima facie separate criteria that need not be put together in a single concept. A focus on costs per QALY can also be counterproductive. One reason is that the QALY does not capture well the value of interventions in patients with reduced potentials for health and thus different reference points. Another reason is a need to separate losses of length of life and losses of quality of life when it comes to judging the strength of moral claims on resources in patients of different ages. An alternative to the cost-per-QALY approach is outlined, consisting in the development of two bivariate value tables that may be used in combination to estimate maximum cost acceptance for given units of treatment (for instance, a surgical procedure or 1 year of medication) rather than for 'obtaining one QALY'. The approach is a follow-up of earlier work on 'cost value analysis.' It draws on work in the QALY field insofar as it uses health state values established in that field. But it does not use these values to weight life years and thus avoids devaluing gained life years in people with chronic illness or disability. Real tables of the kind proposed could be developed in deliberative processes among policy makers and serve as guidance for decision makers involved in health technology assessment and appraisal.

DOI: https://doi.org/10.1007/s10198-017-0882-x.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28258399.

Thom H, Jackson C, Welton N, Sharples L. Using Parameter Constraints to Choose State Structures in Cost-Effectiveness Modelling. Pharmacoeconomics. Epub 2017 Mar 24. PMID: 28342114.

This article addresses the choice of state structure in a cost-effectiveness multi-state model. Key model outputs, such as treatment recommendations and prioritisation of future research, may be sensitive to state structure choice. For example, it may be uncertain whether to consider similar disease severities or similar clinical events as the same state or as separate states. Standard statistical methods for comparing models require a common reference dataset but merging states in a model aggregates the data, rendering these methods invalid.
We propose a method that involves re-expressing a model with merged states as a model on the larger state space in which particular transition probabilities, costs and utilities are constrained to be equal between states. This produces a model that gives identical estimates of cost effectiveness to the model with merged states, while leaving the data unchanged. The comparison of state structures can be achieved by comparing maximised likelihoods or information criteria between constrained and unconstrained models. We can thus test whether the costs and/or health consequences for a patient in two states are the same, and hence if the states can be merged. We note that different structures can be used for rates, costs and utilities, as appropriate.
We illustrate our method with applications to two recent models evaluating the cost effectiveness of prescribing anti-depressant medications by depression severity and the cost effectiveness of diagnostic tests for coronary artery disease.
State structures in cost-effectiveness models can be compared using standard methods to compare constrained and unconstrained models.
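A minimal numerical illustration of the constrained-versus-unconstrained comparison (hypothetical transition counts, base R, no packages) might look as follows; the full method in the article also constrains costs and utilities and works with complete multi-state likelihoods:

# Hypothetical one-cycle transition counts out of two candidate-for-merging states
from_moderate <- c(moderate = 40, severe = 35, dead = 25)
from_severe   <- c(moderate = 30, severe = 45, dead = 25)
loglik <- function(counts, probs) sum(counts * log(probs))

# Unconstrained: each state keeps its own multinomial transition probabilities
p_mod <- from_moderate / sum(from_moderate)
p_sev <- from_severe   / sum(from_severe)
ll_unc <- loglik(from_moderate, p_mod) + loglik(from_severe, p_sev)        # 4 free parameters

# Constrained: both states share one set of transition probabilities
p_shared <- (from_moderate + from_severe) / sum(from_moderate + from_severe)
ll_con <- loglik(from_moderate, p_shared) + loglik(from_severe, p_shared)  # 2 free parameters

c(AIC_unconstrained = -2 * ll_unc + 2 * 4,
  AIC_constrained   = -2 * ll_con + 2 * 2)   # lower AIC favours that state structure

If the constrained (merged-state) model has the lower AIC, the data do not support keeping the two severities as separate states, which is the comparison the article formalises.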

DOI: https://doi.org/10.1007/s40273-017-0501-9.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28342114.

Dasbach EJ, Elbasha EH. Verification of Decision-Analytic Models for Health Economic Evaluations: An Overview. Pharmacoeconomics. Epub 2017 Apr 29. PMID: 28456972.

Decision-analytic models for cost-effectiveness analysis are developed in a variety of software packages where the accuracy of the computer code is seldom verified. Although modeling guidelines recommend using state-of-the-art quality assurance and control methods for software engineering to verify models, the fields of pharmacoeconomics and health technology assessment (HTA) have yet to establish and adopt guidance on how to verify health and economic models. The objective of this paper is to introduce to our field the variety of methods the software engineering field uses to verify that software performs as expected. We identify how many of these methods can be incorporated in the development process of decision-analytic models in order to reduce errors and increase transparency. Given the breadth of methods used in software engineering, we recommend a more in-depth initiative to be undertaken (e.g., by an ISPOR-SMDM Task Force) to define the best practices for model verification in our field and to accelerate adoption. Establishing a general guidance for verifying models will benefit the pharmacoeconomics and HTA communities by increasing accuracy of computer programming, transparency, accessibility, sharing, understandability, and trust of models.
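As a flavour of the software-engineering practices the authors have in mind, the sketch below shows the kind of automated assertions (base R, hypothetical object names) that can be re-run every time the model code changes:

# Basic sanity checks for a Markov-type cohort model; run after every code change
check_model <- function(P, trace, costs) {
  stopifnot(all(P >= 0 & P <= 1))                   # probabilities lie in [0, 1]
  stopifnot(all(abs(rowSums(P) - 1) < 1e-8))        # each row of the transition matrix sums to 1
  stopifnot(all(abs(rowSums(trace) - 1) < 1e-8))    # cohort mass conserved every cycle
  stopifnot(all(costs >= 0))                        # no negative unit costs
  invisible(TRUE)
}
# check_model(P, trace, costs)   # would normally be wrapped in a unit-testing framework

Checks of this kind are only one example of the verification methods the paper surveys from software engineering.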

DOI: https://doi.org/10.1007/s40273-017-0508-2.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28456972.

Heath A, Manolopoulou I, Baio G. A Review of Methods for Analysis of the Expected Value of Information. Med.Decis.Making. Epub 2017 Apr 1. PMID: 28410564.

In recent years, value-of-information analysis has become more widespread in health economic evaluations, specifically as a tool to guide further research and perform probabilistic sensitivity analysis. This is partly due to methodological advancements allowing for the fast computation of a typical summary known as the expected value of partial perfect information (EVPPI). A recent review discussed some approximation methods for calculating the EVPPI, but as the research has been active over the intervening years, that review does not discuss some key estimation methods. Therefore, this paper presents a comprehensive review of these new methods. We begin by providing the technical details of these computation methods. We then present two case studies in order to compare the estimation performance of these new methods. We conclude that a method based on nonparametric regression offers the best method for calculating the EVPPI in terms of accuracy, computational time, and ease of implementation. This means that the EVPPI can now be used practically in health economic evaluations, especially as all the methods are developed in parallel with R functions and a web app to aid practitioners.
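A minimal sketch of the regression-based idea (two strategies, one parameter, simulated probabilistic sensitivity analysis output; assumes the mgcv package) is shown below. It illustrates the general approach, not the authors' own code:

library(mgcv)
set.seed(42)
n_sim <- 5000
theta <- rnorm(n_sim, 0.70, 0.10)                          # parameter of interest
inb   <- 20000 * (theta - 0.65) + rnorm(n_sim, sd = 2000)  # incremental net benefit per PSA draw

fit   <- gam(inb ~ s(theta))          # nonparametric estimate of E[INB | theta]
g_hat <- fitted(fit)

# EVPPI for theta (two strategies): value of resolving theta before deciding
evppi <- mean(pmax(g_hat, 0)) - max(mean(inb), 0)
evppi

The two terms are the expected net benefit of deciding after learning theta and the net benefit of the best decision under current information; their difference is the EVPPI, with the nonparametric regression replacing the expensive inner loop of a nested Monte Carlo estimator.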

DOI: https://doi.org/10.1177/0272989X17697692.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28410564.

Burnham JM, Meta F, Lizzio V, Makhni EC, Bozic KJ. Technology assessment and cost-effectiveness in orthopedics: how to measure outcomes and deliver value in a constantly changing healthcare environment. Curr.Rev.Musculoskelet.Med. 2017 Jun;10(2):233-9. PMID: 28421386.

The purpose of this study is to review the basic concepts of healthcare value, patient outcome measurement, and cost-effectiveness analyses as they relate to the introduction of new surgical techniques and technologies in the field of orthopedic surgery.
An increased focus on financial stewardship in healthcare has resulted in a plethora of cost-effectiveness and patient outcome research. Recent research has made great progress in identifying orthopedic technologies that provide exceptional value and those that do not meet adequate standards for widespread adoption. As the pace of technological innovation advances in lockstep with an increased focus on value, orthopedic surgeons will need to have a working knowledge of value-based healthcare decision-making. Value-based healthcare and cost-effectiveness analyses can aid orthopedic surgeons in making ethical and fiscally responsible treatment choices for their patients.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5435638/pdf/12178_2017_Article_9407.pdf
DOI: https://doi.org/10.1007/s12178-017-9407-6.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28421386.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5435638/.

Charlton V, Littlejohns P, Kieslich K, Mitchell P, Rumbold B, Weale A, Wilson J, Rid A. Cost effective but unaffordable: an emerging challenge for health systems. BMJ. 2017 Mar 22;356:j1402. PMID: 28330879.

[First paragraph, reference html link removed]

With hospital wards overflowing and trusts in deficit, the introduction of cost effective but expensive new technologies places increasing strain on NHS finances. The National Institute for Health and Care Excellence (NICE) and NHS England plan to tackle this problem by delaying the introduction of interventions with a “high budget impact.” The change may deliver short term savings but is flawed.

DOI: https://dx.doi.org/10.1136/bmj.j1402.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28330879.

Franklin M, Davis S, Horspool M, Kua WS, Julious S. Economic Evaluations Alongside Efficient Study Designs Using Large Observational Datasets: the PLEASANT Trial Case Study. Pharmacoeconomics. 2017 May;35(5):561-73. PMID: 28110382.

Large observational datasets such as Clinical Practice Research Datalink (CPRD) provide opportunities to conduct clinical studies and economic evaluations with efficient designs.
Our objectives were to report the economic evaluation methodology for a cluster randomised controlled trial (RCT) of a UK NHS-delivered public health intervention for children with asthma that was evaluated using CPRD and describe the impact of this methodology on results.
CPRD identified eligible patients using predefined asthma diagnostic codes and captured 1-year pre- and post-intervention healthcare contacts (August 2012 to July 2014). Quality-adjusted life-years (QALYs) 4 months post-intervention were estimated by assigning utility values to exacerbation-related contacts; a systematic review identified these utility values because preference-based outcome measures were not collected. Bootstrapped costs were evaluated 12 months post-intervention, both with 1-year regression-based baseline adjustment (BA) and without BA (observed).
Of 12,179 patients recruited, 8190 (intervention 3641; control 4549) were evaluated in the primary analysis, which included patients who received the protocol-defined intervention and for whom CPRD data were available. The intervention's per-patient incremental QALY loss was 0.00017 (bias-corrected and accelerated 95% confidence intervals [BCa 95% CI] -0.00051 to 0.00018) and cost savings were £14.74 (observed; BCa 95% CI -75.86 to 45.19) or £36.07 (BA; BCa 95% CI -77.11 to 9.67), respectively. The probability of cost savings was much higher when accounting for BA versus observed costs due to baseline cost differences between trial arms (96.3 vs. 67.3%, respectively).
Economic evaluations using data from a large observational database without any primary data collection are feasible, informative and potentially efficient. Clinical Trials Registration Number: ISRCTN03000938.
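For readers interested in the baseline-adjustment step, a minimal sketch (simulated data and a plain percentile bootstrap rather than the BCa intervals used in the trial) is:

set.seed(7)
n <- 2000
arm       <- rbinom(n, 1, 0.45)                             # 1 = intervention, 0 = control
cost_pre  <- rgamma(n, shape = 2, scale = 150)              # costs in the pre-intervention year
cost_post <- 0.8 * cost_pre + 250 - 30 * arm + rnorm(n, sd = 100)
dat <- data.frame(arm, cost_pre, cost_post)

ba_diff <- function(d, idx) {
  # baseline-adjusted incremental cost: post-period costs regressed on arm and pre-period costs
  coef(lm(cost_post ~ arm + cost_pre, data = d[idx, ]))["arm"]
}
boot_est <- replicate(2000, ba_diff(dat, sample(nrow(dat), replace = TRUE)))
mean(boot_est)                              # baseline-adjusted incremental cost
quantile(boot_est, c(0.025, 0.975))         # percentile bootstrap interval

Adjusting for pre-period costs matters when, as in the trial above, the arms differ at baseline: the unadjusted ("observed") difference then mixes the intervention effect with that imbalance.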

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5385191/pdf/40273_2016_Article_484.pdf
DOI: https://doi.org/10.1007/s40273-016-0484-y.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28110382.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5385191/.

Kim DD, Basu A. New Metrics for Economic Evaluation in the Presence of Heterogeneity: Focusing on Evaluating Policy Alternatives Rather than Treatment Alternatives. Med.Decis.Making. 2017 Apr 01. PMID: 28441507.

Cost-effectiveness analysis (CEA) methods fail to acknowledge that where cost-effectiveness differs across subgroups, there may be differential adoption of technology. Also, current CEA methods are not amenable to incorporating the impact of policy alternatives that potentially influence the adoption behavior. Unless CEA methods are extended to allow for a comparison of policies rather than simply treatments, their usefulness to decision makers may be limited.
We conceptualize new metrics, which estimate the realized value of technology from policy alternatives, through introducing subgroup-specific adoption parameters into existing metrics, incremental cost-effectiveness ratios (ICERs) and Incremental Net Monetary Benefits (NMBs). We also provide the Loss with respect to Efficient Diffusion (LED) metrics, which link with existing value of information metrics but take a policy evaluation perspective. We illustrate these metrics using policies on treatment with combination therapy with a statin plus a fibrate v. statin monotherapy for patients with diabetes and mixed dyslipidemia.
Under the traditional approach, the population-level ICER of combination v. monotherapy was $46,000/QALY. However, after accounting for differential rates of adoption of the combination therapy (7.2% among males and 4.3% among females), the modified ICER was $41,733/QALY, due to the higher rate of adoption in the more cost-effective subgroup (male). The LED metrics showed that an education program to increase the uptake of combination therapy among males would provide the largest economic returns due to the significant underutilization of the combination therapy among males under the current policy.
This framework may have the potential to improve the decision-making process by producing metrics that are better aligned with the specific policy decisions under consideration for a specific technology.
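Although the precise definitions are given in the paper, the flavour of the adoption-adjusted metrics can be conveyed with a simple weighted form (our notation, a sketch rather than the authors' exact formula):

\[
\text{ICER}_{\text{policy}} \;=\;
\frac{\sum_{g} w_g\, a_g\, \Delta C_g}{\sum_{g} w_g\, a_g\, \Delta Q_g},
\]

where \(w_g\) is the population share of subgroup \(g\), \(a_g\) the adoption rate induced by the policy in that subgroup, and \(\Delta C_g\), \(\Delta Q_g\) the subgroup-specific incremental costs and QALYs. Because male patients adopt the combination therapy more often and are also the more cost-effective subgroup, weighting by adoption pulls the population ICER from $46,000/QALY towards the reported $41,733/QALY.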

DOI: https://doi.org/10.1177/0272989X17702379.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28441507.

Lopatina E, Donald F, DiCenso A, Martin-Misener R, Kilpatrick K, Bryant-Lukosius D, Carter N, Reid K, Marshall DA. Economic evaluation of nurse practitioner and clinical nurse specialist roles: A methodological review. Int.J.Nurs.Stud. 2017 May 4;72:71-82. PMID: 28500955.

Advanced practice nurses (e.g., nurse practitioners and clinical nurse specialists) have been introduced internationally to increase access to high quality care and to tackle increasing health care expenditures. While randomised controlled trials and systematic reviews have demonstrated the effectiveness of nurse practitioner and clinical nurse specialist roles, their cost-effectiveness has been challenged. The poor quality of economic evaluations of these roles to date raises the question of whether current economic evaluation guidelines are adequate when examining their cost-effectiveness.
To examine whether current guidelines for economic evaluation are appropriate for economic evaluations of nurse practitioner and clinical nurse specialist roles.
Our methodological review was informed by a qualitative synthesis of four sources of information: 1) narrative review of literature reviews and discussion papers on economic evaluation of advanced practice nursing roles; 2) quality assessment of economic evaluations of nurse practitioner and clinical nurse specialist roles alongside randomised controlled trials; 3) review of guidelines for economic evaluation; and, 4) input from an expert panel.
The narrative literature review revealed several challenges in economic evaluations of advanced practice nursing roles (e.g., complexity of the roles, variability in models and practice settings where the roles are implemented, and impact on outcomes that are difficult to measure). The quality assessment of economic evaluations of nurse practitioner and clinical nurse specialist roles alongside randomised controlled trials identified methodological limitations of these studies. When we applied the Guidelines for the Economic Evaluation of Health Technologies: Canada to the identified challenges and limitations, discussed those with experts and qualitatively synthesized all findings, we concluded that standard guidelines for economic evaluation are appropriate for economic evaluations of nurse practitioner and clinical nurse specialist roles and should be routinely followed. However, seven out of 15 current guideline sections (describing a decision problem, choosing type of economic evaluation, selecting comparators, determining the study perspective, estimating effectiveness, measuring and valuing health, and assessing resource use and costs) may require additional role-specific considerations to capture costs and effects of these roles.
Current guidelines for economic evaluation should form the foundation for economic evaluations of nurse practitioner and clinical nurse specialist roles. The proposed role-specific considerations, which clarify application of standard guidelines sections to economic evaluation of nurse practitioner and clinical nurse specialist roles, may strengthen the quality and comprehensiveness of future economic evaluations of these roles.
Copyright © 2017 Elsevier Ltd. All rights reserved.

DOI: https://doi.org/10.1016/j.ijnurstu.2017.04.012.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28500955.

Shinkins B, Yang Y, Abel L, Fanshawe TR. Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: a methodological review of health technology assessments. BMC Med.Res.Methodol. 2017 Apr 14;17(1):56. PMID: 28410588.

Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy.
We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how did the results inform the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated.
The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling was obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care but the majority found limited evidence on test accuracy in primary care settings.
The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests and the impact of multiple diagnostic tests.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5391551/pdf/12874_2017_Article_331.pdf
DOI: https://doi.org/10.1186/s12874-017-0331-7.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28410588.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5391551/.

Silva EN, Silva MT, Pereira MG. Uncertainty in economic evaluation studies | Incerteza em estudos de avaliação econômica. Epidemiol.Serv.Saude. 2017 Jan-Mar;26(1):211-3. PMID: 28226023.

[Article in English, Portuguese; reference html links removed]

As seen in the previous articles of this series on economic evaluation, a great deal of information is needed by the decision maker on health costs and outcomes, and on how they propagate over time. To obtain this information, epidemiological, economic, mathematical and statistical methods are employed, each with the limitations inherent to any scientific method. It is to be expected that, throughout the process of developing an economic evaluation, uncertainties arise that could have a substantial impact on the final result of the analysis. Considering uncertainty means seeking to quantify the influence of the data, and of the assumptions adopted, on the conclusions of the research. This article points out three types of uncertainty in economic evaluation: methodological, structural and parametric.

DOI: https://doi.org/10.5123/S1679-49742017000100022.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28226023.

van Giessen A, Peters J, Wilcher B, Hyde C, Moons C, de Wit A, Koffijberg E. Systematic Review of Health Economic Impact Evaluations of Risk Prediction Models: Stop Developing, Start Evaluating. Value Health. 2017 Apr;20(4):718-26. PMID: 28408017.

Although health economic evaluations (HEEs) are increasingly common for therapeutic interventions, they appear to be rare for the use of risk prediction models (PMs).
To evaluate the current state of HEEs of PMs by performing a comprehensive systematic review.
Four databases were searched for HEEs of PM-based strategies. Two reviewers independently selected eligible articles. A checklist was compiled to score items focusing on general characteristics of HEEs of PMs, model characteristics and quality of HEEs, evidence on PMs typically used in the HEEs, and the specific challenges in performing HEEs of PMs.
After screening 791 abstracts, 171 full texts, and reference checking, 40 eligible HEEs evaluating 60 PMs were identified. In these HEEs, PM strategies were compared with current practice (n = 32; 80%), to other stratification methods for patient management (n = 19; 48%), to an extended PM (n = 9; 23%), or to alternative PMs (n = 5; 13%). The PMs guided decisions on treatment (n = 42; 70%), further testing (n = 18; 30%), or treatment prioritization (n = 4; 7%). For 36 (60%) PMs, only a single decision threshold was evaluated. Costs of risk prediction were ignored for 28 (46%) PMs. Uncertainty in outcomes was assessed using probabilistic sensitivity analyses in 22 (55%) HEEs.
Despite the huge number of PMs in the medical literature, HEE of PMs remains rare. In addition, we observed great variety in their quality and methodology, which may complicate interpretation of HEE results and implementation of PMs in practice. Guidance on HEE of PMs could encourage and standardize their application and enhance methodological quality, thereby improving adequate use of PM strategies.
Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

DOI: https://doi.org/10.1016/j.jval.2017.01.001.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28408017.

Wailoo A, Hernandez Alava M, Grimm S, Pudney S, Gomes M, Sadique Z, Meads D, O'Dwyer J, Barton G, Irvine L. Comparing the EQ-5D-3L and 5L versions. What are the implications for cost effectiveness estimates? Sheffield, UK: Decision Support Unit, ScHARR, University of Sheffield. March 13, 2017.

BACKGROUND: The NICE Guide to the Methods of Technology Appraisal expresses a preference for using the EQ-5D to estimate health-related quality of life in adult populations; these estimates are, in turn, used to calculate the impact of different technologies in terms of Quality Adjusted Life Years (QALYs). The EQ-5D comprises five dimensions of health: mobility, ability to self-care, ability to undertake usual activities, pain and discomfort, and anxiety and depression. The original version of the instrument, the EQ-5D-3L, allows respondents to indicate the degree of impairment on each dimension according to three levels (no problems, some problems, extreme problems). A newer version, the EQ-5D-5L, includes five levels of severity for each dimension (no problems, slight problems, moderate problems, severe problems, and extreme problems). This report is intended to provide information on how using 5L instead of 3L is likely to affect the results of economic evaluations, and to highlight the implications of the findings for NICE.
ESTIMATING THE RELATIONSHIP BETWEEN EQ-5D-3L AND 5L: We used two reference datasets in which patients completed both the 3L and 5L instruments. One was supplied by the EuroQoL group (EQG); questionnaires were administered in six countries and covered eight broad patient groups plus a healthy student population (n=3,691). The second was provided by the National Databank for Rheumatic Diseases (NDB), from the January 2011 wave of questionnaires to patients of rheumatologists in the US and Canada (n=5,311). Our aim was to estimate the joint distribution of responses to the two versions of EQ-5D, conditional only on age and gender, to provide a general model that could be applied widely. A flexible model for mapping between 3L and 5L had previously been developed by two of the co-authors (MH and SP). The model is a system of ordinal regressions estimated jointly, incorporating a flexible copula mixture residual distribution. It is a type of response-mapping model in which the relationships between the two versions of EQ-5D are estimated jointly, so that mapping can, in principle, be made consistently in either direction. Our implementation of this approach is based on much less restrictive assumptions than linear regression and its extensions, and can be expected to be less vulnerable to specification error bias. The model was estimated using both the EQG and NDB datasets, excluding all rheumatology-specific outcomes as covariates, thus making the mapping usable in any patient group. The dependence between responses to the two variants of EQ-5D in each dimension was captured with a copula representation; copulas are very useful as they can generate a number of dependence structures, and we assessed five different copulas in the analysis. In the final models, there were significant statistical differences in the coefficients of the covariates and latent factor between EQ-5D-3L and EQ-5D-5L in most dimensions. This highlights that the effect of moving from three levels to five levels is not just a uniform realignment of the response levels. The only exceptions were the anxiety/depression dimension (in both datasets) and the self-care dimension (in the NDB dataset).
COST EFFECTIVENESS CASE STUDIES: Nine cost-effectiveness studies conducted alongside clinical trials were used as case studies; each had existing analyses based on patient completion of the EQ-5D-3L instrument. In each case, we used the copula models to generate a revised analysis based on estimated 5L scores, and compared directly observed 3L results with estimated 5L (EQG and NDB) results. The 5L instrument and associated tariff has the effect of shifting mean utility scores further up the utility scale towards full health, and compresses them into a smaller space. Improvements in quality of life therefore tend to be valued less using 5L than equivalent changes measured with 3L. In almost all cases, this means that a switch from 3L to 5L decreases the incremental QALY gain from effective health technologies, so that technologies appear less cost effective. This is true whether the estimation of 5L is based on EQG or NDB data. An important exception is where life extension is a substantial element of the health gain, in which case the ICER can decrease rather than increase. Estimated incremental QALY gains fell by up to 75% when moving from 3L to 5L using the EQG dataset, and by up to 87% using the NDB dataset.
DISCUSSION: The 3L and 5L versions of EQ-5D produce substantially different estimates of cost effectiveness. An improvement in quality of life will be measured as a greater health utility gain using 3L than the same change measured using 5L. This is because of the combined effect of differences in the way individuals respond to the changed descriptive system and of the changed valuation system, compared with 3L. In this sense, 3L and 5L are not consistent with each other. 5L is already being used as the descriptive system in many ongoing clinical studies, yet 3L will remain part of the relevant evidence base for many years, perhaps decades. This raises several challenges for decision-making, particularly where there is a need to ensure consistency between appraisals. The use of either 3L or 5L with no adjustment, as if they were interchangeable, is not appropriate; nor is there a simple proportional adjustment that can be made between 3L and 5L. Changes do not happen equally across the distribution of health, and different technologies are therefore affected to different degrees by the shift from one instrument to the other. It is feasible to reliably adjust 3L evidence to 5L-equivalent values, as has been done in this report; whilst the model also allows translation of 5L to 3L, the performance is worse. There are also significant differences in utility estimates according to whether the expected 5L score is estimated using data from the EQG or from the NDB, and those differences were even more pronounced when disease-specific covariates were incorporated to further improve the mapping model. This raises the possibility that future mapping between the instruments may best be performed using estimates based on disease-specific datasets rather than a single generic mapping. These findings have implications for recommendations NICE may make about its willingness to accept unadjusted utility values from the different EQ-5D instruments, how it may wish to specify any adjustments to be made, and the cost-effectiveness threshold.

FREE FULL TEXT: http://www.nicedsu.org.uk/USERIMAGES/DSU_3L%20to%205L%20FINAL.pdf.

Westrich K, Buelt L, Dubois RW. Why Value Framework Assessments Arrive at Different Conclusions: A Multiple Myeloma Case Study. J.Manag.Care.Spec.Pharm. 2017 Jun;23(6-a Suppl):S28-33. PMID: 28535102.

As the United States transitions from a volume-based health care system to one that rewards value, new frameworks are emerging to help patients, providers, and payers assess the value of medical services and biopharmaceutical products. These value assessment frameworks are intended to support various types of health care decision making. They have the potential to substantially affect patients, whether as tools for shared decision making with their doctors, as an input to care pathways used by providers, or through payer use of the frameworks to make coverage or reimbursement decisions. Prominent among current U.S. value assessment frameworks are those developed by the American Society of Clinical Oncology, the Institute for Clinical and Economic Review, the Memorial Sloan Kettering Cancer Center, and the National Comprehensive Cancer Network. These frameworks generally reflect the interests and expertise of the organizations that developed them. The evidence, methodology, and intended use differ substantially across frameworks, which can lead to highly variable determinations of value for the same treatment therapy. To demonstrate this variability, we explored how these frameworks assess the value of treatment regimens for multiple myeloma. Cross-framework comparisons of multiple myeloma assessments were conducted, and consistency of findings was examined for 3 case studies. A discussion of the analysis explores why different frameworks arrive at different conclusions, whether those differences are cause for concern, and the resulting implications for framework readiness to support health care decision making.
Funding for this project was provided by the National Pharmaceutical Council. The authors are employees of the National Pharmaceutical Council, an industry-funded health policy research group that is not involved in lobbying or advocacy. Study concept and design were contributed by Westrich and Dubois, along with Buelt. Westrich took the lead in data collection, along with Dubois, and data interpretation was performed by all the authors. The manuscript was written by Westrich and Buelt, along with Dubois, and revised by all the authors.

FREE FULL TEXT: http://www.jmcp.org/doi/pdf/10.18553/jmcp.2017.23.6-a.s28
DOI: https://doi.org/10.18553/jmcp.2017.23.6-a.s28.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28535102.

Marsh K, Ganz M, Nørtoft E, Lund N, Graff-Zivin J. Incorporating Environmental Outcomes into a Health Economic Model. Int.J.Technol.Assess.Health Care. 2016 Jan;32(6):400-6. PMID: 28065172.

Traditional economic evaluations for most health technology assessments (HTAs) have previously not included environmental outcomes. With the growing interest in reducing the environmental impact of human activities, the need to consider how to include environmental outcomes into HTAs has increased. We present a simple method of doing so.
We adapted an existing clinical-economic model to include environmental outcomes (carbon dioxide [CO2] emissions) to predict the consequences of adding insulin to an oral antidiabetic (OAD) regimen for patients with type 2 diabetes mellitus (T2DM) over 30 years, from the United Kingdom payer perspective. Epidemiological, efficacy, healthcare costs, utility, and carbon emissions data were derived from published literature. A scenario analysis was performed to explore the impact of parameter uncertainty.
The addition of insulin to an OAD regimen increases costs by 2,668 British pounds per patient and is associated with 0.36 additional quality-adjusted life-years per patient. The insulin-OAD combination regimen generates more treatment and disease management-related CO2 emissions per patient (1,686 kg) than the OAD-only regimen (310 kg), but generates fewer emissions associated with treating complications (3,019 kg versus 3,337 kg). Overall, adding insulin to OAD therapy generates an extra 1,057 kg of CO2 emissions per patient over 30 years.
The model offers a simple approach for incorporating environmental outcomes into health economic analyses, to support a decision-maker's objective of reducing the environmental impact of health care. Further work is required to improve the accuracy of the approach; in particular, the generation of resource-specific environmental impacts.
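The net emissions figure reported above is consistent with simple arithmetic on the component figures (the small discrepancy presumably reflects rounding of the published components):

\[
(1{,}686 - 310) + (3{,}019 - 3{,}337) \;=\; 1{,}376 - 318 \;=\; 1{,}058 \;\approx\; 1{,}057\ \text{kg CO}_2 \text{ per patient over 30 years.}
\]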

DOI: https://dx.doi.org/10.1017/S0266462316000581.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28065172.

Schmier JK, Hulme-Lowe CK. Cost-Effectiveness Models in Age-Related Macular Degeneration: Issues and Challenges. Pharmacoeconomics. 2016 Mar;34(3):259-72. PMID: 26563248.

Age-related macular degeneration (AMD) is a common ophthalmic condition that can have few symptoms in its early stage but can progress to major visual impairment. While there are no treatments for early-stage AMD, there are multiple modalities of treatment for advanced disease. Given the increasing prevalence of the disease, there are dozens of analyses of cost effectiveness of AMD treatments, but methods and approaches vary broadly. The goal of this review was to identify, characterize, and critique published models in AMD and provide guidance for their interpretation. After a literature review was performed to identify studies, and exclusion criteria applied to limit the review to studies comparing treatments for AMD, we compared methods across the 36 studies meeting the review criteria. To some extent, variation was related to targeting different audiences or acknowledging the most appropriate population for a given treatment. However, the review identified potential areas of uncertainty and difficulty in interpretation, particularly regarding duration of observation periods and the importance of visual acuity as an endpoint or a proxy for patient-reported utilities. We urge thoughtful consideration of these study characteristics when comparing results.

DOI: https://doi.org/10.1007/s40273-015-0347-y.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=26563248.

Schwander B, Hiligsmann M, Nuijten M, Evers S. Systematic review and overview of health economic evaluation models in obesity prevention and therapy. Expert Rev.Pharmacoecon Outcomes Res. 2016 Oct;16(5):561-70. PMID: 27570095.

Given the increasing clinical and economic burden of obesity, it is of major importance to identify cost-effective approaches for obesity management. Areas covered: This study aims to systematically review and compile an overview of published decision models for health economic assessments (HEA) in obesity, in order to summarize and compare their key characteristics as well as to identify, inform and guide future research. Of the 4,293 abstracts identified, 87 papers met our inclusion criteria. A wide range of different methodological approaches have been identified. Of the 87 papers, 69 (79%) applied unique/distinctive modelling approaches. Expert commentary: This wide range of approaches suggests the need to develop recommendations/minimal requirements for model-based HEA of obesity. In order to reach this long-term goal, further research is required. Valuable future research steps would be to investigate the predictiveness, validity and quality of the identified modelling approaches.

DOI: https://doi.org/10.1080/14737167.2016.1230497.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27570095.

Kaltenthaler E, Tappenden P, Paisley S. Reviewing the evidence to inform the population of cost-effectiveness models within health technology assessments. Value Health. 2013 Jul-Aug;16(5):830-6. PMID: 23947977.

Health technology assessments (HTAs) typically require the development of a cost-effectiveness model, which necessitates the identification, selection, and use of other types of information beyond clinical effectiveness evidence to populate the model parameters. The reviewing activity associated with model development should be transparent and reproducible but can result in a tension between being both timely and systematic. Little procedural guidance exists in this area. The purpose of this article was to provide guidance, informed by focus groups, on what might constitute a systematic and transparent approach to reviewing information to populate model parameters.
A focus group series was held with HTA experts in the United Kingdom including systematic reviewers, information specialists, and health economic modelers to explore these issues. Framework analysis was used to analyze the qualitative data elicited during focus groups.
Suggestions included the use of rapid reviewing methods and the need to consider the trade-off between relevance and quality. The need for transparency in the reporting of review methods was emphasized. It was suggested that additional attention should be given to the reporting of parameters deemed to be more important to the model or where the preferred decision regarding the choice of evidence is equivocal.
These recommendations form part of a Technical Support Document produced for the National Institute for Health and Clinical Excellence Decision Support Unit in the United Kingdom. It is intended that these recommendations will help to ensure a more systematic, transparent, and reproducible process for the review of model parameters within HTA.
Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

FREE FULL TEXT: http://ac.els-cdn.com/S1098301513018081/1-s2.0-S1098301513018081-main.pdf?_tid=19298dec-4a47-11e7-88d8-00000aacb35f&acdnat=1496705668_c54bd655aaf16242c775c93d140b28fe
DOI: https://doi.org/10.1016/j.jval.2013.04.009.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=23947977.