Riley RD, Ensor J, Jackson D, Burke DL. Deriving percentage study weights in multi-parameter meta-analysis models: with application to meta-regression, network meta-analysis and one-stage individual participant data models. Stat.Methods Med.Res. Epub 2017 Jan 1. PMID: 28162044.

Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
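As a minimal sketch of the idea (with hypothetical data; the paper also covers random-effects and one-stage IPD models), the per-study Fisher information matrices can be summed and the resulting variance matrix decomposed into study-specific percentage contributions:

```python
import numpy as np

# Hypothetical meta-regression data: effect estimate variance v_i and one
# study-level covariate x_i per study.
v = np.array([0.04, 0.09, 0.16, 0.05])
x = np.array([0.0, 1.0, 2.0, 3.0])

# Design row for study i: [1, x_i]; per-study Fisher information
# S_i = X_i' V_i^{-1} X_i (here each study supplies one estimate).
S_i = np.array([np.outer([1.0, xi], [1.0, xi]) / vi for xi, vi in zip(x, v)])
S = S_i.sum(axis=0)           # total information
V = np.linalg.inv(S)          # variance matrix of the parameter estimates

# Decompose Var(beta_k) into study-specific contributions via V S_i V,
# then express each diagonal contribution as a percentage.
contrib = np.array([np.diag(V @ Si @ V) for Si in S_i])
pct = 100 * contrib / np.diag(V)

print(pct)                    # rows: studies; columns: [intercept, covariate]
print(pct.sum(axis=0))        # each column sums to 100
```

In the single-parameter case this reduces to the familiar inverse-variance percentage weights.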

DOI: https://dx.doi.org/10.1177/0962280216688033.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28162044.

Bakbergenuly I, Kulinskaya E. Beta-binomial model for meta-analysis of odds ratios. Stat.Med. 2017 May 20;36(11):1715-34. PMID: 28124446.

In meta-analysis of odds ratios (ORs), heterogeneity between the studies is usually modelled via the additive random effects model (REM). An alternative, multiplicative REM for ORs uses overdispersion. The multiplicative factor in this overdispersion model (ODM) can be interpreted as an intra-class correlation (ICC) parameter. This model naturally arises when the probabilities of an event in one or both arms of a comparative study are themselves beta-distributed, resulting in beta-binomial distributions. We propose two new estimators of the ICC for meta-analysis in this setting. One is based on the inverted Breslow-Day test, and the other on the improved gamma approximation by Kulinskaya and Dollinger (2015, p. 26) to the distribution of Cochran's Q. The performance of these and several other estimators of ICC on bias and coverage is studied by simulation. Additionally, the Mantel-Haenszel approach to estimation of ORs is extended to the beta-binomial model, and we study performance of various ICC estimators when used in the Mantel-Haenszel or the inverse-variance method to combine ORs in meta-analysis. The results of the simulations show that the improved gamma-based estimator of ICC is superior for small sample sizes, and the Breslow-Day-based estimator is the best for n ≥ 100. The Mantel-Haenszel-based estimator of OR is very biased and is not recommended. The inverse-variance approach is also somewhat biased for ORs not equal to 1, but this bias is not very large in practical settings. Developed methods and R programs, provided in the Web Appendix, make the beta-binomial model a feasible alternative to the standard REM for meta-analysis of ORs. (c) 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
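A simplified sketch of multiplicative overdispersion in inverse-variance pooling of ORs, on hypothetical 2x2 tables; here the dispersion factor is the crude Q/(K-1), whereas the paper develops refined ICC-based estimators:

```python
import math

# Hypothetical 2x2 tables per study: (events_trt, n_trt, events_ctl, n_ctl).
studies = [(15, 100, 10, 100), (30, 150, 18, 150), (8, 80, 12, 80)]

logors, variances = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    logors.append(math.log(a * d / (b * c)))           # log odds ratio
    variances.append(1/a + 1/b + 1/c + 1/d)            # Woolf variance

w = [1 / v for v in variances]
pooled = sum(wi * y for wi, y in zip(w, logors)) / sum(w)

# Cochran's Q and a simple multiplicative dispersion factor phi = Q/(K-1);
# the ICC-based estimators in the paper refine this inflation step.
Q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, logors))
phi = max(1.0, Q / (len(studies) - 1))
se = math.sqrt(phi / sum(w))

print(f"pooled OR = {math.exp(pooled):.3f}, SE(log OR) = {se:.3f}")
```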

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5434808/pdf/SIM-36-1715.pdf
DOI: https://doi.org/10.1002/sim.7233.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28124446.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5434808/.

Fisher Z, Tipton E, Hou Z. robumeta: Robust variance meta-regression. Version 2.0. [s.l.]: Comprehensive R Archive Network; 2017 May 29.

Functions for conducting robust variance estimation (RVE) meta-regression using both large and small sample RVE estimators under various weighting schemes. These methods are distribution free and provide valid point estimates, standard errors and hypothesis tests even when the degree and structure of dependence between effect sizes is unknown. Also included are functions for conducting sensitivity analyses under correlated effects weighting and producing RVE-based forest plots.
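robumeta itself is an R package; the sandwich form at the heart of RVE can be sketched in Python on hypothetical data (the weights and small-sample corrections are simplified relative to the package):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 5 studies contributing 2-3 dependent effect sizes each.
study = np.repeat(np.arange(5), [2, 3, 2, 3, 2])
y = rng.normal(0.3, 0.2, size=study.size)
X = np.column_stack([np.ones(study.size), rng.normal(size=study.size)])
w = np.full(study.size, 1.0)   # simplistic weights; robumeta's
                               # correlated-effects weights differ

# Weighted least squares point estimates.
XtWX = X.T @ (w[:, None] * X)
beta = np.linalg.solve(XtWX, X.T @ (w * y))
e = y - X @ beta

# RVE sandwich: the "meat" is a sum of per-study outer products, so the
# standard errors remain valid under unknown within-study dependence.
bread = np.linalg.inv(XtWX)
meat = np.zeros_like(XtWX)
for j in np.unique(study):
    m = study == j
    u = X[m].T @ (w[m] * e[m])
    meat += np.outer(u, u)
V_robust = bread @ meat @ bread
print(np.sqrt(np.diag(V_robust)))   # cluster-robust standard errors
```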

FREE FULL TEXT: http://cran.r-project.org/web/packages/robumeta/robumeta.pdf.

Sangnawakij P, Böhning D, Adams S, Stanton M, Holling H. Statistical methodology for estimating the mean difference in a meta-analysis without study-specific variance information. Stat.Med. 2017 Apr 30;36(9):1395-413. PMID: 28168731.

Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.

DOI: http://dx.doi.org/10.1002/sim.7232.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28168731.

Stanley TD, Doucouliagos H. Neither fixed nor random: weighted least squares meta-regression. Res.Synth.Method. 2017 Mar;8(1):19-42. PMID: 27322495.

Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, that is as good as FE-MRA in all cases, and that is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright (c) 2016 John Wiley & Sons, Ltd.
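A minimal sketch of the unrestricted WLS-MRA idea on hypothetical data: the point estimates coincide with fixed-effects MRA, but the standard errors are scaled by an estimated multiplicative dispersion rather than fixed at 1:

```python
import numpy as np

# Hypothetical effect estimates, their standard errors, and one moderator.
y = np.array([0.42, 0.25, 0.31, 0.55, 0.18, 0.37])
se = np.array([0.10, 0.15, 0.12, 0.20, 0.08, 0.11])
x = np.array([1.0, 2.0, 1.5, 3.0, 0.5, 2.5])

X = np.column_stack([np.ones_like(x), x])
w = 1.0 / se**2

# WLS point estimates: identical to fixed-effects meta-regression.
XtWX = X.T @ (w[:, None] * X)
beta = np.linalg.solve(XtWX, X.T @ (w * y))

# FE-MRA fixes the residual variance at 1; unrestricted WLS instead
# estimates the multiplicative dispersion from the weighted residuals.
resid = y - X @ beta
k, p = len(y), X.shape[1]
mse_w = (w * resid**2).sum() / (k - p)

se_fe = np.sqrt(np.diag(np.linalg.inv(XtWX)))
se_wls = se_fe * np.sqrt(mse_w)   # FE SEs scaled by sqrt(dispersion)
print(beta, se_wls)
```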

DOI: http://dx.doi.org/10.1002/jrsm.1211.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/27322495.

Thomas D, Platt R, Benedetti A. A comparison of analytic approaches for individual patient data meta-analyses with binary outcomes. BMC Med.Res.Methodol. 2017 Feb 16;17(1):28. PMID: 28202011.

BACKGROUND:
Individual patient data meta-analyses (IPD-MA) are often performed using a one-stage approach, a form of generalized linear mixed model (GLMM) for binary outcomes. We compare (i) one-stage to two-stage approaches, (ii) the performance of two estimation procedures, penalized quasi-likelihood (PQL) and adaptive Gauss-Hermite quadrature (AGHQ), for GLMMs with binary outcomes within the one-stage approach, and (iii) the use of stratified study effects or random study effects.
METHODS:
We compare the different approaches via a simulation study, in terms of bias, mean-squared error (MSE), coverage and numerical convergence, of the pooled treatment effect (β1) and the between-study heterogeneity of the treatment effect (τ1²). We varied the prevalence of the outcome, sample size, number of studies and the variances and correlation of the random effects.
RESULTS:
The two-stage and one-stage methods produced approximately unbiased β1 estimates. PQL performed better than AGHQ for estimating τ1² with respect to MSE, but performed comparably with AGHQ in terms of the bias of β1 and of τ1². The random study-effects model outperformed the stratified study-effects model in small meta-analyses.
CONCLUSION:
The one-stage approach is recommended over the two-stage method for small meta-analyses. There was no meaningful difference between the PQL and AGHQ procedures. Though the random-intercept and stratified-intercept approaches can suffer from their underlying assumptions, fitting a GLMM with a random intercept is less prone to misfit and has a good convergence rate.
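The one-stage GLMM requires a mixed-model fitter; the two-stage comparator can be sketched with hypothetical 2x2 counts reconstructed from IPD, pooling study-specific log-ORs by DerSimonian-Laird:

```python
import math

# Hypothetical per-study counts: (events_trt, n_trt, events_ctl, n_ctl).
studies = [(40, 200, 55, 200), (22, 120, 30, 118), (9, 60, 15, 62),
           (70, 310, 82, 305)]

# Stage 1: study-specific log-ORs and variances.
ys, vs = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    ys.append(math.log(a * d / (b * c)))
    vs.append(1/a + 1/b + 1/c + 1/d)

# Stage 2: DerSimonian-Laird random-effects pooling.
w = [1 / v for v in vs]
fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, ys))
c_dl = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (len(ys) - 1)) / c_dl)

w_re = [1 / (v + tau2) for v in vs]
beta1 = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
print(f"pooled log OR = {beta1:.3f}, tau^2 = {tau2:.4f}")
```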

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5312561/pdf/12874_2017_Article_307.pdf
DOI: http://dx.doi.org/10.1186/s12874-017-0307-7.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=28202011.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5312561/.

Bangdiwala SI, Bhargava A, O'Connor DP, Robinson TN, Michie S, Murray DM, Stevens J, Belle SH, Templin TN, Pratt CA. Statistical methodologies to pool across multiple intervention studies. Transl.Behav.Med. 2016 Jun;6(2):228-35. PMID: 27356993.

Combining and analyzing data from heterogeneous randomized controlled trials of complex multiple-component intervention studies, or discussing them in a systematic review, is not straightforward. The present article describes certain issues to be considered when combining data across studies, based on discussions in an NIH-sponsored workshop on pooling issues across studies in consortia (see Belle et al. in Psychol Aging, 18(3):396-405, 2003). Several statistical methodologies are described and their advantages and limitations are explored. Whether weighting the different studies' data differently or employing random effects, one must recognize that different pooling methodologies may yield different results. Pooling can be used for comprehensive exploratory analyses of data from RCTs and should not be viewed as replacing the standard analysis plan for each study. Pooling may help to identify intervention components that may be more effective especially for subsets of participants with certain behavioral characteristics. Pooling, when supported by statistical tests, can allow exploratory investigation of potential hypotheses and for the design of future interventions.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4927450/pdf/13142_2016_Article_386.pdf
DOI: https://doi.org/10.1007/s13142-016-0386-8.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=27356993.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4927450/.

Boucher M, Bennetts M. The Many Flavors of Model-Based Meta-Analysis: Part I-Introduction and Landmark Data. CPT Pharmacometrics Syst.Pharmacol. 2016 Feb;5(2):54-64. PMID: 26933516.

Meta-analysis is an increasingly important aspect of drug development as companies look to benchmark their own compounds with the competition. There is scope to carry out a wide range of analyses addressing key research questions from preclinical through to postregistration. This set of tutorials will take the reader through key model-based meta-analysis (MBMA) methods with this first installment providing a general introduction before concentrating on classical and Bayesian methods for landmark data.

FREE FULL TEXT: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4761229/pdf/PSP4-5-54.pdf
DOI: http://dx.doi.org/10.1002/psp4.12041.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=26933516.
PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4761229/.

Dias S, Welton NJ, Sutton AJ, Ades AE. NICE DSU Technical Support Document 2: A Generalised Linear Modelling Framework for Pairwise and Network Meta-Analysis of Randomised Controlled Trials. September 2016.

EXECUTIVE SUMMARY

This paper sets out a generalised linear model (GLM) framework for the synthesis of data from randomised controlled trials (RCTs). We describe a common model taking the form of a linear regression for both fixed and random effects synthesis, that can be implemented with Normal, Binomial, Poisson, and Multinomial data. The familiar logistic model for meta-analysis with Binomial data is a GLM with a logit link function, which is appropriate for probability outcomes. The same linear regression framework can be applied to continuous outcomes, rate models, competing risks, or ordered category outcomes, by using other link functions, such as identity, log, complementary log-log, and probit link functions. The common core model for the linear predictor can be applied to pair-wise meta-analysis, indirect comparisons, synthesis of multi-arm trials, and mixed treatment comparisons, also known as network meta-analysis, without distinction.

We take a Bayesian approach to estimation and provide WinBUGS program code for a Bayesian analysis using Markov chain Monte Carlo (MCMC) simulation. An advantage of this approach is that it is straightforward to extend to shared parameter models where different RCTs report outcomes in different formats but from a common underlying model. Use of the GLM framework allows us to present a unified account of how models can be compared using the Deviance Information Criterion (DIC), and how goodness of fit can be assessed using the residual deviance. WinBUGS code for model critique is provided. Our approach is illustrated through a range of worked examples for the commonly encountered evidence formats, including shared parameter models.

We give suggestions on computational issues that sometimes arise in MCMC evidence synthesis, and comment briefly on alternative software.
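The document's analyses are Bayesian (WinBUGS); a frequentist sketch of the shared linear predictor, on hypothetical contrast-level data, shows how direct and indirect comparisons enter one design matrix:

```python
import numpy as np

# Hypothetical contrast-level data: each row is one trial's log-OR
# (comparator vs. reference arm) with its variance. Basic parameters:
# d_AB and d_AC, the effects of B and C relative to reference A.
#        comparison  log-OR  var    design row [d_AB, d_AC]
data = [("B vs A",   -0.35,  0.04,  [1.0, 0.0]),
        ("B vs A",   -0.28,  0.06,  [1.0, 0.0]),
        ("C vs A",   -0.60,  0.05,  [0.0, 1.0]),
        ("C vs B",   -0.30,  0.07,  [-1.0, 1.0])]   # d_AC - d_AB

y = np.array([r[1] for r in data])
v = np.array([r[2] for r in data])
X = np.array([r[3] for r in data])

# Inverse-variance weighted least squares: a normal-approximation
# analogue of the fixed-effect network model on the log-OR scale.
W = np.diag(1.0 / v)
d = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(f"d_AB = {d[0]:.3f}, d_AC = {d[1]:.3f}")
```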

August 2011; last updated September 2016

FREE FULL TEXT: http://www.nicedsu.org.uk/TSD2%20General%20meta%20analysis%20corrected%202Sep2016v2.pdf.

Greco T, Landoni G, Biondi-Zoccai G, D'Ascenzo F, Zangrillo A. A Bayesian network meta-analysis for binary outcome: how to do it. Stat.Methods Med.Res. 2016 Oct;25(5):1757-73. PMID: 23970014.

This study presents an overview of conceptual and practical issues of a network meta-analysis (NMA), particularly focusing on its application to randomised controlled trials with a binary outcome of interest. We start from general considerations on NMA to specifically appraise how to collect study data, structure the analytical network and specify the requirements for different models and parameter interpretations, with the ultimate goal of providing physicians and clinician-investigators a practical tool to understand pros and cons of NMA. Specifically, we outline the key steps, from the literature search to sensitivity analysis, necessary to perform a valid NMA of binomial data, exploiting Markov Chain Monte Carlo approaches. We also apply this analytical approach to a case study on the beneficial effects of volatile agents compared to total intravenous anaesthetics for surgery to further clarify the statistical details of the models, diagnostics and computations. Finally, datasets and models for the freeware WinBUGS package are presented for the anaesthetic agent example.

DOI: https://doi.org/10.1177/0962280213500185.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/23970014.

Higgins S. Meta-synthesis and comparative meta-analysis of education research findings: some risks and benefits. Rev.Educ. 2016 Feb;4(1):31-53.

Meta-analysis, or quantitative synthesis, is the statistical combination of research findings. It can identify whether an intervention or approach, on balance, is effective or not, and can explain variation in findings by identifying patterns associated with larger or smaller effects across studies. It is now more widely applied in medicine and psychology, even though the term was first used in education, and the underpinning statistical ideas date back 70 years or so. This review traces the development of meta-analysis in education and the history of meta-meta-analysis or 'meta-synthesis' in more detail, where the temptation is not just to draw conclusions about similar studies, but to aggregate findings across meta-analyses to understand the relative benefits of different approaches on educational outcomes. A final section presents the rationale for the Sutton Trust-Education Endowment Foundation Teaching and Learning Toolkit, which aims to present accurate and accessible findings from research studies which are sufficiently applicable to inform professional decision-making and action in schools, as an example of a 'meta-synthesis' for education.

FREE FULL TEXT: http://onlinelibrary.wiley.com/doi/10.1002/rev3.3067/epdf
DOI: http://dx.doi.org/10.1002/rev3.3067.

Hoaglin DC. Misunderstandings about Q and 'Cochran's Q test' in meta-analysis. Stat.Med. 2016 Feb 20;35(4):485-95. PMID: 26303773.

Many meta-analyses report using 'Cochran's Q test' to assess heterogeneity of effect-size estimates from the individual studies. Some authors cite work by W. G. Cochran, without realizing that Cochran deliberately did not use Q itself to test for heterogeneity. Further, when heterogeneity is absent, the actual null distribution of Q is not the chi-squared distribution assumed for 'Cochran's Q test'. This paper reviews work by Cochran related to Q. It then discusses derivations of the asymptotic approximation for the null distribution of Q, as well as work that has derived finite-sample moments and corresponding approximations for the cases of specific measures of effect size. Those results complicate implementation and interpretation of the popular heterogeneity index I². Also, it turns out that the test-based confidence intervals used with I² are based on a fallacious approach. Software that outputs Q and I² should use the appropriate reference value of Q for the particular measure of effect size and the current meta-analysis. Q is a key element of the popular DerSimonian-Laird procedure for random-effects meta-analysis, but the assumptions of that procedure and related procedures do not reflect the actual behavior of Q and may introduce bias. The DerSimonian-Laird procedure should be regarded as unreliable. Copyright (c) 2015 John Wiley & Sons, Ltd.
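The quantities at issue can be computed directly (hypothetical data below); the paper's point is that referring Q to a chi-squared(k-1) distribution, as the conventional test does, is only an asymptotic approximation:

```python
import math

# Hypothetical effect estimates and within-study variances.
y = [0.30, 0.10, 0.45, 0.22, 0.60]
v = [0.02, 0.03, 0.05, 0.02, 0.08]

# Inverse-variance weights and fixed-effect pooled estimate.
w = [1 / vi for vi in v]
pooled = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Cochran's Q statistic and the I^2 index derived from it; both inherit
# the approximation error in Q's assumed chi-squared null distribution.
Q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, y))
df = len(y) - 1
I2 = max(0.0, 100 * (Q - df) / Q)
print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.1f}%")
```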

DOI: http://dx.doi.org/10.1002/sim.6632.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/26303773.

Kabali C, Ghazipura M. Transportability in Network Meta-analysis. Epidemiology. 2016 Jul;27(4):556-61. PMID: 26963292.

Network meta-analysis is an extension of the conventional pairwise meta-analysis to include treatments that have not been compared head to head. It has in recent years caught the interest of clinical investigators in comparative effectiveness research. While allowing a simultaneous comparison of a large number of treatment effects, an inclusion of indirect effects (i.e., estimating effects using treatments that have not been randomized head to head) may introduce bias. This bias occurs from not accounting for covariate differences in the analysis, in a way that allows transfer of causal information across trials. Although this problem might not be entirely new to network meta-analysis researchers, it has not been given a formal treatment. Occasionally it is tackled by fitting a meta-regression model to account for imbalance of covariates. However, this approach may still produce biased estimates if covariates responsible for disparity across studies are post-treatment variables. To address the problem, we use the graphical method known as transportability to demonstrate whether and how indirect treatment effects can validly be estimated in network meta-analysis. See Video Abstract at http://links.lww.com/EDE/B37.

DOI: http://dx.doi.org/10.1097/EDE.0000000000000475.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26963292.

Kingsolver JG, Diamond SE, Siepielski AM, Carlson SM. Errors in meta-analyses of selection. J.Evol.Biol. 2016 Oct;29(10):1905-6. PMID: 27396976.
DOI: https://doi.org/10.1111/jeb.12941.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=27396976.

Li JX, Chen WC, Scott JA. Addressing Prior-data Conflict with Empirical Meta-analytic Predictive Priors in Clinical Studies with Historical Information. J.Biopharm.Stat. 2016;26(6):1056-66. PMID: 27541990.

A common question in clinical studies is how to use historical data from earlier studies, leveraging relevant information into the design and analysis of a new study. Bayesian approaches are particularly well-suited to this task, with their natural ability to borrow strength across data sources. In this paper, we propose an empirical meta-analytic predictive (eMAP) prior approach for incorporating historical data into the analysis of clinical studies and we discuss an application of this method to the analysis of observational safety studies for a class of products for patients with hemophilia A. The eMAP prior approach is flexible and robust to prior-data conflict. We conducted simulations to compare the frequentist operating characteristics of the three approaches under different prior-data conflict assumptions and sample size scenarios.

DOI: http://dx.doi.org/10.1080/10543406.2016.1226324.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/27541990.

Prendergast LA, Staudte RG. Meta-analysis of ratios of sample variances. Stat.Med. 2016 May 20;35(11):1780-99. PMID: 27062644.

When conducting a meta-analysis of standardized mean differences (SMDs), it is common to use Cohen's d, or its variants, that require equal variances in the two arms of each study. While interpretation of these SMDs is simple, this alone should not be used as a justification for assuming equal variances. Until now, researchers have either used an F-test for each individual study or perhaps even conveniently ignored such tools altogether. In this paper, we propose a meta-analysis of ratios of sample variances to assess whether the equality of variances assumption is justified prior to a meta-analysis of SMDs. Quantile–quantile plots, an omnibus test for equal variances, or an overall meta-estimate of the ratio of variances can all be used to formally justify the use of less common methods when evidence of unequal variances is found. The methods in this paper are simple to implement and the validity of the approaches is reinforced by simulation studies and an application to a real data set.
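A crude sketch of the idea on hypothetical data, using the large-sample variance of the log variance ratio under normality (the paper's own distributional results are more refined):

```python
import math

# Hypothetical per-study sample variances and arm sizes:
# (s1_sq, n1, s2_sq, n2).
studies = [(4.1, 35, 3.8, 33), (5.0, 60, 3.2, 58), (2.9, 25, 3.1, 27)]

ys, vs = [], []
for s1_sq, n1, s2_sq, n2 in studies:
    ys.append(math.log(s1_sq / s2_sq))        # log of the variance ratio
    vs.append(2 / (n1 - 1) + 2 / (n2 - 1))    # large-sample variance (normal data)

# Inverse-variance pooled log variance ratio and 95% CI, back-transformed.
w = [1 / v for v in vs]
pooled = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
se = math.sqrt(1 / sum(w))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled variance ratio = {math.exp(pooled):.3f} "
      f"(95% CI {math.exp(lo):.3f} to {math.exp(hi):.3f})")
```

A pooled ratio with a CI excluding 1 would argue against the equal-variances assumption behind Cohen's d.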

DOI: http://dx.doi.org/10.1002/sim.6838.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27062644.

Sargeant JM, O'Connor AM. Potential for Meta-Analysis in the Realm of Preharvest Food Safety. Microbiol.Spectr. 2016 Oct;4(5):PFS-0004-2014. PMID: 27780015.

Meta-analysis, the statistical combination of results from multiple studies, can be used to summarize all of the available research on an intervention, etiology, descriptive, or diagnostic test accuracy question. Meta-analysis should be conducted as a component of a systematic review, to increase transparency in the selection of studies and to incorporate an evaluation of the risk of bias in the individual studies included in the meta-analysis. The process of meta-analysis may include a forest plot to graphically display the study results and the calculation of a weighted average summary effect size. Heterogeneity (differences in the effect size between studies) can be evaluated using formal statistics and the reasons for heterogeneity can be explored using sub-group analysis or meta-regression. Thus, meta-analysis may be a useful methodology for preharvest food safety research to aid in policy or clinical decision-making or to provide input to quantitative risk assessment or other models.

DOI: https://doi.org/10.1128/microbiolspec.PFS-0004-2014.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=27780015.

Schmid CH. Outcome Reporting Bias: A Pervasive Problem in Published Meta-analyses. Am.J.Kidney Dis. 2016 Feb;69(2):172-4. PMID: 27940062.
DOI: https://doi.org/10.1053/j.ajkd.2016.11.003.
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/?term=27940062.

Yuan KH. Meta analytical structural equation modeling: comments on issues with current methods and viable alternatives. Res.Synth.Method. 2016 Jun;7(2):215-31. PMID: 27286905.
DOI: http://dx.doi.org/10.1002/jrsm.1213.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=27286905.

Shun Z, Lei G, Liu Q, Zheng W, Quan H, Hitier S. Concepts, Methods, and Practical Considerations of Meta-Analysis in Drug Development. Proceedings of the Papers from the 2015 ASA Biopharmaceutical Section Statistics Workshop; September 16-18; Washington, DC. Stat.Biopharm.Res. 2015;8(3):287-92.

Meta-analysis is the statistical methodology to combine study results from different clinical trials to address questions of clinical interest. It is done as a systematic review of the relevant clinical trials. Meta-analysis is usually conducted in three steps: data/study selection, statistical analysis, and interpretation. We discuss the practical issues in these three steps and introduce how a proper meta-analysis is conducted. A case study of meta-analysis of fatal adverse events in cancer patients treated with aflibercept is introduced and its limitations are discussed.

DOI: http://dx.doi.org/10.1080/19466315.2016.1174148.