Date
|
Book (and chapters)/Journal
|
Presented/led by
|
24 Sep 2013
|
Dmitrienko, A., D'Agostino, R. B. and Huque, M. F. (2013). Key multiplicity issues in clinical drug development. Statistics in Medicine, 32: 1079-1111.
Chapters 1, 2 and 13 of Westfall, P. H., Tobias, R. D. and Wolfinger, R. D. (2011). Multiple comparisons and multiple tests using SAS, second edition. SAS Publishing.
This meeting was loosely based on the Dmitrienko et al. paper and chapters 1, 2 and 13 of the book by Westfall et al. The meeting, as well as the paper and book chapters, provided an introduction to multiple testing methodology with reference to clinical drug development.
First the notion of a family of hypotheses was introduced, along with the different classes of family-wise error rate control (weak/strong). The fundamental Bonferroni, Sidak and Simes procedures for controlling this error rate were then discussed. The paper explains in a very intuitive way the assumptions and advantages (usually in terms of power) of these methods. Building on these methods, the closure principle was explained as a core principle for constructing multiple testing procedures. At this point the meeting and the paper diverged. In the paper the closure principle is only briefly introduced, and the narrative moves on to more complicated problems involving the testing of multiple families of hypotheses and gatekeeping procedures. During the meeting the closure principle was discussed in great detail, with examples and its application in the Bonferroni-Holm, Hochberg and Hommel procedures.
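As a concrete illustration of the step-down procedures discussed, here is a minimal sketch of the Bonferroni-Holm procedure in Python (the function name and interface are our own, not taken from the paper or book):

```python
def holm(p_values, alpha=0.05):
    """Return the indices of hypotheses rejected by Bonferroni-Holm."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # ascending p-values
    rejected = []
    for rank, i in enumerate(order):
        # Step down: compare the (rank+1)-th smallest p-value with alpha/(m - rank)
        if p_values[i] <= alpha / (m - rank):
            rejected.append(i)
        else:
            break  # once one hypothesis is retained, all larger p-values are too
    return sorted(rejected)
```

The step-down structure is exactly what the closure principle delivers when every intersection hypothesis is tested with a Bonferroni test.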
The paper by Dmitrienko et al. does an excellent job of providing a not-too-technical overview of the fundamental multiple testing methodology in its first half. The topics discussed in this part are accessible and of interest to any medical statistician. The methods described in the second half of the paper reflect the authors' particular research interests and are more complicated and specific to pharmaceutical drug development.
|
Nigel Stallard |
15 Oct 2013
|
Bretz, F., Maurer, W., Brannath, W. and Posch, M. (2009). A graphical approach to sequentially rejective multiple test procedures. Statistics in Medicine, 28: 586-604.
The graphical methods described by Bretz et al. are a way of illustrating Bonferroni-type multiple testing procedures. Crucially, they cannot be used to illustrate procedures that make use of correlations between tests, such as the Sidak or Dunnett tests.
The paper implicitly assumes that readers have some knowledge of multiple test procedures. Hence, the meeting started with an explanation of how the simple Bonferroni α-split method would be represented graphically in this framework. It was explained how the allocated α can be transferred to other hypotheses following a rejection. The meeting then moved on to the more complicated Bonferroni-Holm and other sequentially rejective procedures.
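The α-transfer just described can be sketched as code. The following is a minimal, illustrative implementation of the propagation algorithm in Bretz et al. (2009); the data structures are our own: `alpha_local` holds the local significance levels and `g[i][j]` the fraction of H_i's level passed to H_j when H_i is rejected.

```python
def graphical_test(p, alpha_local, g):
    """Sequentially rejective test based on a weighted transfer graph."""
    alpha_local = dict(alpha_local)               # local levels, keyed by hypothesis
    g = {i: dict(row) for i, row in g.items()}    # transfer weights between hypotheses
    rejected = set()
    while True:
        # Find a hypothesis testable at its current local level.
        live = [i for i in alpha_local if p[i] <= alpha_local[i]]
        if not live:
            return rejected
        j = live[0]
        rejected.add(j)
        # Transfer H_j's level along its outgoing edges, then rewire the graph.
        new_alpha = {i: alpha_local[i] + alpha_local[j] * g[j].get(i, 0.0)
                     for i in alpha_local if i != j}
        new_g = {}
        for i in new_alpha:
            new_g[i] = {}
            for k in new_alpha:
                if i == k:
                    continue
                denom = 1.0 - g[i].get(j, 0.0) * g[j].get(i, 0.0)
                new_g[i][k] = ((g[i].get(k, 0.0)
                                + g[i].get(j, 0.0) * g[j].get(k, 0.0)) / denom
                               if denom > 0 else 0.0)
        alpha_local, g = new_alpha, new_g
```

With two hypotheses given equal levels α/2 and full transfer between them, the procedure reduces to Bonferroni-Holm, which provides a quick sanity check.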
It was generally felt that the graphical illustrations of multiple testing strategies quickly become confusing for all but the simplest cases. The general consensus was that these figures are probably not easier to communicate to clinical collaborators as claimed by the authors.
All in all, this paper and the Dmitrienko et al. paper have been very helpful in understanding multiple testing procedures, which medical statisticians frequently come across when analysing clinical trials or genome sequence data.
|
Nigel Stallard |
20 Nov 2013
|
Statistical Analysis Plan for Changing Case Order to Optimise Patterns of Performance in Screening (CO-OPS) Randomised Controlled Trial.
It was useful to get lots of statisticians together to discuss my statistical analysis plan and help to improve it. It brought up issues that I can resolve before analysing and publishing my data. - Sian Phillips
|
Sian Phillips |
14 Jan 2014
|
White, I. R., Royston, P. and Wood, A. M. (2011). Multiple imputation using chained equations: Issues and guidance for practice. Statistics in Medicine, 30(4): 377-399.
The meeting focused on the White et al. paper, looking at the use of chained equations for the purpose of multiple imputation. The paper gives an overview of the basic procedure, and then considers different model structures and approaches that can be used for differing types of dataset. In general it provided a very clear overview of how this procedure can be implemented in practice, provided you have a dataset that meets the appropriate criteria for doing so. However, the discussion soon diverged into consideration of more complex cases, where the approach to take is not so obvious. These include cases with a low proportion of missing data (is imputation necessary?), a high proportion of missing data (can the models be trusted?), a large number of variables (convergence problems), or where the model fails because of perfect prediction. Whilst the paper does mention these issues, it does not provide definitive ways to deal with them (at least not ones that satisfied the group), so further research will often be necessary to address issues specific to the dataset one is analysing. - Joshua Pink
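To make the basic procedure concrete, here is a toy, hand-rolled chained-equations loop in Python. It is illustrative only (in practice one would use dedicated software such as the mice package in R); the simulated data and the linear imputation models are our own assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
x1 = rng.normal(size=n)                          # fully observed
x2 = 1.0 + 2.0 * x1 + rng.normal(0, 0.5, n)      # partially observed
x3 = -0.5 + x1 + 0.5 * x2 + rng.normal(0, 0.5, n)
m2 = rng.random(n) < 0.3                         # missingness indicators
m3 = rng.random(n) < 0.3
x2[m2], x3[m3] = np.nan, np.nan

def ols_impute(y, X, miss, rng):
    # Fit a regression on the complete cases, then draw imputations from
    # the fitted model plus residual noise (one "proper" imputation step).
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd[~miss], y[~miss], rcond=None)
    sigma = (y[~miss] - Xd[~miss] @ beta).std()
    y = y.copy()
    y[miss] = Xd[miss] @ beta + rng.normal(0, sigma, miss.sum())
    return y

# Initialise missing values with observed means, then cycle the equations.
x2[m2] = np.nanmean(x2)
x3[m3] = np.nanmean(x3)
for _ in range(10):                              # chained-equation sweeps
    x2 = ols_impute(x2, np.column_stack([x1, x3]), m2, rng)
    x3 = ols_impute(x3, np.column_stack([x1, x2]), m3, rng)
```

Repeating the whole cycle several times with different seeds would yield the multiple completed datasets that are then analysed and combined with Rubin's rules.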
|
Andrea Marshall |
25 Feb 2014
|
Chakraborty, B., Collins, L. M., Strecher, V. J. and Murphy, S. A. (2009). Developing multicomponent interventions using fractional factorial designs. Statistics in Medicine, 28: 2687-2708.
In this seminar, we discussed the paper by Chakraborty et al. (2009). This article describes a natural, yet uncommon, experimental approach to developing and refining multi-component interventions in the health sciences. Such interventions can be composed of behavioural, delivery or implementation factors in addition to medications. An example is a smoking cessation intervention, which may combine simple advice from a healthcare professional, emotional support, nicotine replacement therapy and other medication.
Conventional studies in the health sciences compare a single intervention with a control (either placebo or standard treatment). The proposed method would instead test the effects of several factors (and their interactions) on the outcome, with the aim of selecting the important ones.
The biggest problem with this type of approach is the feasibility of testing all relevant combinations, and of choosing which combinations to test. What are the most important interactions? Which are the main/simple effects? An interaction effect is often much smaller than a main effect, so a much bigger sample size would be needed for this type of study. We discussed that healthcare professionals may be unwilling to develop studies using this methodology if they are not practically feasible. We also discussed the possibility of using this type of study design in rare diseases.
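As an illustration of how a fractional design reduces the number of combinations to test, here is a sketch (our own example, not from the paper) of an 8-run half fraction of a 2^4 factorial: the full 2^3 design is run in factors A, B, C, and the fourth factor is aliased to the three-way interaction via the defining relation D = ABC.

```python
from itertools import product

# Levels coded -1/+1; 8 runs instead of the full 16 of a 2^4 design.
runs = [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]
for a, b, c, d in runs:
    print(f"A={a:+d}  B={b:+d}  C={c:+d}  D={d:+d}")
```

The price of halving the design is aliasing: with I = ABCD, each main effect is confounded with a three-way interaction, which is acceptable only if such high-order interactions can be assumed negligible.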
From this seminar, I realised the importance of understanding the application of these statistical methods in order to follow the seminar. - Christos Mousoulis
|
Nick Parsons
|
Mar - May 2014
|
Glick, H., Doshi, J., Sonnad, S. and Polsky, D. (2007). Economic Evaluation in Clinical Trials. Oxford University Press.
This book provides a gentle introduction to health economic analysis in clinical trials. The authors explain the most important concepts of economic evaluations and illustrate them with real-life examples. The main issues commonly found in health economics in clinical trials are touched upon. These issues were introduced, but the theoretical background to the methods used to deal with them was generally not expanded on. An area notably absent from the book is Bayesian methods in economic evaluations.
We are of the opinion that the book is useful as an introductory text and provides sufficient detail to allow a statistician or other reasonably numerate researcher to conduct a basic health economic (HE) analysis. If the intention is to conduct research into HE methodology, a more theoretical introduction will be required.
A club member also suggested the following paper as a good introductory read: Petrou, S. and Gray, A. (2011). Economic evaluation alongside randomised controlled trials: design, conduct, analysis, and reporting. BMJ, 342: d1548. doi: 10.1136/bmj.d1548.
|
|
18 Mar
|
Ch2. Designing economic evaluations in clinical trials
|
Siew Wan Hee |
|
Ch 3. Valuing medical service use
|
Nigel Stallard |
|
Ch 4. Assessing quality-adjusted life years
|
Tom Hamborg |
29 Apr
|
Ch 5. Analyzing cost
|
Jason Madan |
|
Ch 6. Analyzing censored cost
|
Joshua Pink |
|
Ch 7. Comparing cost and effect: point estimates for cost-effectiveness ratios and net monetary benefit
|
Nick Parsons |
20 May
|
Ch 8. Understanding sampling uncertainty: the concepts
|
Ric Crossman |
|
Ch 9. Sampling uncertainty: calculation, sample size and power, and decision criteria
|
Peter Kimani |
|
Ch 10. Evaluating transferability of the results from trials
|
Helen Parsons |
|
Ch 11. Relevance of trial-based economic analyses
|
Melina Dritsaki |
24 June 2014
|
National Research Council, Institute of Medicine (2001). Small Clinical Trials: Issues and Challenges. The National Academies Press.
In this meeting we discussed the Institute of Medicine guideline on small clinical trials. The report defines a small clinical trial as a trial which cannot be adequately powered due to the lack of a sufficient number of research participants. This situation might occur, for example, in rare disease settings or in studies of unique patient populations such as astronauts. The report recommends 6 study designs and 7 analysis methods which lend themselves to the small trial scenario. It concludes with general recommendations for the conduct of small clinical trials.
The report was written more than 10 years ago, and the consensus during the meeting was that it is showing its age in several ways. Many of the suggested analysis methods relate to the efficient use of available data – something which is desirable in any clinical trial – and, in fact, some of these methods are commonly used nowadays. Some of the definitions and names used are outdated or are used differently these days, which led to some confusion. However, despite its age, the two main recommendations made by the authors are still valid today:
1. Whenever possible, an adequately powered large-scale trial should be conducted.
2. Further research into efficient designs and analysis methods for small clinical trials is needed.
|
Siew Wan Hee & Tom Hamborg
|
29 July 2014
|
Weiss, H. (2013). The SIR model and the foundations of public health. Materials Matemàtics, 3.
The review by Weiss provided an introduction to the susceptible-infected-recovered (SIR) model of infectious disease transmission. The objective of the discussion was how to include stochastic parameters (e.g. from a discrete event simulation) in the deterministic framework of an SIR model, so as to construct a decision-making model that reflects the uncertainties from both components. Three possible ways of achieving this were discussed: i) consecutively iterating between the two models over short time intervals; ii) combining both components into a single simulation model, but with parameters identified separately for each component; iii) combining both components into a single simulation model, with parameters for that model being jointly estimated. Each of these approaches involves a trade-off between the validity of the solution and the computational complexity of evaluating it.
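For reference, the deterministic SIR dynamics (dS/dt = -βSI/N, dI/dt = βSI/N - γI, dR/dt = γI) can be sketched with a simple Euler integration, into which stochastic parameter draws could later be fed; the parameter values below are illustrative, not taken from the review.

```python
def sir(beta, gamma, s0, i0, r0, dt=0.01, t_max=100.0):
    """Euler integration of the deterministic SIR compartment model."""
    n = s0 + i0 + r0
    s, i, r = s0, i0, r0
    for _ in range(int(t_max / dt)):
        new_inf = beta * s * i / n * dt   # S -> I transitions this step
        new_rec = gamma * i * dt          # I -> R transitions this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# R0 = beta/gamma = 5, so a large epidemic is expected.
s, i, r = sir(beta=0.5, gamma=0.1, s0=990.0, i0=10.0, r0=0.0)
```

Replacing the fixed `beta` and `gamma` with draws from an external simulation, and re-running the integration per draw, corresponds to the first of the three coupling strategies discussed above.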
|
Joshua Pink |