March 10-13, 2024

ENAR 2024 Educational Program | SHORT COURSES

Short Courses are offered as full- or half-day courses. The extended length of these courses allows attendees to obtain an in-depth understanding of the topic. These courses often integrate seminar lectures covering foundational concepts with hands-on lab sessions that allow attendees to put these concepts into practice.


Sunday, March 10 | 8:00 am – 5:00 pm
SC1 | Bayesian Modelling of Epidemics: From Population to Individual-level Models

Instructors:
Rob Deardon, Department of Mathematics & Statistics and Faculty of Veterinary Medicine, University of Calgary
Caitlin Ward, School of Public Health, University of Minnesota

Course Description:

Following the COVID-19 pandemic, there has been an understandable increase in interest in epidemic and transmission models. However, transmission processes have always been of interest across public health, agriculture, and ecology.

Inference for such models is complicated by the fact that we often have latent variables (e.g., infection times). Additionally, we often wish to account for complex heterogeneities in the population, since populations do not tend to mix homogeneously. This often leads to a need for spatial and/or network-based models. Typically, inference for such models is carried out in a Bayesian Markov chain Monte Carlo (MCMC) framework, with latent or uncertain variables such as event times handled through data augmentation.

In this workshop, we will examine characteristics of such transmission models, and inference for them, starting with the classic SIR population-level model, and expanding into more complex individual-level models. Topics will be investigated using real epidemic data on diseases such as Ebola and COVID-19, and via simulation. In addition, we will address how to fit these models in the presence of data uncertainty using data-augmented MCMC. This will all be done in R using the packages deSolve, Nimble and EpiILM.
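
For orientation, here is a minimal sketch, not taken from the course materials, of the classic SIR population-level model mentioned above, solved deterministically with the deSolve package (one of the packages the course uses); all parameter values are purely illustrative.

```r
# Minimal illustrative sketch: deterministic SIR model solved with deSolve.
# Parameter values are made up for illustration only.
library(deSolve)

sir <- function(time, state, parms) {
  with(as.list(c(state, parms)), {
    dS <- -beta * S * I / N              # susceptibles become infected
    dI <-  beta * S * I / N - gamma * I  # infections minus recoveries
    dR <-  gamma * I                     # recoveries
    list(c(dS, dI, dR))
  })
}

parms <- c(beta = 0.3, gamma = 0.1, N = 1000)  # transmission and recovery rates
state <- c(S = 999, I = 1, R = 0)              # initial compartment sizes
times <- seq(0, 150, by = 1)

out <- ode(y = state, times = times, func = sir, parms = parms)
head(out)
```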

Statistical/Programming Knowledge Required:
It is assumed that the audience will have a basic knowledge of statistics and statistical models (e.g., linear regression, maximum likelihood estimation). A basic knowledge of R and Bayesian statistics is desirable, but not essential.

Instructor Biographies:

Rob Deardon is a Professor of Biostatistics with a joint position in the Faculty of Veterinary Medicine and Department of Mathematics & Statistics at the University of Calgary. Much of his recent work has been in the area of infectious disease modelling, but he also works in Bayesian & computational statistics, experimental design, disease surveillance methods and spatiotemporal modelling. He has published 75+ papers in peer-reviewed journals. He has trained over 80 graduate and postgraduate trainees, and currently has a research group consisting of 15. He has served as associate editor of a number of journals including the Journal of the Royal Statistical Society (Series C) and the Canadian Journal of Statistics. He is currently the Graduate Coordinator of the Interdisciplinary Biostatistics Graduate Program at Calgary and has recently served a 2-year term as the Chair of the Statistics Section of the Canadian NSERC Discovery Grants Math/Stats Evaluation Group.

Caitlin Ward is an Assistant Professor in the Division of Biostatistics in the University of Minnesota School of Public Health, United States. Her research focuses on the development of Bayesian models in settings with complex or correlated data, such as infectious disease modeling and spatiotemporal disease mapping, as well as improving computational techniques for implementing these methods. She has published 25+ papers in peer-reviewed journals. Her PhD, received in 2021 from the University of Iowa, was on Bayesian methods for spatiotemporal epidemic models. She has received a number of awards including a Canadian Statistical Sciences Institute Distinguished Postdoctoral Fellowship and the Milford E. Barnes Award for Outstanding Graduate Students.


Sunday, March 10 | 8:00 am – 5:00 pm
SC2 | Targeted Learning in the tlverse: Techniques and Tools for Causal Machine Learning

Instructors:
Mark van der Laan, Division of Biostatistics, University of California at Berkeley
Alan Hubbard, Division of Biostatistics, University of California at Berkeley
Ivana Malenica, Department of Statistics, Harvard University
Nima Hejazi, Department of Biostatistics, Harvard T.H. Chan School of Public Health, Harvard University
Rachael V. Phillips, Division of Biostatistics, University of California at Berkeley

Course Description:

Great care is required when disentangling intricate relationships for causal and statistical inference in medicine, public health, marketing, political science, and myriad other fields. However, traditional statistical practice often ignores complexities that exist in real-world problems, for example, by avoiding interaction terms in regression analysis because such terms complicate and obfuscate the interpretation of results. The field of Targeted Learning (TL) offers an alternative to such practices by outlining a modern statistical framework that unifies semiparametric theory, machine learning, and causal inference. This workshop provides a comprehensive introduction to TL and its accompanying free and open-source software ecosystem, the tlverse (https://github.com/tlverse). It will be of interest to statisticians and data scientists who wish to apply cutting-edge statistical and causal inference approaches to rigorously formalize and answer substantive scientific questions. This workshop incorporates discussion and hands-on R programming exercises, allowing participants to familiarize themselves with methodology and tools that translate to improvements in real-world data analytic practice.
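
As a rough illustration of the kind of estimator the TL framework produces, here is a minimal, self-contained sketch in base R of targeted maximum likelihood estimation of an average treatment effect with a binary outcome, on simulated data. The workshop itself uses the tlverse packages rather than this hand-rolled version, and every data-generating and modeling choice below is purely illustrative.

```r
# Hand-rolled illustrative TMLE for the average treatment effect (binary outcome).
# The course uses the tlverse packages; this base-R sketch only conveys the idea.
set.seed(1)
n <- 500
W <- rnorm(n)                          # baseline covariate
A <- rbinom(n, 1, plogis(0.5 * W))     # treatment assignment
Y <- rbinom(n, 1, plogis(-1 + A + W))  # binary outcome

# Step 1: initial estimate of the outcome regression Q(A, W)
Qfit <- glm(Y ~ A + W, family = binomial)
Q1 <- predict(Qfit, newdata = data.frame(A = 1, W = W), type = "response")
Q0 <- predict(Qfit, newdata = data.frame(A = 0, W = W), type = "response")
QA <- ifelse(A == 1, Q1, Q0)

# Step 2: estimate the propensity score g(W) and form the "clever covariate"
gfit <- glm(A ~ W, family = binomial)
g1 <- predict(gfit, type = "response")
H <- A / g1 - (1 - A) / (1 - g1)

# Step 3: targeting step -- fluctuate the initial fit along the clever covariate
eps <- coef(glm(Y ~ -1 + H + offset(qlogis(QA)), family = binomial))
Q1_star <- plogis(qlogis(Q1) + eps / g1)
Q0_star <- plogis(qlogis(Q0) - eps / (1 - g1))

# Step 4: plug-in (targeted) estimate of the average treatment effect
mean(Q1_star - Q0_star)
```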

Statistical/Programming Knowledge Required:
Participants are highly recommended to have had prior training in basic statistical concepts, such as confounding, probability distributions, (linear and logistic) regression, hypothesis testing and confidence intervals. Advanced knowledge of mathematical statistics may be useful but is not necessary. Familiarity with the R programming language is essential.

Instructor Biographies:

Dr. Mark van der Laan is the Jiann-Ping Hsu/Karl E. Peace Professor of Biostatistics and Statistics at the University of California, Berkeley. He has made contributions to survival analysis, semiparametric statistics, multiple testing, and causal inference. He also developed the targeted maximum likelihood methodology and general theory for super learning. He is a founding editor of the Journal of Causal Inference and the International Journal of Biostatistics. He has authored four books on Targeted Learning, censored data, and multiple testing, authored over 300 publications, and graduated over 50 Ph.D. students. He received the COPSS Presidents' Award in 2005, the Mortimer Spiegelman Award in 2004, and the van Dantzig Award in 2005.

Dr. Alan Hubbard is a Professor and the Head of Biostatistics at the University of California, Berkeley, Co-Director of the Center of Targeted Machine Learning and Causal Inference (https://ctml.berkeley.edu/), and Head of the Computational Biology Core of the SuperFund Center at UC Berkeley (NIH/EPA), as well as a consulting statistician on several federally funded and foundation projects. He has also worked on projects ranging from the molecular biology of aging to epidemiology and infectious disease modeling, but most of his work has focused on semi-parametric estimation in high-dimensional data. His current methods research focuses on precision medicine, variable importance, statistical inference for data-adaptive parameters, and statistical software implementing targeted learning methods. He is currently working in several areas of applied research, including early childhood development in developing countries, environmental genomics, and comparative effectiveness research. Most recently, he has concentrated on using complex patient data to improve prediction for acute trauma patients.

Dr. Ivana Malenica (https://imalenica.github.io/) is a Postdoctoral Researcher in the Department of Statistics (https://statistics.fas.harvard.edu/) at Harvard University and a Wojcicki and Troper Data Science Fellow at the Harvard Data Science Initiative (https://datascience.harvard.edu/). She obtained her PhD in Biostatistics at UC Berkeley working with Mark van der Laan, where she was a Berkeley Institute for Data Science Fellow and an NIH Biomedical Big Data Fellow. Her research interests span non/semi-parametric theory, causal inference, and machine learning, with an emphasis on personalized health and dependent settings. Most of her current work involves causal inference with time and network dependence, online learning, optimal individualized treatment, reinforcement learning, and adaptive sequential designs.

Dr. Nima Hejazi (https://nimahejazi.org) is an Assistant Professor of Biostatistics at the Harvard T.H. Chan School of Public Health. His research interests center on causal inference and statistical machine learning (or “causal machine learning”), focusing on the development of efficient, model-agnostic or assumption-lean inferential methods. Nima is often motivated by topics from non- and semi-parametric inference; high-dimensional inference; and complex study designs (e.g., outcome-dependent two-phase sampling, sequentially adaptive clinical trials). He is also deeply interested in high-performance statistical computing and is a passionate advocate for open-source software and the critical role it plays in the promotion of transparency, reproducibility, and “data analytic hygiene” in the practice of applied statistics. Recently, Nima has been captivated by the rich statistical issues and pressing public health challenges common in clinical trials and observational studies evaluating the efficacy of vaccines and/or therapeutics for infectious diseases (HIV/AIDS, COVID-19) and in infectious disease epidemiology.

Dr. Rachael Phillips is a Biostatistician at UC Berkeley’s Center of Targeted Machine Learning and Causal Inference (CTML, https://ctml.berkeley.edu/). She has a PhD in Biostatistics, an MA in Biostatistics, a BS in Biology, and a BA in Mathematics. Motivated by issues arising in healthcare, the projects Rachael has pursued include the development of (i) clinical algorithm frameworks and guidelines; (ii) real-world data (RWD) analysis methodologies for generating and evaluating real-world evidence (RWE); and (iii) graduate-level biostatistics courses and other educational material on causal inference and Targeted Learning (TL). Related to this work, she is also interested in study design; human-computer interaction; open-source software development; and statistical analysis pre-specification, automation, and reproducibility. At CTML, Rachael collaborates with pharmaceutical researchers at Novo Nordisk. Throughout her PhD studies, she worked closely with Dr. Susan Gruber, Dr. Mark van der Laan, and FDA collaborators in the Center for Drug Evaluation and Research (CDER) on an FDA-supported project. She also maintains active research with clinical researchers at the University of California, San Francisco.


Sunday, March 10 | 8:00 am – 12:00 pm
SC3 | A Practical Course in Difference-in-Differences

Instructor:
Laura Hatfield, Harvard Medical School

Course Description:

This course aims to provide participants with a solid grounding in methods for Difference-in-Differences (DID), a popular method for quasi-experimental causal inference in the social sciences.

The course begins by exploring target estimands, focusing on the comparison between what happened and what would have happened (i.e., the counterfactual). Various approaches to proxy the counterfactual, such as policy targets, extrapolations, and comparison groups, will be discussed. Then we will dive into DID methods, including the required causal assumptions (parallel trends, no spillovers, no hidden levels of treatment, and no anticipation) and the non-testability of these assumptions.

Selecting appropriate comparison groups is crucial for ensuring the plausibility of causal assumptions. The course will provide strategies for identifying suitable comparison groups that satisfy the assumptions, with special attention to matching strategies. Confounding in DID can defy our intuition from cross-sectional studies. We will explore the limitations of adjusting for observed confounders using matching, weighting, or regression.

Estimation methods must align with the target estimand, and participants will learn how seemingly innocuous changes to regression models can impact causal assumptions. Analyses for staggered adoption will be discussed, focusing on alternative approaches to two-way fixed effects models. We will also discuss inference, especially for small numbers of clusters, and highlight alternative approaches such as aggregation and permutation.
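
As a point of reference, the canonical 2x2 difference-in-differences estimate is simply the coefficient on the group-by-period interaction in a regression. The base-R sketch below, on simulated data, is purely illustrative and is not part of the course, which itself involves no programming.

```r
# Illustrative 2x2 difference-in-differences on simulated data (not course material).
set.seed(1)
n <- 2000
treated <- rbinom(n, 1, 0.5)              # treated-group indicator
post    <- rbinom(n, 1, 0.5)              # post-period indicator
y <- 1 + 0.5 * treated + 0.3 * post +     # group and time effects
     2.0 * treated * post + rnorm(n)      # true effect of 2 on the treated, post-period

# Under parallel trends (and the other assumptions above), the coefficient on
# treated:post estimates the average treatment effect on the treated (ATT).
fit <- lm(y ~ treated * post)
summary(fit)$coefficients["treated:post", ]
```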

Sensitivity analyses play a vital role in assessing the robustness of causal inferences. Participants will learn about non-inferiority/equivalence tests as an alternative to conventional parallel trends testing as well as placebo tests, event study plots, negatively correlated control groups, and worst-case differential trends. DID-related methods, such as synthetic controls, lagged dependent variables, and remixes of existing techniques, will also be introduced. The course concludes with a literature round-up, highlighting new developments and a few useful reviews and tutorials.

Statistical/Programming Knowledge Required:
This course is intended for researchers who want to be responsible users of difference-in-differences methods but do not have time to keep up with the deluge of new methods developments. Attendees should be familiar with statistics on the level of a year-long graduate-level course in statistical modeling/inference, especially regression-based estimation and inference. Familiarity with basic concepts of causal inference (e.g., potential outcomes, confounding, target estimands, identification assumptions) will also be helpful. The course includes no programming, but the tools recommended to implement the techniques discussed in the course will primarily be in R.

Instructor Biography:

Dr. Hatfield is an Associate Professor of Health Care Policy (Biostatistics) at Harvard Medical School's Department of Health Care Policy. Her methods research focuses on causal inference in non-randomized settings, especially using difference-in-differences, and quantifying variation in health care utilization, outcomes, and quality using clustering and hierarchical Bayesian models. She co-directs the Health Policy Data Science Lab and leads the Data & Methods Core of a National Institute on Aging-funded Program Project titled “Improving Medicare in an Era of Change.” She is the PI of an AHRQ-funded R01 entitled “Examining payment and delivery model impacts on health equity using novel quasi-experimental causal inference methods.”


Sunday, March 10 | 8:00 am – 12:00 pm
SC4 | Model-Assisted Designs: Make Adaptive Clinical Trials Easy and Accessible

Instructors:
Ying Yuan, The University of Texas MD Anderson Cancer Center
J. Jack Lee, PhD, University of Texas MD Anderson Cancer Center

Course Description:

Drug development and clinical research face the challenges of prohibitively high costs, high failure rates, long trial durations, and slow accrual. One important approach to addressing these pressing issues is to use novel adaptive designs, which unfortunately can be hampered by the requirements of complicated statistical modeling, demanding computation, and expensive infrastructure for implementation. This short course provides an overview of model-assisted designs, a new class of designs developed to simplify the implementation of adaptive designs in practice. Model-assisted designs are derived from rigorous statistical theory, and thus possess superior operating characteristics and great flexibility, while being as simple to implement as algorithm-based designs. Easy-to-use web-based Shiny applications and downloadable standalone programs will be introduced to facilitate study design and conduct. The main application areas include adaptive dose finding, adaptive toxicity and efficacy evaluation, posterior and predictive probabilities for interim monitoring of study endpoints, outcome-adaptive randomization, hierarchical models, multi-arm multi-stage designs, and platform designs. Lessons learned from real trial examples and practical considerations for conducting adaptive designs will also be presented.
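
To give a flavor of the Bayesian machinery involved, the snippet below sketches posterior-probability interim monitoring of a binary endpoint under a Beta-Binomial model in base R; the prior, data, and cutoff are all made up for illustration and are not taken from the course.

```r
# Illustrative Bayesian interim monitoring of a binary endpoint (not course material).
a0 <- 1; b0 <- 1      # Beta(1, 1) prior on the response rate p
x  <- 14; n  <- 30    # responses observed among n patients so far (made-up data)
p0 <- 0.30            # historical (null) response rate

# Posterior Pr(p > p0 | data) under the conjugate Beta-Binomial model
post_prob <- 1 - pbeta(p0, a0 + x, b0 + n - x)
post_prob
# A design might, e.g., stop for futility if post_prob falls below a pre-specified cutoff.
```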

Statistical/Programming Knowledge Required:
Basic knowledge of statistical inference and Bayesian statistics (e.g., prior and posterior)

Instructor Biographies:

Ying Yuan is the Bettyann Asche Murray Distinguished Professor and Deputy Chair of the Department of Biostatistics at The University of Texas MD Anderson Cancer Center. Dr. Yuan is an internationally renowned researcher in innovative Bayesian adaptive designs. The designs and software developed by Dr. Yuan’s lab (www.trialdesign.org) have been widely used in medical research institutes and pharmaceutical companies. The BOIN design, developed by Dr. Yuan’s team, is a groundbreaking oncology dose-finding design that has been recognized by the FDA as a fit-for-purpose drug development tool. Dr. Yuan is an elected Fellow of the American Statistical Association and the lead author of two books, “Bayesian Designs for Phase I-II Clinical Trials” and “Model-Assisted Bayesian Designs for Dose Finding and Optimization: Methods and Applications,” both published by Chapman & Hall/CRC.

Dr. J. Jack Lee is Professor of Biostatistics and holder of the Kenedy Foundation Chair in Cancer Research at The University of Texas MD Anderson Cancer Center. His areas of statistical research include the design and analysis of clinical trials, Bayesian adaptive designs, statistical computation and graphics, drug combination studies, and biomarker identification and validation. He is an elected Fellow of the American Statistical Association, the Society for Clinical Trials, and the American Association for the Advancement of Science. He has more than 500 publications in statistical and medical journals. He co-authored two books, “Bayesian Adaptive Methods for Clinical Trials” and “Model-Assisted Bayesian Designs for Dose Finding and Optimization: Methods and Applications.”


Sunday, March 10 | 1:00 pm – 5:00 pm
SC5 | Improving Precision and Power in Randomized Trials by Leveraging Baseline Variables

Instructors:
Michael Rosenblum, Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore
Kelly van Lancker, Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Ghent, Belgium
Joshua Betz, Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore

Course Description:

In May 2023, the U.S. Food and Drug Administration (FDA) released guidance for industry on “Adjustment for Covariates in Randomized Clinical Trials for Drugs and Biological Products”. Covariate adjustment is a statistical analysis method for improving precision and power in clinical trials by adjusting for pre-specified, prognostic baseline variables. Here, the term “covariates” refers to baseline variables, that is, variables measured before randomization, such as age, gender, BMI, and comorbidities. The resulting sample size reductions can lead to substantial cost savings and more ethical trials, since they avoid exposing more participants than necessary to experimental treatments. Though covariate adjustment is recommended by the FDA and the European Medicines Agency (EMA), many trials do not exploit the available information in baseline variables or only make use of the baseline measurement of the outcome.

In Part 1, we introduce the concept of covariate adjustment. We explain what covariate adjustment is, how it works, when it may be useful to apply, and how to implement it (in a preplanned way that is robust to model misspecification) for a variety of scenarios.
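
For intuition, here is a minimal sketch in base R, on simulated data, of the basic phenomenon: adjusting for a prognostic baseline covariate shrinks the standard error of the treatment-effect estimate in a randomized trial. The data-generating choices are purely illustrative, and the course covers far more general, model-robust estimators.

```r
# Illustrative precision gain from covariate adjustment in a simulated two-arm trial.
set.seed(1)
n <- 400
W <- rnorm(n)                        # prognostic baseline covariate
A <- rbinom(n, 1, 0.5)               # randomized treatment assignment
Y <- 1 + 0.5 * A + 2 * W + rnorm(n)  # continuous outcome

unadjusted <- lm(Y ~ A)              # simple difference in means
adjusted   <- lm(Y ~ A + W)          # ANCOVA-type covariate-adjusted estimator

# Both target the same treatment effect; the adjusted standard error is smaller.
rbind(unadjusted = summary(unadjusted)$coefficients["A", 1:2],
      adjusted   = summary(adjusted)$coefficients["A", 1:2])
```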

In Part 2, we present a new statistical method that enables us to easily combine covariate adjustment with group sequential designs. The result will be faster, more efficient trials for many disease areas, without sacrificing validity or power. This approach can lead to faster trials even when the experimental treatment is ineffective; this may be more ethical in settings where it is desirable to stop as early as possible to avoid unnecessary exposure to side effects.

In Part 3, we demonstrate the impact of covariate adjustment using completed trial data sets in multiple disease areas. We provide step-by-step, clear documentation of how to apply the software in each setting. Participants will have the time to apply the software tools on the different datasets in small groups.

Statistical/Programming Knowledge Required:
Participants should have a basic understanding of randomized trials, regression models, and survival analysis. Familiarity with R is helpful but not required.

Instructor Biographies:

Michael Rosenblum is a Professor of Biostatistics at Johns Hopkins Bloomberg School of Public Health. His research is in causal inference with a focus on developing new statistical methods and software for the design and analysis of randomized trials, with clinical applications in HIV, Alzheimer’s disease, stroke, and cardiac resynchronization devices. He is funded by the Johns Hopkins Center for Excellence in Regulatory Science and Innovation for the project: “Statistical methods to improve precision and reduce the required sample size in many phase 2 and 3 clinical trials, including COVID-19 trials, by covariate adjustment.”

Kelly Van Lancker is a postdoctoral research fellow in the Department of Applied Mathematics, Computer Science and Statistics at Ghent University (Belgium). She obtained her PhD in statistics from Ghent University. Her primary research interests are causal inference methods, and in particular covariate adjustment, in clinical trials.

Josh Betz is an Assistant Scientist in the Biostatistics department of the Johns Hopkins Bloomberg School of Public Health, and part of the Johns Hopkins Biostatistics Center. His research includes the design, monitoring, and analysis of randomized trials in practice and developing software to assist with randomized trial design and analysis.


Sunday, March 10 | 1:00 pm – 5:00 pm
SC6 | Decompositions of Model Comparison Criteria for Quantifying Importance and Contribution of Longitudinal Biomarkers to the Fit of Survival Data: Theory, Methods, and Applications

Instructor:
Ming-Hui Chen, University of Connecticut

Course Description:

Joint modeling of longitudinal and time-to-event outcomes has become more popular in the analysis of patient-reported outcomes (PROs) or quality of life (QOL) measures for the purpose of evaluating the efficacy and tolerability of cancer treatment. In oncology applications, information from the patients' perspectives can be useful in evaluating actual patients' experiences on dimensions known to be important to them and also associated with treatment outcomes. One of the critical issues in joint modeling is how to evaluate the distinct effects of longitudinal and time-to-event outcomes on the fit of a joint model. In this regard, decompositions of AIC, BIC, DIC, the logarithm of the pseudo marginal likelihood (LPML), and Bayesian C-Index have been recently developed to assess the fit of each component of the joint model as well as to determine the importance and contribution of longitudinal biomarkers to the model fit of the survival data.

The course starts with an overview of joint models of longitudinal and survival data, in which the survival components include the Cox model, cure rate models, and competing risks models. The course then presents the detailed development of decompositions of AIC, BIC, DIC, the logarithm of the pseudo marginal likelihood (LPML), and in-sample and out-of-sample Bayesian C-Indices. The course will further introduce the SAS macro JMFit, which implements a variety of popular joint models and provides several model assessment measures, including the decompositions of AIC and BIC as well as Delta AIC and Delta BIC. The course will also present several important applications in cancer clinical trials and cancer prevention trials. The course concludes with a discussion of extensions of these new decomposition measures to other types of joint models and of their applications in redundancy analysis and in association studies between two sets of outcome variables or predictors.
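
As rough orientation only (the course develops the precise definitions and their Bayesian counterparts), the basic idea behind such decompositions is that the joint likelihood of the longitudinal outcomes y and the survival times t factors into a longitudinal piece and a conditional survival piece, so a criterion such as AIC can be split accordingly:

```latex
% Heuristic sketch of the factorization behind the decompositions
f(y, t \mid \theta) \;=\; f(y \mid \theta)\, f(t \mid y, \theta)
\quad\Longrightarrow\quad
\log L = \log L_{\mathrm{Long}} + \log L_{\mathrm{Surv}\mid\mathrm{Long}},
\qquad
\mathrm{AIC} = \mathrm{AIC}_{\mathrm{Long}} + \mathrm{AIC}_{\mathrm{Surv}\mid\mathrm{Long}}.
```

Comparing the conditional survival component with the corresponding criterion from a survival-only model (the Delta AIC and Delta BIC mentioned above) then quantifies how much the longitudinal biomarkers contribute to the fit of the survival data.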

Statistical/Programming Knowledge Required:
Model comparison criteria, Bayesian statistics, and SAS

Instructor Biography:

Dr. Ming-Hui Chen is a Board of Trustees Distinguished Professor and Head of the Department of Statistics at the University of Connecticut (UConn). He was elected a Fellow of the International Society for Bayesian Analysis in 2016, a Fellow of the Institute of Mathematical Statistics in 2007, and a Fellow of the American Statistical Association in 2005. He received the UConn AAUP Research Excellence Award in 2013, the UConn College of Liberal Arts and Sciences Excellence in Research Award in the Physical Sciences Division in 2013, the UConn Alumni Association's University Award for Faculty Excellence in Research and Creativity (Sciences) in 2014, the ICSA Distinguished Achievement Award in 2020, and the Distinguished Science Alumni Award from Purdue University in 2023. He has published 450+ peer-reviewed journal articles and five books, including two advanced graduate-level books on Bayesian survival analysis and Monte Carlo methods in Bayesian computation. He has supervised 42 PhD students. He served as President of ICSA (2013), President of the New England Statistical Society (2018-2020), and the 2022 JSM Program Chair. Currently, he is Co-Editor-in-Chief of Statistics and Its Interface, the inaugural Co-Editor-in-Chief of the New England Journal of Statistics in Data Science, and an Associate Editor for several other statistical journals.


Monday, March 11 | 8:00 am – 12:00 pm
SC7 | Statistical and Computational Methods for Microbiome Data Analysis

Instructor:
Gen Li, University of Michigan

Course Description:

The human microbiome plays a critical role in human health and disease. A thorough understanding of the microbiome and its link to health promises to revolutionize precision medicine. Vast amounts of high-throughput data have been generated from 16S rRNA sequencing or metagenomic sequencing technologies to characterize the human microbiome in different anatomical sites (e.g., oral, skin, vaginal, gut, and lung). Large collaborative efforts such as the Human Microbiome Project (HMP) have curated valuable databases. New computational and statistical methods are being developed to understand the function of microbial communities. In this short course, we will start with a brief introduction to how microbiome data are obtained and then provide detailed presentations on the statistical and computational methods for analyzing microbiome data. We will focus on preprocessing, exploratory data analysis (e.g., data visualization, dimension reduction, diversity calculation), and more advanced statistical analysis (e.g., association analysis, mediation analysis, network analysis). We will demonstrate how to use some state-of-the-art tools to conduct these analyses. Open questions and future research directions in microbiome data analysis will also be discussed. Participants will gain a solid understanding of existing methods and future directions for microbiome data analysis and will be able to perform basic processing and analysis of microbiome data after taking the short course.
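
As a small taste of the exploratory analyses mentioned above, the base-R sketch below computes Shannon alpha diversity from a toy taxa-count table; it is purely illustrative, and real analyses typically rely on dedicated microbiome packages.

```r
# Illustrative alpha-diversity calculation from a toy count table (not course material).
# Rows are samples, columns are taxa; entries are read counts.
counts <- matrix(c(10, 5, 0, 2,
                    3, 3, 3, 3,
                   20, 0, 0, 1),
                 nrow = 3, byrow = TRUE,
                 dimnames = list(paste0("sample", 1:3), paste0("taxon", 1:4)))

shannon <- function(x) {
  p <- x[x > 0] / sum(x)   # relative abundances of the observed taxa
  -sum(p * log(p))         # Shannon diversity index
}

apply(counts, 1, shannon)  # one diversity value per sample
```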

Statistical/Programming Knowledge Required:
The course aims to be inclusive and welcomes researchers with diverse backgrounds and research experience. Its primary focus is to provide an overview of the current state of method development in microbiome research, rather than delving into technical details of specific topics or methods. Prior experience or knowledge in microbiome research is not necessary to participate.

However, participants should have a basic understanding of multivariate analysis and R programming. Familiarity with high-dimensional data analysis is advantageous, although not mandatory. The course is designed to cater to a wide range of individuals interested in microbiome research, ensuring accessibility and encouraging interdisciplinary collaboration.

Instructor Biography:

Dr. Gen Li is a tenured associate professor in the Department of Biostatistics at the University of Michigan. He has extensive experience in microbiome research. He has developed novel dimension reduction, association analysis, cluster analysis, and network analysis methods for microbiome data. His microbiome-related work has been published in top statistical journals (e.g., Biometrics and the Annals of Applied Statistics) and scientific journals (e.g., the American Journal of Respiratory and Critical Care Medicine). His methodological research has been supported by several NIH grants.


Monday, March 11 | 1:00 pm – 5:00 pm
SC8 | Incorporating Diversity, Equity, and Inclusion Principles and Content into Biostatistics Courses

Instructors:
Andrea Lane, PhD, Social Science Research Institute at Duke University
Scarlett Bellamy, ScD, Boston University School of Public Health

Course Description:

As we embrace conversations about improving diversity, equity, and inclusion (DEI) in the field of biostatistics, we recognize that those principles should appear in every aspect of the profession, including coursework. By incorporating DEI into biostatistics pedagogy, instructors and trainees can cultivate a more holistic understanding of both the historical background and the current challenges in the field. Ideally, this will have the downstream effect of all students seeing themselves in the content, thus making the field more diverse, equitable, and inclusive.

This interactive short course will have two parts. The first part will cover general inclusive teaching practices. We will discuss how we can ensure that biostatistics courses are inclusive for students with different gender, sexual, and racial identities, and accessible for students with disabilities. In part two, we will engage with critical pedagogy, which directly examines and critiques societal power structures. This second part goes beyond cultivating an inclusive classroom to developing coursework that directly addresses how statistics can be used to make the world a more diverse, equitable, and inclusive place. We will share practical examples from our own experiences of how to introduce these concepts into courses without compromising course objectives and without requiring additional time for these modifications. The short course will be highly interactive and encourage open discussion where participants can share their own experiences and ideas for making biostatistics courses more diverse, equitable, and inclusive.

Statistical/Programming Knowledge Required:
None

Instructor Biographies:

Andrea Lane is an Assistant Professor of the Practice in the Social Science Research Institute at Duke University. She obtained her PhD in biostatistics from Emory University in 2022. Andrea teaches courses in the Master in Interdisciplinary Data Science (MIDS) program and the Department of Statistical Sciences. She is the co-director of the MIDS Capstone program. Andrea is passionate about statistics and data science education and incorporating diversity, equity, and inclusion principles into coursework.

Scarlett Bellamy is the Chair and Professor of Biostatistics at the Boston University School of Public Health. Prior to her arrival at BU, she was a professor in the Department of Epidemiology and Biostatistics and the Associate Dean for Diversity and Inclusion at Drexel University Dornsife School of Public Health. Before joining Drexel University in 2016, Bellamy spent 15 years at the University of Pennsylvania (UPenn) Perelman School of Medicine, where she was a professor of biostatistics. She holds a bachelor’s degree in mathematics from Hampton University and completed her doctoral training in biostatistics at the Harvard University T.H. Chan School of Public Health.

Much of Bellamy’s research centers on evaluating the efficacy of interventions in longitudinal behavioral modification trials, including cluster- and group-randomized trials. She is particularly interested in applying this methodology to address health disparities for a variety of clinical and behavioral outcomes, including HIV/AIDS, cardiovascular disease, and health-promoting behaviors. She was also PI of the Fostering Diversity in Biostatistics Workshop at the Eastern North American Region of the International Biometric Society (ENAR). This federally funded initiative aims to increase the number of underrepresented minorities in graduate training and professional careers in biostatistics.