ouR data generation
https://www.rdatagen.net/
Recent content on ouR data generation
keith.goldfeld@nyumc.org (Keith Goldfeld)

Simulating an open cohort stepped-wedge trial
https://www.rdatagen.net/post/simulating-an-open-cohort-stepped-wedge-trial/
Tue, 17 Sep 2019 00:00:00 +0000
<p>In a current multi-site study, we are using a stepped-wedge design to evaluate whether improved training and protocols can reduce prescriptions of anti-psychotic medication for home hospice care patients with advanced dementia. The study is officially called the Hospice Advanced Dementia Symptom Management and Quality of Life (HAS-QOL) Stepped Wedge Trial. Unlike my previous work with <a href="https://www.rdatagen.net/post/alternatives-to-stepped-wedge-designs/">stepped-wedge designs</a>, where individuals were measured once in the course of the study, this study will collect patient outcomes from the home hospice care EHRs over time. This means that for some patients, the data collection period straddles the transition from control to intervention.</p>
<p>Whenever I contemplate a simulation, I first think about the general structure of the data generating process before even thinking about the outcome model. In the case of a more standard two-arm randomized trial, that structure is quite simple and doesn’t require much, if any, thought. In this case, however, the overlaying of a longitudinal patient outcome process on top of a stepped-wedge design presents a little bit of a challenge.</p>
<p>Adding to the challenge is that, in addition to being a function of site- and individual-specific characteristics/effects, the primary outcome will likely be a function of time-varying factors: in particular, certain patient-level health-related factors that might contribute to the decision to prescribe anti-psychotic medications, and the time-varying intervention status, which is determined by the stepped-wedge randomization scheme. So, the simulation needs to accommodate the generation of both types of time-varying variables.</p>
<p>I’ve developed a bare-bones simulation of sites and patients to provide a structure that I can add to at some point in the future. While this is probably a pretty rare study design (though as stepped-wedge designs become more popular, it may be less rare than I am imagining), I thought the code could provide yet another example of how to approach a potentially vexing simulation in a relatively simple way.</p>
<div id="data-definition" class="section level3">
<h3>Data definition</h3>
<p>The focus here is on the structure of the data, so I am not generating any outcome data. However, in addition to generating the treatment assignment, I am creating the time-varying health status, which will affect the outcome process when I get to that.</p>
<p>In this simulation, there will be 5 sites, each followed for 25 weeks (starting with week 0). Each week, a site will have approximately 20 new patients, so we should expect to generate around <span class="math inline">\(5 \times 25 \times 20 = 2500\)</span> total patients.</p>
<p>For each patient, we will be generating a series of health status values, which range from 1 to 4, with 1 being healthiest and 4 being death. I will use a <a href="https://www.rdatagen.net/post/simstudy-1-14-update/">Markov chain</a> to generate this series. Two arguments required to simulate the Markov process are the starting state (which is created in <code>S0</code>, and here takes values 1 through 3, since no one starts in the death state) and the transition matrix <code>P</code>, which determines the probabilities of moving from one state to another.</p>
<pre class="r"><code>NPER <- 25

perDef <- defDataAdd(varname = "npatient", formula = 20, 
                     dist = "poisson")

patDef <- defDataAdd(varname = "S0", formula = "0.4;0.4;0.2",
                     dist = "categorical")

P <- t(matrix(c( 0.7, 0.2, 0.1, 0.0,
                 0.1, 0.3, 0.4, 0.2,
                 0.0, 0.1, 0.5, 0.4,
                 0.0, 0.0, 0.0, 1.0),
              nrow = 4))</code></pre>
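<p>One quick sanity check (my addition, not part of the original post) is to confirm that each row of the transition matrix <code>P</code> is a proper probability distribution over the next state, i.e. that each row sums to 1:</p>
<pre class="r"><code># each row of P gives the probabilities of moving from one state
# to each of the four states, so every row should sum to 1
rowSums(P)</code></pre>
<p>All four values should print as 1, which they do here by construction.</p>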
</div>
<div id="data-generation" class="section level3">
<h3>Data generation</h3>
<p>The data generation process starts with the sites and then proceeds to the patient level data. To begin, the five sites are generated (for now without any site-specific variables, but that could easily be modified in the future). Next, records for each site for each of the 25 periods (from week 0 to week 24) are generated; these site-level records include the number of patients to be generated for each site in each week:</p>
<pre class="r"><code>set.seed(3837263)

dsite <- genData(5, id = "site")

dper <- addPeriods(dsite, nPeriods = NPER, idvars = "site", 
                   timeid = "site.time", perName = "period")
dper <- addColumns(perDef, dper)

dper</code></pre>
<pre><code>##      site period site.time npatient
##   1:    1      0         1       17
##   2:    1      1         2       20
##   3:    1      2         3       25
##   4:    1      3         4       18
##   5:    1      4         5       23
##  ---                               
## 121:    5     20       121       17
## 122:    5     21       122       15
## 123:    5     22       123       16
## 124:    5     23       124       19
## 125:    5     24       125       20</code></pre>
<p>Now, we assign each of the five sites to its own intervention “wave”. The first site starts at the beginning of the study, week 0. The second starts 4 weeks later at week 4, and so on, until the fifth and last site starts the intervention at week 16. (Obviously, a more realistic simulation would include many more sites, but all of this can easily be scaled up.) The intervention indicator is <span class="math inline">\(I_{ct}\)</span>, and is set to 1 when cluster <span class="math inline">\(c\)</span> during week <span class="math inline">\(t\)</span> is in the intervention, and is 0 otherwise.</p>
<pre class="r"><code>dsw <- trtStepWedge(dper, "site", nWaves = 5, lenWaves = 4, 
                    startPer = 0, perName = "period",
                    grpName = "Ict")

dsw <- dsw[, .(site, period, startTrt, Ict)]</code></pre>
<p>Here are the intervention assignments for the first two sites during the first 8 weeks.</p>
<pre class="r"><code>dsw[site %in% c(1,2) & period < 8]</code></pre>
<pre><code>##     site period startTrt Ict
##  1:    1      0        0   1
##  2:    1      1        0   1
##  3:    1      2        0   1
##  4:    1      3        0   1
##  5:    1      4        0   1
##  6:    1      5        0   1
##  7:    1      6        0   1
##  8:    1      7        0   1
##  9:    2      0        4   0
## 10:    2      1        4   0
## 11:    2      2        4   0
## 12:    2      3        4   0
## 13:    2      4        4   1
## 14:    2      5        4   1
## 15:    2      6        4   1
## 16:    2      7        4   1</code></pre>
<p>To generate the patients, we start by generating the 2500 or so individual records. The single baseline factor that we include this time around is the starting health status <code>S0</code>.</p>
<pre class="r"><code>dpat <- genCluster(dper, cLevelVar = "site.time", 
                   numIndsVar = "npatient", level1ID = "id")
dpat <- addColumns(patDef, dpat)

dpat</code></pre>
<pre><code>##       site period site.time npatient   id S0
##    1:    1      0         1       17    1  2
##    2:    1      0         1       17    2  1
##    3:    1      0         1       17    3  2
##    4:    1      0         1       17    4  2
##    5:    1      0         1       17    5  1
##   ---                                       
## 2524:    5     24       125       20 2524  3
## 2525:    5     24       125       20 2525  2
## 2526:    5     24       125       20 2526  1
## 2527:    5     24       125       20 2527  1
## 2528:    5     24       125       20 2528  1</code></pre>
<p>Here is a visualization of the patients (it turns out there are 2528 of them) by site and starting point, with each point representing a patient. The color represents the intervention status: light blue is control (pre-intervention) and dark blue is intervention. Even though a patient may start in the pre-intervention period, they may actually receive services in the intervention period, as we will see further on down.</p>
<p><img src="https://www.rdatagen.net/post/2019-09-17-simulating-an-open-cohort-stepped-wedge-trial.en_files/figure-html/unnamed-chunk-6-1.png" width="672" /></p>
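<p>The plotting code isn't shown in the post, but a rough sketch of how such a figure might be drawn (my reconstruction, with guessed colors and jitter settings) merges the patient records with the treatment assignments and plots one point per patient:</p>
<pre class="r"><code>library(ggplot2)

# attach intervention status (Ict) to each patient's starting record
dplot <- merge(dpat, dsw, by = c("site", "period"))

ggplot(dplot, aes(x = period, y = factor(site), color = factor(Ict))) +
  geom_jitter(size = 0.5, height = 0.2) +
  scale_color_manual(values = c("#9ecae1", "#08519c")) +  # light/dark blue
  labs(x = "week", y = "site") +
  theme(legend.position = "none")</code></pre>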
<p>The patient health status series are generated using a Markov chain process. This particular transition matrix has an “absorbing” state, as indicated by the probability 1 in the last row of the matrix. Once a patient enters state 4, they will not transition to any other state. (In this case, state 4 is death.)</p>
<pre class="r"><code>dpat <- addMarkov(dpat, transMat = P, 
                  chainLen = NPER, id = "id", 
                  pername = "seq", start0lab = "S0")

dpat</code></pre>
<pre><code>##        site period site.time npatient   id S0 seq state
##     1:    1      0         1       17    1  2   1     2
##     2:    1      0         1       17    1  2   2     3
##     3:    1      0         1       17    1  2   3     3
##     4:    1      0         1       17    1  2   4     3
##     5:    1      0         1       17    1  2   5     4
##    ---                                                 
## 63196:    5     24       125       20 2528  1  21     4
## 63197:    5     24       125       20 2528  1  22     4
## 63198:    5     24       125       20 2528  1  23     4
## 63199:    5     24       125       20 2528  1  24     4
## 63200:    5     24       125       20 2528  1  25     4</code></pre>
<p>Now, we aren’t interested in the periods following the one where death occurs. So, we want to trim the data.table <code>dpat</code> to include only those periods leading up to state 4 and the first period in which state 4 is entered. We do this first by identifying the first time a state of 4 is encountered for each individual (and if an individual never reaches state 4, then all the individual’s records are retained, and the variable <code>.last</code> is set to the maximum number of periods <code>NPER</code>, in this case 25).</p>
<pre class="r"><code>dlast <- dpat[, .SD[state == 4][1,], by = id][, .(id, .last = seq)]
dlast[is.na(.last), .last := NPER]
dlast</code></pre>
<pre><code>##         id .last
##    1:    1     5
##    2:    2    13
##    3:    3     2
##    4:    4     6
##    5:    5     3
##   ---           
## 2524: 2524     7
## 2525: 2525     5
## 2526: 2526    19
## 2527: 2527    20
## 2528: 2528     8</code></pre>
<p>Next, we use the <code>dlast</code> data.table to “trim” <code>dpat</code>. We further trim the data set so that we do not have patient-level observations that extend beyond the overall follow-up period:</p>
<pre class="r"><code>dpat <- dlast[dpat][seq <= .last][ , .last := NULL][]
dpat[, period := period + seq - 1]
dpat <- dpat[period < NPER]
dpat</code></pre>
<pre><code>##          id site period site.time npatient S0 seq state
##     1:    1    1      0         1       17  2   1     2
##     2:    1    1      1         1       17  2   2     3
##     3:    1    1      2         1       17  2   3     3
##     4:    1    1      3         1       17  2   4     3
##     5:    1    1      4         1       17  2   5     4
##    ---                                                 
## 12608: 2524    5     24       125       20  3   1     3
## 12609: 2525    5     24       125       20  2   1     2
## 12610: 2526    5     24       125       20  1   1     1
## 12611: 2527    5     24       125       20  1   1     1
## 12612: 2528    5     24       125       20  1   1     1</code></pre>
<p>And finally, we merge the patient data with the stepped-wedge treatment assignment data to create the final data set. The individual outcomes for each week could now be generated, because we would have all the baseline and time-varying information in a single data set.</p>
<pre class="r"><code>dpat <- merge(dpat, dsw, by = c("site","period"))
setkey(dpat, id, period)
dpat <- delColumns(dpat, c("site.time", "seq", "npatient"))
dpat</code></pre>
<pre><code>##        site period   id S0 state startTrt Ict
##     1:    1      0    1  2     2        0   1
##     2:    1      1    1  2     3        0   1
##     3:    1      2    1  2     3        0   1
##     4:    1      3    1  2     3        0   1
##     5:    1      4    1  2     4        0   1
##    ---                                       
## 12608:    5     24 2524  3     3       16   1
## 12609:    5     24 2525  2     2       16   1
## 12610:    5     24 2526  1     1       16   1
## 12611:    5     24 2527  1     1       16   1
## 12612:    5     24 2528  1     1       16   1</code></pre>
<p>Here is what the individual trajectories of health state look like. In the plot, each column represents a different site, and each row represents a different starting week. For example, the fifth row represents patients who appear for the first time in week 4. Sites 1 and 2 are already in the intervention in week 4, so none of these patients will transition from control to intervention. However, patients in sites 3 through 5 enter in the pre-intervention stage in week 4, and transition into the intervention at different points, depending on the site.</p>
<p><img src="https://www.rdatagen.net/post/2019-09-17-simulating-an-open-cohort-stepped-wedge-trial.en_files/figure-html/unnamed-chunk-11-1.png" width="1056" /></p>
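<p>Again, the plotting code isn't included in the post; here is a sketch of how the faceted figure might be constructed (my reconstruction, with guessed aesthetics), with sites as columns and starting weeks as rows:</p>
<pre class="r"><code>library(ggplot2)

# each patient's starting week is the first period in which they appear
dplot <- copy(dpat)
dplot[, startWeek := min(period), by = id]

ggplot(dplot, aes(x = period, y = state, group = id, color = factor(Ict))) +
  geom_line(alpha = 0.3) +
  facet_grid(startWeek ~ site) +
  theme(legend.position = "none")</code></pre>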
<p>The basic structure is in place, so we are ready to extend this simulation to include more covariates, random effects, and outcomes. And once we’ve done that, we can explore analytic approaches.</p>
<p>
<small><font color="darkkhaki">This study is supported by the National Institutes of Health National Institute of Aging R61AG061904. The views expressed are those of the author and do not necessarily represent the official position of the funding organizations.</font></small>
</p>
</div>
Analyzing a binary outcome arising out of within-cluster, pair-matched randomization
https://www.rdatagen.net/post/analyzing-a-binary-outcome-in-a-study-with-within-cluster-pair-matched-randomization/
Tue, 03 Sep 2019 00:00:00 +0000
<p>A key motivating factor for the <code>simstudy</code> package and much of this blog is that simulation can be super helpful in understanding how best to approach an unusual, or at least unfamiliar, analytic problem. About six months ago, I <a href="https://www.rdatagen.net/post/a-case-where-prospecitve-matching-may-limit-bias/">described</a> the DREAM Initiative (Diabetes Research, Education, and Action for Minorities), a study that used a slightly innovative randomization scheme to ensure that two comparison groups were evenly balanced across important covariates. At the time, we hadn’t finalized the analytic plan. But, now that we have started actually randomizing and recruiting (yes, in that order, oddly enough), it is important that we do so, with the help of a little simulation.</p>
<div id="the-study-design" class="section level3">
<h3>The study design</h3>
<p>The <a href="https://www.rdatagen.net/post/a-case-where-prospecitve-matching-may-limit-bias/">original post</a> has the details about the design and matching algorithm (and code). The randomization is taking place at 20 primary care clinics, and patients within these clinics are matched based on important characteristics before randomization occurs. There is little or no risk that patients in the control arm will be “contaminated” or affected by the intervention that is taking place, which will minimize the effects of clustering. However, we may not want to ignore the clustering altogether.</p>
</div>
<div id="possible-analytic-solutions" class="section level3">
<h3>Possible analytic solutions</h3>
<p>Given that the primary outcome is binary, one reasonable procedure to assess whether or not the intervention is effective is McNemar’s test, which is typically used for paired dichotomous data. However, this approach has two limitations. First, McNemar’s test does not take into account the clustered nature of the data. Second, the test is just that, a test; it does not provide an estimate of effect size (and the associated confidence interval).</p>
<p>So, in addition to McNemar’s test, I considered four additional analytic approaches to assess the effect of the intervention: (1) Durkalski’s extension of McNemar’s test to account for clustering, (2) conditional logistic regression, which takes into account stratification and matching, (3) standard logistic regression with specific adjustment for the three matching variables, and (4) mixed effects logistic regression with matching covariate adjustment and a clinic-level random intercept. (In the mixed effects model, I assume the treatment effect does not vary by site, since I have also assumed that the intervention is delivered in a consistent manner across the sites. These may or may not be reasonable assumptions.)</p>
<p>While I was interested to see how the two tests (McNemar and the extension) performed, my primary goal was to see if any of the regression models was superior. In order to do this, I wanted to compare the methods in a scenario without any intervention effect, and in another scenario where there <em>was</em> an effect. I was interested in comparing bias, error rates, and variance estimates.</p>
</div>
<div id="data-generation" class="section level3">
<h3>Data generation</h3>
<p>The data generation process parallels the earlier <a href="https://www.rdatagen.net/post/a-case-where-prospecitve-matching-may-limit-bias/">post</a>. The treatment assignment is made in the context of the matching process, which I am not showing this time around. Note that in this initial example, the outcome <code>y</code> depends on the intervention <code>rx</code> (i.e. there <em>is</em> an intervention effect).</p>
<pre class="r"><code>library(simstudy)

### defining the data

defc <- defData(varname = "ceffect", formula = 0, variance = 0.4, 
                dist = "normal", id = "cid")

defi <- defDataAdd(varname = "male", formula = .4, dist = "binary")
defi <- defDataAdd(defi, varname = "age", formula = 0, variance = 40)
defi <- defDataAdd(defi, varname = "bmi", formula = 0, variance = 5)

defr <- defDataAdd(varname = "y", 
  formula = "-1 + 0.08*bmi - 0.3*male - 0.08*age + 0.45*rx + ceffect", 
  dist = "binary", link = "logit")

### generating the data

set.seed(547317)

dc <- genData(20, defc)
di <- genCluster(dc, "cid", 60, "id")
di <- addColumns(defi, di)

### matching and randomization within cluster (cid)

library(parallel)
library(Matching)

RNGkind("L'Ecuyer-CMRG")  # to set seed for parallel process

### See addendum for dmatch code

dd <- rbindlist(mclapply(1:nrow(dc), 
                         function(x) dmatch(di[cid == x]),
                         mc.set.seed = TRUE))

### generate outcome

dd <- addColumns(defr, dd)
setkey(dd, pair)

dd</code></pre>
<pre><code>##      cid ceffect  id male   age     bmi rx pair y
##   1:   1   1.168  11    1  4.35  0.6886  0 1.01 1
##   2:   1   1.168  53    1  3.85  0.2215  1 1.01 1
##   3:   1   1.168  51    0  6.01 -0.9321  0 1.02 0
##   4:   1   1.168  58    0  7.02  0.1407  1 1.02 1
##   5:   1   1.168  57    0  9.25 -1.3253  0 1.03 1
##  ---                                             
## 798:   9  -0.413 504    1 -8.72 -0.0767  1 9.17 0
## 799:   9  -0.413 525    0  1.66  3.5507  0 9.18 0
## 800:   9  -0.413 491    0  4.31  2.6968  1 9.18 0
## 801:   9  -0.413 499    0  7.36  0.6064  0 9.19 0
## 802:   9  -0.413 531    0  8.05  0.8068  1 9.19 0</code></pre>
<p>Based on the outcomes of each individual, each pair can be assigned to a particular category that describes the pair&#39;s outcomes: both fail, both succeed, only the control patient succeeds, or only the intervention patient succeeds. These category counts can be represented in a <span class="math inline">\(2 \times 2\)</span> contingency table. The counts are the number of pairs in each of the four possible pairwise outcomes. For example, there were 173 pairs where the outcome was determined to be unsuccessful for both the intervention and control arms.</p>
<pre class="r"><code>dpair <- dcast(dd, pair ~ rx, value.var = "y")

dpair[, control := factor(`0`, levels = c(0,1), 
                          labels = c("no success", "success"))]
dpair[, rx := factor(`1`, levels = c(0, 1), 
                     labels = c("no success", "success"))]

dpair[, table(control, rx)]</code></pre>
<pre><code>##              rx
## control       no success success
##   no success         173     102
##   success              69      57</code></pre>
<p>Here is a figure that depicts the <span class="math inline">\(2 \times 2\)</span> matrix, providing a visualization of how the treatment and control group outcomes compare. (The code is in the addendum in case anyone wants to see the lengths I took to make this simple graphic.)</p>
<p><img src="https://www.rdatagen.net/post/2019-09-03-analyzing-a-binary-outcome-in-a-study-with-within-cluster-pair-matched-randomization.en_files/figure-html/unnamed-chunk-4-1.png" width="576" /></p>
<div id="mcnemars-test" class="section level4">
<h4>McNemar’s test</h4>
<p>McNemar’s test requires the data to be in table format, and the test really only takes into consideration the cells which represent disagreement between treatment arms. In terms of the matrix above, this would be the lower left and upper right quadrants.</p>
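<p>For reference, if <span class="math inline">\(b\)</span> and <span class="math inline">\(c\)</span> denote the two discordant counts (here 102 and 69), the continuity-corrected test statistic is <span class="math inline">\(\chi^2 = (|b - c| - 1)^2 / (b + c)\)</span>, which is compared against a <span class="math inline">\(\chi^2\)</span> distribution with 1 degree of freedom. With these counts, <span class="math inline">\((|102 - 69| - 1)^2 / 171 \approx 6.0\)</span>.</p>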
<pre class="r"><code>ddc <- dcast(dd, pair ~ rx, value.var = "y")
dmat <- ddc[, .N, keyby = .(`0`,`1`)][, matrix(N, 2, 2, byrow = T)]
mcnemar.test(dmat)</code></pre>
<pre><code>##
## McNemar's Chi-squared test with continuity correction
##
## data: dmat
## McNemar's chi-squared = 6, df = 1, p-value = 0.01</code></pre>
<p>Based on the p-value = 0.01, we would reject the null hypothesis that the intervention has no effect.</p>
</div>
<div id="durkalski-extension-of-mcnemars-test" class="section level4">
<h4>Durkalski extension of McNemar’s test</h4>
<p>Durkalski’s test also requires the data to be in tabular form, though there essentially needs to be a table for each cluster. The <code>clust.bin.pair</code> function needs us to separate the table into vectors <code>a</code>, <code>b</code>, <code>c</code>, and <code>d</code>, where each element in each of the vectors is a count for a specific cluster. Vector <code>a</code> is the collection of counts for the upper left-hand quadrants, <code>b</code> is for the upper right-hand quadrants, etc. We have 20 clusters, so each of the four vectors has length 20. Much of the work done in the code below is just getting the data into the right form for the function.</p>
<pre class="r"><code>library(clust.bin.pair)
ddc <- dcast(dd, cid + pair ~ rx, value.var = "y")
ddc[, ypair := 2*`0` + 1*`1`]
dvec <- ddc[, .N, keyby=.(cid, ypair)]
allpossible <- data.table(expand.grid(1:20, 0:3))
setnames(allpossible, c("cid","ypair"))
setkey(dvec, cid, ypair)
setkey(allpossible, cid, ypair)
dvec <- dvec[allpossible]
dvec[is.na(N), N := 0]
a <- dvec[ypair == 0, N]
b <- dvec[ypair == 1, N]
c <- dvec[ypair == 2, N]
d <- dvec[ypair == 3, N]
clust.bin.pair(a, b, c, d, method = "durkalski")</code></pre>
<pre><code>##
## Durkalski's Chi-square test
##
## data: a, b, c, d
## chi-square = 5, df = 1, p-value = 0.03</code></pre>
<p>Again, the p-value, though larger, leads us to reject the null.</p>
</div>
<div id="conditional-logistic-regression" class="section level4">
<h4>Conditional logistic regression</h4>
<p>Conditional logistic regression is conditional on the pair. Since the pair is similar with respect to the matching variables, no further adjustment (beyond specifying the strata) is necessary.</p>
<pre class="r"><code>library(survival)
summary(clogit(y ~ rx + strata(pair), data = dd))$coef["rx",]</code></pre>
<pre><code>##      coef exp(coef)  se(coef)         z  Pr(>|z|)
##    0.3909    1.4783    0.1559    2.5076    0.0122</code></pre>
<p> </p>
</div>
<div id="logistic-regression-with-matching-covariates-adjustment" class="section level4">
<h4>Logistic regression with matching covariates adjustment</h4>
<p>Using logistic regression should in theory provide a reasonable estimate of the treatment effect, though given that there is clustering, I wouldn’t expect the standard error estimates to be correct. Although we are not specifically modeling the matching, by including covariates used in the matching, we are effectively estimating a model that is conditional on the pair.</p>
<pre class="r"><code>summary(glm(y ~ rx + age + male + bmi, data = dd, 
            family = "binomial"))$coef["rx",]</code></pre>
<pre><code>##   Estimate Std. Error    z value   Pr(>|z|)
##     0.3679     0.1515     2.4285     0.0152</code></pre>
<p> </p>
</div>
<div id="generalized-mixed-effects-model-with-matching-covariates-adjustment" class="section level4">
<h4>Generalized mixed effects model with matching covariates adjustment</h4>
<p>The mixed effects model merely improves on the logistic regression model by ensuring that any clustering effects are reflected in the estimates.</p>
<pre class="r"><code>library(lme4)

summary(glmer(y ~ rx + age + male + bmi + (1|cid), data = dd, 
              family = "binomial"))$coef["rx",]</code></pre>
<pre><code>##   Estimate Std. Error    z value   Pr(>|z|)
##     0.4030     0.1586     2.5409     0.0111</code></pre>
<p> </p>
</div>
</div>
<div id="comparing-the-analytic-approaches" class="section level3">
<h3>Comparing the analytic approaches</h3>
<p>To compare the methods, I generated 1000 data sets under each scenario. As I mentioned, I wanted to conduct the comparison under two scenarios: the first with no intervention effect, and the second with an effect (I will use the effect size used to generate the first data set).</p>
<p>I’ll start with no intervention effect. In this case, the outcome definition sets the true parameter of <code>rx</code> to 0.</p>
<pre class="r"><code>defr <- defDataAdd(varname = "y", 
  formula = "-1 + 0.08*bmi - 0.3*male - 0.08*age + 0*rx + ceffect", 
  dist = "binary", link = "logit")</code></pre>
<p>Using the updated definition, I generate 1000 datasets, and for each one, I apply the five analytic approaches. The results from each iteration are stored in a large list. (The code for the iterative process is shown in the addendum below.) As an example, here are the contents from the 711th iteration:</p>
<pre class="r"><code>res[[711]]</code></pre>
<pre><code>## $clr
##       coef exp(coef) se(coef)      z Pr(>|z|)
## rx -0.0263     0.974    0.162 -0.162    0.871
## 
## $glm
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -0.6583     0.1247  -5.279 1.30e-07
## rx           -0.0309     0.1565  -0.198 8.43e-01
## age          -0.0670     0.0149  -4.495 6.96e-06
## male         -0.5131     0.1647  -3.115 1.84e-03
## bmi           0.1308     0.0411   3.184 1.45e-03
## 
## $glmer
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -0.7373     0.1888   -3.91 9.42e-05
## rx           -0.0340     0.1617   -0.21 8.33e-01
## age          -0.0721     0.0156   -4.61 4.05e-06
## male         -0.4896     0.1710   -2.86 4.20e-03
## bmi           0.1366     0.0432    3.16 1.58e-03
## 
## $mcnemar
## 
##  McNemar's Chi-squared test with continuity correction
## 
## data:  dmat
## McNemar's chi-squared = 0.007, df = 1, p-value = 0.9
## 
## 
## $durk
## 
##  Durkalski's Chi-square test
## 
## data:  a, b, c, d
## chi-square = 0.1, df = 1, p-value = 0.7</code></pre>
</div>
<div id="summary-statistics" class="section level3">
<h3>Summary statistics</h3>
<p>To compare the five methods, I am first looking at the proportion of iterations where the p-value is less than 0.05, in which case we would reject the null hypothesis. (In the case where the null is true, the proportion is the Type 1 error rate; when there is truly an effect, the proportion is the power.) I am less interested in the hypothesis test than in the bias and standard errors, but the first two methods only provide a p-value, so that is all we can assess them on.</p>
<p>Next, I calculate the bias, which is the average effect estimate minus the true effect. And finally, I evaluate the standard errors by looking at the estimated standard error as well as the observed standard error (which is the standard deviation of the point estimates).</p>
<pre class="r"><code>pval <- data.frame(
  mcnm = mean(sapply(res, function(x) x$mcnemar$p.value <= 0.05)),
  durk = mean(sapply(res, function(x) x$durk$p.value <= 0.05)),
  clr = mean(sapply(res, function(x) x$clr["rx", "Pr(>|z|)"] <= 0.05)),
  glm = mean(sapply(res, function(x) x$glm["rx", "Pr(>|z|)"] <= 0.05)),
  glmer = mean(sapply(res, function(x) x$glmer["rx", "Pr(>|z|)"] <= 0.05))
)

bias <- data.frame(
  clr = mean(sapply(res, function(x) x$clr["rx", "coef"])),
  glm = mean(sapply(res, function(x) x$glm["rx", "Estimate"])),
  glmer = mean(sapply(res, function(x) x$glmer["rx", "Estimate"]))
)

se <- data.frame(
  clr = mean(sapply(res, function(x) x$clr["rx", "se(coef)"])),
  glm = mean(sapply(res, function(x) x$glm["rx", "Std. Error"])),
  glmer = mean(sapply(res, function(x) x$glmer["rx", "Std. Error"]))
)

obs.se <- data.frame(
  clr = sd(sapply(res, function(x) x$clr["rx", "coef"])),
  glm = sd(sapply(res, function(x) x$glm["rx", "Estimate"])),
  glmer = sd(sapply(res, function(x) x$glmer["rx", "Estimate"]))
)

sumstat <- round(plyr::rbind.fill(pval, bias, se, obs.se), 3)
rownames(sumstat) <- c("prop.rejected", "bias", "se.est", "se.obs")

sumstat</code></pre>
<pre><code>##                mcnm  durk   clr   glm glmer
## prop.rejected 0.035 0.048 0.043 0.038 0.044
## bias             NA    NA 0.006 0.005 0.005
## se.est           NA    NA 0.167 0.161 0.167
## se.obs           NA    NA 0.164 0.153 0.164</code></pre>
<p>In this first case, where the true underlying effect size is 0, the Type 1 error rate should be 0.05. The Durkalski test, the conditional logistic regression, and the mixed effects model are slightly below that level, but closer to it than the other two methods. All three models provide unbiased point estimates, but the standard logistic regression (glm) underestimates the standard errors. The results from the conditional logistic regression and the mixed effects model are quite close across the board.</p>
<p>Here are the summary statistics for a data set with an intervention effect of 0.45. The results are consistent with the “no effect” simulations, except that the standard logistic regression model exhibits some bias. In reality, this is not necessarily bias, but a different estimand. The model that ignores clustering is a marginal model (with respect to the site), whereas the conditional logistic regression and mixed effects models are conditional on the site. (I’ve described this phenomenon <a href="https://www.rdatagen.net/post/marginal-v-conditional/">here</a> and <a href="https://www.rdatagen.net/post/mixed-effect-models-vs-gee/">here</a>.) We are interested in the conditional effect here, so that argues for the conditional models.</p>
<p>The conditional logistic regression and the mixed effects model yielded similar estimates, though the mixed effects model had slightly higher power, which is the reason I opted to use this approach at the end of the day.</p>
<pre><code>##                mcnm  durk    clr    glm  glmer
## prop.rejected 0.766 0.731  0.784  0.766  0.796
## bias             NA    NA  0.000 -0.033 -0.001
## se.est           NA    NA  0.164  0.156  0.162
## se.obs           NA    NA  0.165  0.152  0.162</code></pre>
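<p>The small downward shift in the glm point estimates can be illustrated directly. Here is a quick sketch (my addition, separate from the study simulations) that generates one large data set with a cluster-level random intercept and a conditional log-odds ratio of 0.45, then fits a model that ignores the cluster effect; that model targets the attenuated marginal effect:</p>
<pre class="r"><code>library(simstudy)

set.seed(123)

# cluster-level random intercept with the same variance used above
defc2 <- defData(varname = "ceffect", formula = 0, variance = 0.4, 
                 dist = "normal", id = "cid")
defy <- defDataAdd(varname = "y", formula = "-1 + 0.45*rx + ceffect",
                   dist = "binary", link = "logit")

dx <- genData(500, defc2)
dx <- genCluster(dx, "cid", 200, "id")
dx <- trtAssign(dx, grpName = "rx")
dx <- addColumns(defy, dx)

# marginal estimate is attenuated toward 0 relative to the conditional 0.45
coef(glm(y ~ rx, data = dx, family = binomial))["rx"]</code></pre>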
<p>In this last case, the true underlying data generating process still includes an intervention effect but <em>no clustering</em>. In this scenario, all of the analytic approaches yield similar estimates. However, since there is no guarantee that clustering is not a factor, the mixed effects model will still be the preferred approach.</p>
<pre><code>##                mcnm  durk    clr    glm  glmer
## prop.rejected 0.802 0.774  0.825  0.828  0.830
## bias             NA    NA -0.003 -0.002 -0.001
## se.est           NA    NA  0.159  0.158  0.158
## se.obs           NA    NA  0.151  0.150  0.150</code></pre>
<p>
<small><font color="darkkhaki">The DREAM Initiative is supported by the National Institutes of Health National Institute of Diabetes and Digestive and Kidney Diseases R01DK11048. The views expressed are those of the author and do not necessarily represent the official position of the funding organizations.</font></small>
</p>
<p> </p>
</div>
<div id="addendum-multiple-datasets-and-model-estimates" class="section level3">
<h3>Addendum: multiple datasets and model estimates</h3>
<pre class="r"><code>gen <- function(nclust, m) {
  
  dc <- genData(nclust, defc)
  di <- genCluster(dc, "cid", m, "id")
  di <- addColumns(defi, di)
  
  dr <- rbindlist(mclapply(1:nrow(dc), function(x) dmatch(di[cid == x])))
  dr <- addColumns(defr, dr)
  dr[]
}

iterate <- function(ncluster, m) {
  
  dd <- gen(ncluster, m)
  
  clrfit <- summary(clogit(y ~ rx + strata(pair), data = dd))$coef
  glmfit <- summary(glm(y ~ rx + age + male + bmi, data = dd, 
                        family = binomial))$coef
  mefit <- summary(glmer(y ~ rx + age + male + bmi + (1|cid), data = dd,
                         family = binomial))$coef
  
  ## McNemar
  
  ddc <- dcast(dd, pair ~ rx, value.var = "y")
  dmat <- ddc[, .N, keyby = .(`0`, `1`)][, matrix(N, 2, 2, byrow = TRUE)]
  
  mc <- mcnemar.test(dmat)
  
  ## Clustered McNemar
  
  ddc <- dcast(dd, cid + pair ~ rx, value.var = "y")
  ddc[, ypair := 2*`0` + 1*`1`]
  
  dvec <- ddc[, .N, keyby = .(cid, ypair)]
  
  allpossible <- data.table(expand.grid(1:20, 0:3))
  setnames(allpossible, c("cid", "ypair"))
  
  setkey(dvec, cid, ypair)
  setkey(allpossible, cid, ypair)
  
  dvec <- dvec[allpossible]
  dvec[is.na(N), N := 0]
  
  a <- dvec[ypair == 0, N]
  b <- dvec[ypair == 1, N]
  c <- dvec[ypair == 2, N]
  d <- dvec[ypair == 3, N]
  
  durk <- clust.bin.pair(a, b, c, d, method = "durkalski")
  
  list(clr = clrfit, glm = glmfit, glmer = mefit, 
       mcnemar = mc, durk = durk)
}

res <- mclapply(1:1000, function(x) iterate(20, 60))</code></pre>
<p> </p>
<div id="code-to-generate-figure" class="section level4">
<h4>Code to generate figure</h4>
<pre class="r"><code>library(ggmosaic)

dpair <- dcast(dd, pair ~ rx, value.var = "y")

dpair[, control := factor(`0`, levels = c(1,0), 
                          labels = c("success", "no success"))]
dpair[, rx := factor(`1`, levels = c(0, 1), 
                     labels = c("no success", "success"))]

p <- ggplot(data = dpair) +
  geom_mosaic(aes(x = product(control, rx)))

pdata <- data.table(ggplot_build(p)$data[[1]])
pdata[, mcnemar := factor(c("diff", "same", "same", "diff"))]

textloc <- pdata[c(1, 4), .(x = (xmin + xmax)/2, y = (ymin + ymax)/2)]

ggplot(data = pdata) +
  geom_rect(aes(xmin = xmin, xmax = xmax, ymin = ymin, ymax = ymax, 
                fill = mcnemar)) +
  geom_label(data = pdata, 
             aes(x = (xmin + xmax)/2, y = (ymin + ymax)/2, label = .wt),
             size = 3.2) +
  scale_x_continuous(position = "top",
                     breaks = textloc$x,
                     labels = c("no success", "success"),
                     name = "intervention",
                     expand = c(0, 0)) +
  scale_y_continuous(breaks = textloc$y,
                     labels = c("success", "no success"),
                     name = "control",
                     expand = c(0, 0)) +
  scale_fill_manual(values = c("#6b5dd5", "grey80")) +
  theme(panel.grid = element_blank(),
        legend.position = "none",
        axis.ticks = element_blank(),
        axis.text.x = element_text(angle = 0, hjust = 0.5),
        axis.text.y = element_text(angle = 90, hjust = 0.5))</code></pre>
<p> </p>
</div>
<div id="original-matching-algorithm" class="section level4">
<h4>Original matching algorithm</h4>
<pre class="r"><code>dmatch <- function(dsamp) {
  dsamp[, rx := 0]

  dused <- NULL
  drand <- NULL
  dcntl <- NULL

  while (nrow(dsamp) > 1) {
    selectRow <- sample(1:nrow(dsamp), 1)
    dsamp[selectRow, rx := 1]

    myTr <- dsamp[, rx]
    myX <- as.matrix(dsamp[, .(male, age, bmi)])

    match.dt <- Match(Tr = myTr, X = myX,
                      caliper = c(0, 0.50, .50), ties = FALSE)

    if (length(match.dt) == 1) {  # no match
      dused <- rbind(dused, dsamp[selectRow])
      dsamp <- dsamp[-selectRow, ]
    } else {                      # match
      trt <- match.dt$index.treated
      ctl <- match.dt$index.control

      drand <- rbind(drand, dsamp[trt])
      dcntl <- rbind(dcntl, dsamp[ctl])

      dsamp <- dsamp[-c(trt, ctl)]
    }
  }

  dcntl[, pair := paste0(cid, ".", formatC(1:.N, width = 2, flag = "0"))]
  drand[, pair := paste0(cid, ".", formatC(1:.N, width = 2, flag = "0"))]

  rbind(dcntl, drand)
}</code></pre>
</div>
</div>
simstudy updated to version 0.1.14: implementing Markov chains
https://www.rdatagen.net/post/simstudy-1-14-update/
Tue, 20 Aug 2019 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/simstudy-1-14-update/<p>I’m developing study simulations that require me to generate a sequence of health status for a collection of individuals. In these simulations, individuals gradually grow sicker over time, though sometimes they recover slightly. To facilitate this, I am using a stochastic Markov process, where the probability of a health status at a particular time depends only on the previous health status (in the immediate past). While there are packages to do this sort of thing (see for example the <a href="https://cran.r-project.org/web/packages/markovchain/index.html">markovchain</a> package), I hadn’t yet stumbled upon them while I was tackling my problem. So, I wrote my own functions, which I’ve now incorporated into the latest version of <code>simstudy</code> that is now available on <a href="https://cran.r-project.org/web/packages/simstudy/index.html">CRAN</a>. As a way of announcing the new release, here is a brief overview of Markov chains and the new functions. (See <a href="https://cran.r-project.org/web/packages/simstudy/news/news.html">here</a> for a more complete list of changes.)</p>
<div id="markov-processes" class="section level3">
<h3>Markov processes</h3>
<p>The key “parameter” of a stochastic Markov process is the transition matrix, which defines the probability of moving from one state to another (or remaining in the same state). Each row of the matrix is indexed by the current state, while the columns are indexed by the target state. The values of the matrix represent the probabilities of transitioning from the current state to the target state. The sum of the probabilities across each row must equal one.</p>
<p>In the transition matrix below, there are three states <span class="math inline">\((1, 2, 3)\)</span>. The probability of moving from state 1 to state 3 is represented by <span class="math inline">\(p_{13}\)</span>. Likewise the probability of moving from state 3 to state 2 is <span class="math inline">\(p_{32}\)</span>. And <span class="math inline">\(\sum_{j=1}^3 p_{ij} = 1\)</span> for all <span class="math inline">\(i \in (1,2,3)\)</span>.</p>
<p><span class="math display">\[
\left(
\begin{matrix}
p_{11} & p_{12} & p_{13} \\
p_{21} & p_{22} & p_{23} \\
p_{31} & p_{32} & p_{33}
\end{matrix}
\right )
\]</span></p>
<p>Here’s a possible <span class="math inline">\(3 \times 3\)</span> transition matrix:</p>
<p><span class="math display">\[
\left(
\begin{matrix}
0.5 & 0.4 & 0.1 \\
0.2 & 0.5 & 0.3 \\
0.0 & 0.0 & 1.0
\end{matrix}
\right )
\]</span></p>
<p>In this case, the probability of moving from state 1 to state 2 is <span class="math inline">\(40\%\)</span>, whereas there is no possibility that you can move from 3 to 1 or 2. (State 3 is considered to be an “absorbing” state since it is not possible to leave; if we are talking about health status, state 3 could be death.)</p>
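<p>The mechanics of a single transition are simple enough to sketch directly in <code>R</code>. This snippet is just an illustration (it is not part of <code>simstudy</code>): each row of the matrix is a probability distribution over the next state, so we can draw the next state with <code>sample</code>:</p>
<pre class="r"><code># a minimal sketch of one step of the chain described above
P <- matrix(c(0.5, 0.4, 0.1,
              0.2, 0.5, 0.3,
              0.0, 0.0, 1.0), 3, 3, byrow = T)

stopifnot(all.equal(rowSums(P), rep(1, 3)))  # each row must sum to 1

current <- 1
nxt <- sample(1:3, size = 1, prob = P[current, ])  # draw the next state

# once in the absorbing state 3, the chain stays there
sample(1:3, size = 1, prob = P[3, ])  # always returns 3</code></pre>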
</div>
<div id="function-genmarkov" class="section level3">
<h3>function genMarkov</h3>
<p>The new function <code>genMarkov</code> generates a random sequence for the specified number of individuals. (The sister function <code>addMarkov</code> is quite similar, though it allows users to add a Markov chain to an existing data set.) In addition to defining the transition matrix, you need to indicate the length of the chain to be generated for each simulated unit or person. The data can be returned either in long or wide form, depending on how you’d ultimately like to use the data. In the first case, I am generating wide format data for sequences of length 6 for 12 individuals:</p>
<pre class="r"><code>library(simstudy)

set.seed(3928398)

tmatrix <- matrix(c(0.5, 0.4, 0.1,
                    0.2, 0.5, 0.3,
                    0.0, 0.0, 1.0), 3, 3, byrow = T)

dd <- genMarkov(n = 12, transMat = tmatrix, chainLen = 6, wide = TRUE)
dd</code></pre>
<pre><code>## id S1 S2 S3 S4 S5 S6
## 1: 1 1 2 2 1 2 2
## 2: 2 1 1 2 2 2 3
## 3: 3 1 1 2 3 3 3
## 4: 4 1 2 2 1 1 2
## 5: 5 1 1 2 2 2 3
## 6: 6 1 1 1 1 1 1
## 7: 7 1 1 1 1 2 2
## 8: 8 1 1 1 1 1 1
## 9: 9 1 1 2 3 3 3
## 10: 10 1 1 2 3 3 3
## 11: 11 1 2 2 2 2 1
## 12: 12 1 2 1 1 2 1</code></pre>
<p>In long format, the output contains multiple records per id. This could be useful if you plan to estimate longitudinal models or, as in this case, create longitudinal plots:</p>
<pre class="r"><code>set.seed(3928398)
dd <- genMarkov(n = 12, transMat = tmatrix, chainLen = 6, wide = FALSE)</code></pre>
<p>Here are the resulting data (for the first two individuals):</p>
<pre class="r"><code>dd[id %in% c(1,2)]</code></pre>
<pre><code>## id period state
## 1: 1 1 1
## 2: 1 2 2
## 3: 1 3 2
## 4: 1 4 1
## 5: 1 5 2
## 6: 1 6 2
## 7: 2 1 1
## 8: 2 2 1
## 9: 2 3 2
## 10: 2 4 2
## 11: 2 5 2
## 12: 2 6 3</code></pre>
<p>And here’s a plot for each individual, showing their health status progressions over time:</p>
<p><img src="https://www.rdatagen.net/post/2019-08-20-simstudy-0-1-14-update.en_files/figure-html/unnamed-chunk-4-1.png" width="672" /></p>
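<p>The plotting code was not included here, but something along these lines, using <code>ggplot2</code> with the long-format data, would generate a similar figure (a sketch only; the aesthetic choices are guesses):</p>
<pre class="r"><code>library(ggplot2)

# one panel per individual, tracing health status over the 6 periods
ggplot(dd, aes(x = period, y = state, group = id)) +
  geom_line(color = "grey30") +
  geom_point(size = 1) +
  facet_wrap(~ id, nrow = 2) +
  scale_y_continuous(breaks = 1:3, name = "health status") +
  theme(panel.grid.minor = element_blank())</code></pre>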
<p>I do plan on sharing the details of the simulation that inspired the creation of these new functions, though I am still working out a few things. In the meantime, as always, if anyone has any suggestions or questions about simstudy, definitely let me know.</p>
</div>
Bayes models for estimation in stepped-wedge trials with non-trivial ICC patterns
https://www.rdatagen.net/post/bayes-model-to-estimate-stepped-wedge-trial-with-non-trivial-icc-structure/
Tue, 06 Aug 2019 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/bayes-model-to-estimate-stepped-wedge-trial-with-non-trivial-icc-structure/<p>Continuing a series of posts discussing the structure of intra-cluster correlations (ICC’s) in the context of a stepped-wedge trial, this latest edition is primarily interested in fitting Bayesian hierarchical models for more complex cases (though I do talk a bit more about the linear mixed effects models). The first two posts in the series focused on generating data to simulate various scenarios; the <a href="https://www.rdatagen.net/post/estimating-treatment-effects-and-iccs-for-stepped-wedge-designs/">third post</a> considered linear mixed effects and Bayesian hierarchical models to estimate ICC’s under the simplest scenario of constant between-period ICC’s. Throughout this post, I use code drawn from the previous one; I am not repeating much of it here for brevity’s sake. So, if this is all new, it is probably worth <a href="https://www.rdatagen.net/post/estimating-treatment-effects-and-iccs-for-stepped-wedge-designs/">glancing at</a> before continuing on.</p>
<div id="data-generation" class="section level3">
<h3>Data generation</h3>
<p>The data generating model this time around is only subtly different from before, but that difference is quite important. Rather than a single cluster-specific effect <span class="math inline">\(b_c\)</span>, there is now a vector of cluster effects <span class="math inline">\(\mathbf{b_c} = \left( b_{c1}, b_{c2}, \ldots, b_{cT} \right)\)</span>, where <span class="math inline">\(\mathbf{b_c} \sim MVN(\mathbf{0}, \sigma^2 \mathbf{R})\)</span> (see <a href="https://www.rdatagen.net/post/varying-intra-cluster-correlations-over-time/">this earlier post</a> for a description of the correlation matrix <span class="math inline">\(\mathbf{R}\)</span>).</p>
<p><span class="math display">\[
Y_{ict} = \mu + \beta_0t + \beta_1X_{ct} + b_{ct} + e_{ict}
\]</span></p>
<p>By altering the correlation structure of <span class="math inline">\(\mathbf{b_c}\)</span> (that is, <span class="math inline">\(\mathbf{R}\)</span>), we can change the structure of the ICC’s. (The data generation was the focus of the first two posts of this series, <a href="https://www.rdatagen.net/post/intra-cluster-correlations-over-time/">here</a> and <a href="https://www.rdatagen.net/post/varying-intra-cluster-correlations-over-time/">here</a>.) The data generating function <code>genDD</code> includes an argument where you can specify one of the two correlation structures, <em>exchangeable</em> and <em>auto-regressive</em>:</p>
<pre class="r"><code>library(simstudy)

defc <- defData(varname = "mu", formula = 0,
                dist = "nonrandom", id = "cluster")
defc <- defData(defc, "s2", formula = 0.15, dist = "nonrandom")
defc <- defData(defc, "m", formula = 15, dist = "nonrandom")

defa <- defDataAdd(varname = "Y",
                   formula = "0 + 0.10 * period + 1 * rx + cteffect",
                   variance = 2, dist = "normal")</code></pre>
<pre class="r"><code>genDD <- function(defc, defa, nclust, nperiods,
                  waves, len, start, rho, corstr) {
  dc <- genData(nclust, defc)

  dp <- addPeriods(dc, nperiods, "cluster")
  dp <- trtStepWedge(dp, "cluster", nWaves = waves, lenWaves = len,
                     startPer = start)
  dp <- addCorGen(dtOld = dp, nvars = nperiods, idvar = "cluster",
                  rho = rho, corstr = corstr, dist = "normal",
                  param1 = "mu", param2 = "s2", cnames = "cteffect")

  dd <- genCluster(dp, cLevelVar = "timeID", numIndsVar = "m",
                   level1ID = "id")
  dd <- addColumns(defa, dd)
  dd[]
}</code></pre>
</div>
<div id="constant-between-period-iccs" class="section level3">
<h3>Constant between-period ICC’s</h3>
<p>In this first scenario, the assumption is that the within-period ICC’s are larger than the between-period ICC’s, and that the between-period ICC’s are constant. This can be generated with random effects that have a correlation matrix with compound symmetry (i.e., an exchangeable structure). In this case, we will have 60 clusters and 7 time periods:</p>
<pre class="r"><code>set.seed(4119)
dcs <- genDD(defc, defa, 60, 7, 4, 1, 2, 0.6, "cs")

# correlation of "unobserved" random effects
round(cor(dcast(dcs[, .SD[1], keyby = .(cluster, period)],
                formula = cluster ~ period, value.var = "cteffect")[, 2:7]), 2)</code></pre>
<pre><code>## 0 1 2 3 4 5
## 0 1.00 0.60 0.49 0.60 0.60 0.51
## 1 0.60 1.00 0.68 0.64 0.62 0.64
## 2 0.49 0.68 1.00 0.58 0.54 0.62
## 3 0.60 0.64 0.58 1.00 0.63 0.66
## 4 0.60 0.62 0.54 0.63 1.00 0.63
## 5 0.51 0.64 0.62 0.66 0.63 1.00</code></pre>
<p><br></p>
<div id="linear-mixed-effects-model" class="section level4">
<h4>Linear mixed-effects model</h4>
<p>It is possible to use <code>lmer</code> to correctly estimate the variance components and other parameters that underlie the data generating process used in this case. The cluster-level period-specific effects are specified in the model as “cluster/period”, which indicates that the period effects are <em>nested</em> within the cluster.</p>
<pre class="r"><code>library(lme4)
lmerfit <- lmer(Y ~ period + rx + (1 | cluster/period) , data = dcs)
as.data.table(VarCorr(lmerfit))</code></pre>
<pre><code>## grp var1 var2 vcov sdcor
## 1: period:cluster (Intercept) <NA> 0.05827349 0.2413990
## 2: cluster (Intercept) <NA> 0.07816476 0.2795796
## 3: Residual <NA> <NA> 2.02075355 1.4215321</code></pre>
<p>Reading from the <code>vcov</code> column in the <code>lmer</code> output above, we can extract the <em>period:cluster</em> variance (<span class="math inline">\(\sigma_w^2\)</span>), the <em>cluster</em> variance (<span class="math inline">\(\sigma^2_v\)</span>), and the <em>residual</em> (individual level) variance (<span class="math inline">\(\sigma^2_e\)</span>). Using these three variance components, we can estimate the correlation of the cluster level effects (<span class="math inline">\(\rho\)</span>), the within-period ICC (<span class="math inline">\(ICC_{tt}\)</span>), and the between-period ICC (<span class="math inline">\(ICC_{tt^\prime}\)</span>). (See the <a href="#addendum">addendum</a> below for a more detailed description of the derivations.)</p>
</div>
<div id="correlation-rho-of-cluster-specific-effects-over-time" class="section level4">
<h4>Correlation (<span class="math inline">\(\rho\)</span>) of cluster-specific effects over time</h4>
<p>In this post, don’t confuse <span class="math inline">\(\rho\)</span> with the ICC. <span class="math inline">\(\rho\)</span> is the correlation between the cluster-level period-specific random effects. Here I am just showing that it is a function of the decomposed variance estimates provided in the <code>lmer</code> output:</p>
<p><span class="math display">\[
\rho = \frac{\sigma^2_v}{\sigma^2_v + \sigma^2_w}
\]</span></p>
<pre class="r"><code>vs <- as.data.table(VarCorr(lmerfit))$vcov
vs[2]/sum(vs[1:2]) </code></pre>
<pre><code>## [1] 0.5728948</code></pre>
<p><br></p>
</div>
<div id="within-period-icc" class="section level4">
<h4>Within-period ICC</h4>
<p>The within-period ICC is the ratio of total cluster variance relative to total variance:</p>
<p><span class="math display">\[ICC_{tt} = \frac{\sigma^2_v + \sigma^2_w}{\sigma^2_v + \sigma^2_w+\sigma^2_e}\]</span></p>
<pre class="r"><code>sum(vs[1:2])/sum(vs)</code></pre>
<pre><code>## [1] 0.06324808</code></pre>
<p><br></p>
</div>
<div id="between-period-icc" class="section level4">
<h4>Between-period ICC</h4>
<p>The between-period <span class="math inline">\(ICC_{tt^\prime}\)</span> is really just the within-period <span class="math inline">\(ICC_{tt}\)</span> adjusted by <span class="math inline">\(\rho\)</span> (see the <a href="#addendum">addendum</a>):</p>
<p><span class="math display">\[ICC_{tt^\prime} = \frac{\sigma^2_v}{\sigma^2_v + \sigma^2_w+\sigma^2_e}\]</span></p>
<pre class="r"><code>vs[2]/sum(vs) </code></pre>
<pre><code>## [1] 0.0362345</code></pre>
<p><br></p>
</div>
<div id="bayesian-model" class="section level4">
<h4>Bayesian model</h4>
<p>Now, I’ll fit a Bayesian hierarchical model, as I did <a href="https://www.rdatagen.net/post/estimating-treatment-effects-and-iccs-for-stepped-wedge-designs/">earlier</a> with the simplest constant ICC data generation process. The specification of the model in <code>stan</code> in this instance is slightly more involved, as the number of parameters has increased. In the simpler case, I only had to estimate a scalar parameter for <span class="math inline">\(\sigma_b\)</span> and a single ICC parameter. In this model definition (<code>nested_cor_cs.stan</code>), <span class="math inline">\(\mathbf{b_c}\)</span> is a vector, so we need to specify the variance-covariance matrix <span class="math inline">\(\sigma^2 \mathbf{R}\)</span>, which has dimensions <span class="math inline">\(T \times T\)</span> (defined in the <code>transformed parameters</code> block). There are <span class="math inline">\(T\)</span> random effects for each cluster, rather than one. And finally, instead of one ICC value, there are two: the within- and between-period ICC’s (defined in the <code>generated quantities</code> block).</p>
<pre class="stan"><code>data {
  int<lower=0> N;              // number of unique individuals
  int<lower=1> K;              // number of predictors
  int<lower=1> J;              // number of clusters
  int<lower=0> T;              // number of periods
  int<lower=1,upper=J> jj[N];  // group for individual
  int<lower=1> tt[N];          // period for individual
  matrix[N, K] x;              // matrix of predictors
  vector[N] y;                 // vector of outcomes
}
parameters {
  vector[K] beta;              // model fixed effects
  real<lower=0> sigmalev1;     // cluster variance (sd)
  real<lower=-1,upper=1> rho;  // correlation
  real<lower=0> sigma;         // individual-level variance (sd)
  matrix[J, T] ran;            // site-level random effects (by period)
}
transformed parameters{
  cov_matrix[T] Sigma;
  vector[N] yhat;
  vector[T] mu0;

  for (t in 1:T)
    mu0[t] = 0;

  // random effects with exchangeable correlation
  for (j in 1:(T-1))
    for (k in (j+1):T) {
      Sigma[j,k] = pow(sigmalev1,2) * rho;
      Sigma[k,j] = Sigma[j, k];
    }

  for (i in 1:T)
    Sigma[i,i] = pow(sigmalev1,2);

  for (i in 1:N)
    yhat[i] = x[i]*beta + ran[jj[i], tt[i]];
}
model {
  sigma ~ uniform(0, 10);
  sigmalev1 ~ uniform(0, 10);
  rho ~ uniform(-1, 1);

  for (j in 1:J)
    ran[j] ~ multi_normal(mu0, Sigma);

  y ~ normal(yhat, sigma);
}
generated quantities {
  real sigma2;
  real sigmalev12;
  real iccInPeriod;
  real iccBetPeriod;

  sigma2 = pow(sigma, 2);
  sigmalev12 = pow(sigmalev1, 2);
  iccInPeriod = sigmalev12/(sigmalev12 + sigma2);
  iccBetPeriod = iccInPeriod * rho;
}</code></pre>
<p>Model estimation requires creating the data set (in the form of an <code>R</code> list), compiling the <code>stan</code> model, and then sampling from the posterior to generate distributions of all parameters and generated quantities. I should conduct a diagnostic review (e.g. to assess convergence), but you’ll have to trust me that everything looked reasonable.</p>
<pre class="r"><code>library(rstan)
options(mc.cores = parallel::detectCores())

x <- as.matrix(dcs[ , .(1, period, rx)])
K <- ncol(x)
N <- dcs[, length(unique(id))]
J <- dcs[, length(unique(cluster))]
T <- dcs[, length(unique(period))]
jj <- dcs[, cluster]
tt <- dcs[, period] + 1
y <- dcs[, Y]

testdat <- list(N = N, K = K, J = J, T = T, jj = jj, tt = tt, x = x, y = y)

rt <- stanc("nested_cor_cs.stan")
sm <- stan_model(stanc_ret = rt, verbose = FALSE)

fit.cs <- sampling(sm, data = testdat, seed = 32748,
                   iter = 5000, warmup = 1000,
                   control = list(max_treedepth = 15))</code></pre>
<p>Here is a summary of results for <span class="math inline">\(\rho\)</span>, <span class="math inline">\(ICC_{tt}\)</span>, and <span class="math inline">\(ICC_{tt^\prime}\)</span>. I’ve included a comparison of the means of the posterior distributions with the <code>lmer</code> estimates, followed by a more complete (visual) description of the posterior distributions of the Bayesian estimates:</p>
<pre class="r"><code>mb <- sapply(
  rstan::extract(fit.cs, pars = c("rho", "iccInPeriod", "iccBetPeriod")),
  function(x) mean(x)
)

cbind(bayesian = round(mb, 3),
      lmer = round(c(vs[2]/sum(vs[1:2]),
                     sum(vs[1:2])/sum(vs),
                     vs[2]/sum(vs)), 3)
)</code></pre>
<pre><code>## bayesian lmer
## rho 0.576 0.573
## iccInPeriod 0.065 0.063
## iccBetPeriod 0.037 0.036</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-08-06-bayes-model-to-estimate-stepped-wedge-trial-with-non-trivial-icc-structure.en_files/figure-html/unnamed-chunk-12-1.png" width="480" /></p>
</div>
</div>
<div id="decaying-between-period-icc-over-time" class="section level3">
<h3>Decaying between-period ICC over time</h3>
<p>Now we enter somewhat uncharted territory, since there is no obvious way in <code>R</code> using the <code>lme4</code> or <code>nlme</code> packages to decompose the variance estimates when the random effects have a correlation that decays over time. This is where we may have to rely on a Bayesian approach. (I understand that <code>SAS</code> can accommodate this, but I can’t bring myself to go there.)</p>
<p>We start where we pretty much always do - generating the data. Everything is the same, except that the cluster-random effects are correlated over time; we specify a correlation structure of <em>ar1</em> (auto-regressive).</p>
<pre class="r"><code>set.seed(4119)
dar1 <- genDD(defc, defa, 60, 7, 4, 1, 2, 0.6, "ar1")

# correlation of "unobserved" random effects
round(cor(dcast(dar1[, .SD[1], keyby = .(cluster, period)],
                formula = cluster ~ period, value.var = "cteffect")[, 2:7]), 2)</code></pre>
<pre><code>## 0 1 2 3 4 5
## 0 1.00 0.60 0.22 0.20 0.18 0.06
## 1 0.60 1.00 0.64 0.45 0.30 0.23
## 2 0.22 0.64 1.00 0.61 0.32 0.30
## 3 0.20 0.45 0.61 1.00 0.61 0.49
## 4 0.18 0.30 0.32 0.61 1.00 0.69
## 5 0.06 0.23 0.30 0.49 0.69 1.00</code></pre>
<p>The model file is similar to <code>nested_cor_cs.stan</code>, except that the specifications of the variance-covariance matrix and ICC’s are now a function of <span class="math inline">\(\rho^{|t^\prime - t|}\)</span>:</p>
<pre class="stan"><code>transformed parameters{
  ⋮
  for (j in 1:T)
    for (k in 1:T)
      Sigma[j,k] = pow(sigmalev1,2) * pow(rho, abs(j-k));
  ⋮
}
generated quantities {
  ⋮
  for (j in 1:T)
    for (k in 1:T)
      icc[j, k] = sigmalev12/(sigmalev12 + sigma2) * pow(rho, abs(j-k));
  ⋮
}</code></pre>
<p>The <code>stan</code> compilation and sampling code are not shown here; they are the same as before. The posterior distribution of <span class="math inline">\(\rho\)</span> is similar to what we saw previously.</p>
<pre class="r"><code>print(fit.ar1, pars=c("rho"))</code></pre>
<pre><code>## Inference for Stan model: nested_cor_ar1.
## 4 chains, each with iter=5000; warmup=1000; thin=1;
## post-warmup draws per chain=4000, total post-warmup draws=16000.
##
## mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
## rho 0.58 0 0.08 0.41 0.53 0.58 0.64 0.73 2302 1
##
## Samples were drawn using NUTS(diag_e) at Fri Jun 28 14:13:41 2019.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).</code></pre>
<p>Now, however, we have to consider a full range of ICC estimates. Here is a plot of the posterior distribution of all ICC’s with the means of each posterior directly below. The diagonal represents the within-period (constant) ICCs, and the off-diagonals are the between-period ICC’s.</p>
<p><img src="https://www.rdatagen.net/post/2019-08-06-bayes-model-to-estimate-stepped-wedge-trial-with-non-trivial-icc-structure.en_files/figure-html/unnamed-chunk-16-1.png" width="576" /></p>
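<p>The posterior means shown in the figure can be pulled directly from the <code>stan</code> fit. Assuming the generated quantity is named <code>icc</code>, as in the snippet above, something like this sketch would do it:</p>
<pre class="r"><code># average the posterior draws of the T x T icc matrix
posterior.icc <- rstan::extract(fit.ar1, pars = "icc")$icc  # draws x T x T
round(apply(posterior.icc, c(2, 3), mean), 3)</code></pre>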
</div>
<div id="an-alternative-bayesian-model-unstructured-correlation" class="section level3">
<h3>An alternative Bayesian model: unstructured correlation</h3>
<p>Now, there is no particular reason to expect that the decay model (with an AR1 structure) would be the best model. We could try to fit an even more general model, one with minimal structure. For example, if we put no restrictions on the correlation matrix <span class="math inline">\(\mathbf{R}\)</span>, but assumed a constant variance of <span class="math inline">\(\sigma_b^2\)</span>, we might achieve a better model fit. (We could go even further and relax the assumption of constant variance over time, but I’ll leave that to you if you want to try it.)</p>
<p>In this case, we need to define <span class="math inline">\(\mathbf{R}\)</span> and specify a prior distribution (I use the Lewandowski, Kurowicka, and Joe - LKJ prior, as suggested by <code>Stan</code> documentation) and define the ICC’s in terms of <span class="math inline">\(\mathbf{R}\)</span>. Here are the relevant snippets of the <code>stan</code> model (everything else is the same as before):</p>
<pre class="stan"><code>parameters {
  ⋮
  corr_matrix[T] R;  // correlation matrix
  ⋮
}
transformed parameters{
  ⋮
  Sigma = pow(sigmalev1,2) * R;
  ⋮
}
model {
  ⋮
  R ~ lkj_corr(1);   // LKJ prior on the correlation matrix
  ⋮
}
generated quantities {
  ⋮
  for (j in 1:T)
    for (k in 1:T)
      icc[j, k] = sigmalev12/(sigmalev12 + sigma2) * R[j, k];
  ⋮
}</code></pre>
<p>Here are the means of the ICC posterior distributions alongside the means from the previous <em>auto-regressive</em> model.</p>
<p><img src="https://www.rdatagen.net/post/2019-08-06-bayes-model-to-estimate-stepped-wedge-trial-with-non-trivial-icc-structure.en_files/figure-html/unnamed-chunk-18-1.png" width="864" /></p>
<p>Looking at the unstructured model estimates on the right, it does appear that a decay model might be reasonable. (No surprise there, because in reality, it <em>is</em> totally reasonable; that’s how we generated the data.) We can use the <code>bridgesampling</code> package, which estimates marginal log likelihoods (integrated over the prior distributions of the parameters). The marginal likelihoods are used to calculate the Bayes Factor, which is the basis for comparing two competing models. Here, the marginal log-likelihood is reported. If the unstructured model were indeed an improvement (and it could very well be, because it has more parameters), then we would expect its marginal log-likelihood to be greater (less negative) than the log-likelihood for the auto-regressive model. In fact, the opposite is true, suggesting the auto-regressive model is the preferred one (of these two):</p>
<pre class="r"><code>library(bridgesampling)
bridge_sampler(fit.ar1, silent = TRUE)</code></pre>
<pre><code>## Bridge sampling estimate of the log marginal likelihood: -5132.277
## Estimate obtained in 6 iteration(s) via method "normal".</code></pre>
<pre class="r"><code>bridge_sampler(fit.ar1.nc, silent = TRUE)</code></pre>
<pre><code>## Bridge sampling estimate of the log marginal likelihood: -5137.081
## Estimate obtained in 269 iteration(s) via method "normal".</code></pre>
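<p>To formalize the comparison, the two bridge sampling objects can be passed to the <code>bf</code> function in <code>bridgesampling</code>, which returns the Bayes factor. This step wasn't shown above; it is just a sketch of the natural follow-up:</p>
<pre class="r"><code>ml.ar1 <- bridge_sampler(fit.ar1, silent = TRUE)
ml.un <- bridge_sampler(fit.ar1.nc, silent = TRUE)

# Bayes factor in favor of the auto-regressive model
bf(ml.ar1, ml.un)</code></pre>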
<p> </p>
<p><a name="addendum"></a></p>
<p> </p>
</div>
<div id="addendum---interpreting-lmer-variance-estimates" class="section level2">
<h2>Addendum - interpreting lmer variance estimates</h2>
<p>In order to show how the <code>lmer</code> variance estimates relate to the theoretical variances and correlations in the case of a constant between-period ICC, here is a simulation based on 1000 clusters. The key parameters are <span class="math inline">\(\sigma^2_b = 0.15\)</span>, <span class="math inline">\(\sigma^2_e = 2\)</span>, and <span class="math inline">\(\rho = 0.6\)</span>. And based on these values, the theoretical ICC’s are: <span class="math inline">\(ICC_{within} = 0.15/2.15 = 0.070\)</span>, and <span class="math inline">\(ICC_{between} = 0.070 \times 0.6 = 0.042\)</span>.</p>
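<p>As a quick check of the arithmetic:</p>
<pre class="r"><code>s2b <- 0.15  # sigma^2_b
s2e <- 2     # sigma^2_e
rho <- 0.6

(iccWithin <- s2b / (s2b + s2e))  # 0.0698
(iccBetween <- iccWithin * rho)   # 0.0419</code></pre>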
<pre class="r"><code>set.seed(4119)
dcs <- genDD(defc, defa, 1000, 7, 4, 1, 2, 0.6, "cs")</code></pre>
<p>The underlying correlation matrix of the cluster-level effects is what we would expect:</p>
<pre class="r"><code>round(cor(dcast(dcs[, .SD[1], keyby = .(cluster, period)],
                formula = cluster ~ period, value.var = "cteffect")[, 2:7]), 2)</code></pre>
<pre><code>## 0 1 2 3 4 5
## 0 1.00 0.59 0.59 0.61 0.61 0.61
## 1 0.59 1.00 0.61 0.60 0.61 0.64
## 2 0.59 0.61 1.00 0.59 0.61 0.61
## 3 0.61 0.60 0.59 1.00 0.59 0.62
## 4 0.61 0.61 0.61 0.59 1.00 0.60
## 5 0.61 0.64 0.61 0.62 0.60 1.00</code></pre>
<p>Here are the variance estimates from the mixed-effects model:</p>
<pre class="r"><code>lmerfit <- lmer(Y ~ period + rx + (1 | cluster/period) , data = dcs)
as.data.table(VarCorr(lmerfit))</code></pre>
<pre><code>## grp var1 var2 vcov sdcor
## 1: period:cluster (Intercept) <NA> 0.05779349 0.2404028
## 2: cluster (Intercept) <NA> 0.09143749 0.3023863
## 3: Residual <NA> <NA> 1.98894356 1.4102991</code></pre>
<p>In the way <code>lmer</code> implements nested random effects, the cluster-level period-specific effect <span class="math inline">\(b_{ct}\)</span> is decomposed into <span class="math inline">\(v_c\)</span>, a cluster-level effect, and <span class="math inline">\(w_{ct}\)</span>, a cluster time-specific effect:</p>
<p><span class="math display">\[
b_{ct} = v_c + w_{ct}
\]</span></p>
<p>Since both <span class="math inline">\(v_c\)</span> and <span class="math inline">\(w_{ct}\)</span> are independent and normally distributed (<span class="math inline">\(v_c \sim N(0,\sigma_v^2)\)</span> and <span class="math inline">\(w_{ct} \sim N(0,\sigma_w^2)\)</span>), <span class="math inline">\(var(b_{ct}) = \sigma^2_b = \sigma^2_v + \sigma^2_w\)</span>.</p>
<p>Here is the observed estimate of <span class="math inline">\(\sigma^2_v + \sigma^2_w\)</span>:</p>
<pre class="r"><code>vs <- as.data.table(VarCorr(lmerfit))$vcov
sum(vs[1:2])</code></pre>
<pre><code>## [1] 0.149231</code></pre>
<p>An estimate of <span class="math inline">\(\rho\)</span> can be extracted from the <code>lmer</code> model variance estimates:</p>
<p><span class="math display">\[
\begin{aligned}
cov(b_{ct}, b_{ct^\prime}) &= cov(v_{c} + w_{ct}, v_{c} + w_{ct^\prime}) \\
&= var(v_c) + cov(w_{ct}, w_{ct^\prime}) \\
&= \sigma^2_v + 0 \\
&= \sigma^2_v
\end{aligned}
\]</span></p>
<p><span class="math display">\[
\begin{aligned}
var(b_{ct}) &= var(v_{c}) + var(w_{ct}) \\
&= \sigma^2_v + \sigma^2_w
\end{aligned}
\]</span></p>
<p><span class="math display">\[
\begin{aligned}
cor(b_{ct}, b_{ct^\prime}) &= \frac{cov(b_{ct}, b_{ct^\prime})}{\sqrt{var(b_{ct}) var(b_{ct^\prime})} } \\
\rho &= \frac{\sigma^2_v}{\sigma^2_v + \sigma^2_w}
\end{aligned}
\]</span></p>
<pre class="r"><code>vs[2]/sum(vs[1:2])</code></pre>
<pre><code>## [1] 0.6127246</code></pre>
<p>And here are the estimates of within and between-period ICC’s:</p>
<p><span class="math display">\[ICC_{tt} = \frac{\sigma^2_b}{\sigma^2_b+\sigma^2_e} =\frac{\sigma^2_v + \sigma^2_w}{\sigma^2_v + \sigma^2_w+\sigma^2_e}\]</span></p>
<pre class="r"><code>sum(vs[1:2])/sum(vs)</code></pre>
<pre><code>## [1] 0.06979364</code></pre>
<p><span class="math display">\[
\begin{aligned}
ICC_{tt^\prime} &= \left( \frac{\sigma^2_b}{\sigma^2_b+\sigma^2_e}\right) \rho \\
\\
&= \left( \frac{\sigma^2_v + \sigma^2_w}{\sigma^2_v + \sigma^2_w+\sigma^2_e}\right) \rho \\
\\
&=\left( \frac{\sigma^2_v + \sigma^2_w}{\sigma^2_v + \sigma^2_w+\sigma^2_e} \right) \left( \frac{\sigma^2_v}{\sigma^2_v + \sigma^2_w} \right) \\
\\
&= \frac{\sigma^2_v}{\sigma^2_v + \sigma^2_w+\sigma^2_e}
\end{aligned}
\]</span></p>
<pre class="r"><code>vs[2]/sum(vs)</code></pre>
<pre><code>## [1] 0.04276428</code></pre>
</div>
Estimating treatment effects (and ICCs) for stepped-wedge designs
https://www.rdatagen.net/post/estimating-treatment-effects-and-iccs-for-stepped-wedge-designs/
Tue, 16 Jul 2019 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/estimating-treatment-effects-and-iccs-for-stepped-wedge-designs/<p>In the last two posts, I introduced the notion of time-varying intra-cluster correlations in the context of stepped-wedge study designs. (See <a href="https://www.rdatagen.net/post/intra-cluster-correlations-over-time/">here</a> and <a href="https://www.rdatagen.net/post/varying-intra-cluster-correlations-over-time/">here</a>). Though I generated lots of data for those posts, I didn’t fit any models to see if I could recover the estimates and any underlying assumptions. That’s what I am doing now.</p>
<p>My focus here is on the simplest case, where the ICC’s are constant both within and between time periods. Typically, I would just use a mixed-effects model to estimate the treatment effect and account for variability across clusters, which is easily done in <code>R</code> using the <code>lme4</code> package; if the outcome is continuous, the function <code>lmer</code> is appropriate. I thought, however, it would also be interesting to use the <code>rstan</code> package to fit a Bayesian hierarchical model.</p>
<p>While it is always fun to explore new methods, I have a better justification for trying this approach: as far as I can tell, <code>lme4</code> (or <code>nlme</code> for that matter) cannot handle the cases with more complex patterns of between-period intra-cluster correlation that I focused on last time. A Bayesian hierarchical model should be up to the challenge. I thought that it would be best to start with a simple case before proceeding to the situation where I have no clear option in <code>R</code>. I’ll do that next time.</p>
<div id="data-generation" class="section level3">
<h3>Data generation</h3>
<p>I know I am repeating myself a little bit, but it is important to be clear about the data generation process that I am talking about here.</p>
<p><span class="math display">\[Y_{ict} = \mu + \beta_0 t + \beta_1X_{ct} + b_c + e_{ict},\]</span></p>
<p>where <span class="math inline">\(Y_{ict}\)</span> is a continuous outcome for subject <span class="math inline">\(i\)</span> in cluster <span class="math inline">\(c\)</span> during period <span class="math inline">\(t\)</span>, and <span class="math inline">\(X_{ct}\)</span> is a treatment indicator for cluster <span class="math inline">\(c\)</span> in period <span class="math inline">\(t\)</span> (either 0 or 1). The underlying structural parameters are <span class="math inline">\(\mu\)</span>, the grand mean, <span class="math inline">\(\beta_0\)</span>, the time trend, and <span class="math inline">\(\beta_1\)</span>, the treatment effect. The unobserved random effects are <span class="math inline">\(b_c \sim N(0, \sigma^2_b)\)</span>, the normally distributed cluster-level effect, and <span class="math inline">\(e_{ict} \sim N(0, \sigma^2_e)\)</span>, the normally distributed individual-level effect.</p>
<pre class="r"><code>library(simstudy)

defc <- defData(varname = "ceffect", formula = 0.0, variance = 0.15,
                dist = "normal", id = "cluster")
defc <- defData(defc, varname = "m", formula = 15, dist = "nonrandom")

defa <- defDataAdd(varname = "Y",
                   formula = "0 + 0.10 * period + 1 * rx + ceffect",
                   variance = 2, dist = "normal")

genDD <- function(defc, defa, nclust, nperiods, waves, len, start) {
  dc <- genData(nclust, defc)
  dp <- addPeriods(dc, nperiods, "cluster")
  dp <- trtStepWedge(dp, "cluster", nWaves = waves, lenWaves = len,
                     startPer = start)
  dd <- genCluster(dp, cLevelVar = "timeID", numIndsVar = "m",
                   level1ID = "id")
  dd <- addColumns(defa, dd)
  dd[]
}

set.seed(2822)
dx <- genDD(defc, defa, 60, 7, 4, 1, 2)
dx</code></pre>
<pre><code>## cluster period ceffect m timeID startTrt rx id Y
## 1: 1 0 -0.05348668 15 1 2 0 1 -0.1369149
## 2: 1 0 -0.05348668 15 1 2 0 2 -1.0030891
## 3: 1 0 -0.05348668 15 1 2 0 3 3.1169339
## 4: 1 0 -0.05348668 15 1 2 0 4 -0.8109585
## 5: 1 0 -0.05348668 15 1 2 0 5 0.2285751
## ---
## 6296: 60 6 0.10844859 15 420 5 1 6296 0.4171770
## 6297: 60 6 0.10844859 15 420 5 1 6297 1.5127632
## 6298: 60 6 0.10844859 15 420 5 1 6298 0.5194967
## 6299: 60 6 0.10844859 15 420 5 1 6299 -0.3120285
## 6300: 60 6 0.10844859 15 420 5 1 6300 2.0493244</code></pre>
</div>
<div id="using-lmer-to-estimate-treatment-effect-and-iccs" class="section level3">
<h3>Using lmer to estimate treatment effect and ICC’s</h3>
<p>As I <a href="https://www.rdatagen.net/post/intra-cluster-correlations-over-time/">derived earlier</a>, the within- and between-period ICC’s under this data generating process are constant and both equal to:</p>
<p><span class="math display">\[ICC = \frac{\sigma^2_b}{\sigma^2_b + \sigma^2_e}\]</span></p>
<p>Using a linear mixed-effects regression model we can estimate the fixed effects (the time trend and the treatment effect) as well as the random effects (cluster- and individual-level variation, <span class="math inline">\(\sigma^2_b\)</span> and <span class="math inline">\(\sigma^2_e\)</span>). The constant ICC can be estimated directly from the variance estimates.</p>
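<p>Just to be explicit about the target of estimation, here is the ICC implied by the variance assumptions used in the data definitions above (<span class="math inline">\(\sigma^2_b = 0.15\)</span> and <span class="math inline">\(\sigma^2_e = 2\)</span>) - a quick check of my own before fitting anything:</p>
<pre class="r"><code>s2_b <- 0.15
s2_e <- 2.00

(icc_true <- round(s2_b / (s2_b + s2_e), 4))</code></pre>
<pre><code>## [1] 0.0698</code></pre>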
<pre class="r"><code>library(lme4)
library(sjPlot)

lmerfit <- lmer(Y ~ period + rx + (1 | cluster), data = dx)

tab_model(lmerfit, show.icc = FALSE, show.dev = FALSE,
          show.p = FALSE, show.r2 = FALSE,
          title = "Linear mixed-effects model")</code></pre>
<table style="border-collapse:collapse; border:none;">
<caption style="font-weight: bold; text-align:left;">
Linear mixed-effects model
</caption>
<tr>
<th style="border-top: double; text-align:center; font-style:normal; font-weight:bold; padding:0.2cm; text-align:left; ">
</th>
<th colspan="2" style="border-top: double; text-align:center; font-style:normal; font-weight:bold; padding:0.2cm; ">
Y
</th>
</tr>
<tr>
<td style=" text-align:center; border-bottom:1px solid; font-style:italic; font-weight:normal; text-align:left; ">
Predictors
</td>
<td style=" text-align:center; border-bottom:1px solid; font-style:italic; font-weight:normal; ">
Estimates
</td>
<td style=" text-align:center; border-bottom:1px solid; font-style:italic; font-weight:normal; ">
CI
</td>
</tr>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; ">
(Intercept)
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:center; ">
0.09
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:center; ">
-0.03 – 0.21
</td>
</tr>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; ">
period
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:center; ">
0.08
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:center; ">
0.05 – 0.11
</td>
</tr>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; ">
rx
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:center; ">
1.03
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:center; ">
0.90 – 1.17
</td>
</tr>
<tr>
<td colspan="3" style="font-weight:bold; text-align:left; padding-top:.8em;">
Random Effects
</td>
</tr>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; padding-top:0.1cm; padding-bottom:0.1cm;">
σ<sup>2</sup>
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; padding-top:0.1cm; padding-bottom:0.1cm; text-align:left;" colspan="2">
2.07
</td>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; padding-top:0.1cm; padding-bottom:0.1cm;">
τ<sub>00</sub> <sub>cluster</sub>
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; padding-top:0.1cm; padding-bottom:0.1cm; text-align:left;" colspan="2">
0.15
</td>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; padding-top:0.1cm; padding-bottom:0.1cm;">
N <sub>cluster</sub>
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; padding-top:0.1cm; padding-bottom:0.1cm; text-align:left;" colspan="2">
60
</td>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; padding-top:0.1cm; padding-bottom:0.1cm; border-top:1px solid;">
Observations
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; padding-top:0.1cm; padding-bottom:0.1cm; text-align:left; border-top:1px solid;" colspan="2">
6300
</td>
</tr>
</table>
<p>Not surprisingly, this model recovers the parameters used in the data generation process. Here is the ICC estimate based on this sample:</p>
<pre class="r"><code>(vars <- as.data.frame(VarCorr(lmerfit))$vcov)</code></pre>
<pre><code>## [1] 0.1540414 2.0691434</code></pre>
<pre class="r"><code>(iccest <- round(vars[1]/(sum(vars)), 3))</code></pre>
<pre><code>## [1] 0.069</code></pre>
</div>
<div id="bayesian-hierarchical-model" class="section level3">
<h3>Bayesian hierarchical model</h3>
<p>To estimate the same model using Bayesian methods, I’m turning to <code>rstan</code>. If Bayesian methods are completely foreign to you or you haven’t used <code>rstan</code> before, there are incredible resources out on the internet and in bookstores. (See <a href="https://mc-stan.org/users/interfaces/rstan">here</a>, for example.) I have done some Bayesian modeling in the past and have read some excellent books on the topic, including <a href="https://xcelab.net/rm/statistical-rethinking/"><em>Statistical Rethinking</em></a> and <a href="https://sites.google.com/site/doingbayesiandataanalysis/what-s-new-in-2nd-ed"><em>Doing Bayesian Data Analysis</em></a>, though I have not read <a href="http://www.stat.columbia.edu/~gelman/book/"><em>Bayesian Data Analysis</em></a>, and I know I should.</p>
<p>To put things simplistically, the goal of this method is to generate a posterior distribution <span class="math inline">\(P(\theta | observed \ data)\)</span>, where <span class="math inline">\(\theta\)</span> is a vector of model parameters of interest. <em>Bayes’ theorem</em> provides the underlying machinery for all of this to happen:</p>
<p><span class="math display">\[P(\theta | observed \ data) = \frac{P(observed \ data | \theta)}{P(observed \ data)} P(\theta)\]</span>
<span class="math inline">\(P(observed \ data | \theta)\)</span> is the data <em>likelihood</em> and <span class="math inline">\(P(\theta)\)</span> is the prior distribution. Both need to be specified in order to generate the desired posterior distribution. The general (again, highly simplistic) idea is that candidate draws of <span class="math inline">\(\theta\)</span> are repeatedly made, and each time the likelihood is evaluated, which updates the probability of <span class="math inline">\(\theta\)</span>. At the completion of the iterations, we are left with a posterior distribution of <span class="math inline">\(\theta\)</span> (conditional on the observed data).</p>
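<p>To make this slightly more concrete, here is a toy grid approximation of Bayes’ theorem - a sketch of my own, entirely separate from the Stan model below - that computes the posterior for the mean of a normal sample by re-weighting a vague prior with the likelihood at each candidate value:</p>
<pre class="r"><code>set.seed(1)
y <- rnorm(20, mean = 1, sd = 1)          # observed data

theta <- seq(-2, 4, length.out = 1001)    # grid of candidate means
prior <- dnorm(theta, mean = 0, sd = 10)  # vague prior for theta

loglik <- sapply(theta, function(m) sum(dnorm(y, mean = m, sd = 1, log = TRUE)))

post <- exp(loglik - max(loglik)) * prior # posterior is likelihood x prior ...
post <- post / sum(post)                  # ... normalized over the grid

theta[which.max(post)]                    # posterior mode, close to mean(y)</code></pre>
<p>With such a vague prior, the posterior mode sits essentially on top of the sample mean; a tighter prior would pull it toward zero.</p>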
<p>This is my first time working with <code>Stan</code>, so it is a bit of an experiment. While things have worked out quite well in this case, I may be doing things in an unconventional (i.e. not quite correct) way, so treat this as more conceptual than tutorial - though it’ll certainly get you started.</p>
</div>
<div id="defining-the-model" class="section level3">
<h3>Defining the model</h3>
<p>In Stan, the model is specified in a separate <code>stan</code> program that is written using the Stan probabilistic programming language. The code can be saved as an external file and referenced when you want to sample from the posterior distribution. In this case, I’ve saved the following code in a file named <code>nested.stan</code>.</p>
<p>This <code>stan</code> file includes at least 3 “blocks”: <em>data</em>, <em>parameters</em>, and <em>model</em>. The data block defines the data that will be provided by the user, which includes the outcome and predictor data, as well as other information required for model estimation. The data are passed from <code>R</code> using a <code>list</code>.</p>
<p>The parameters of the model are defined explicitly in the parameters block; in this case, we have regression parameters, random effects, and variance parameters. The <em>transformed parameters</em> block provides the opportunity to create parameters that depend on data and previously defined parameters. They have no prior distributions <em>per se</em>, but can be used to simplify statements in the model block, or perhaps make the model estimation more efficient.</p>
<p>Since this is a Bayesian model, each of the parameters will have a prior distribution that can be specified in the model block; if there is no explicit specification of a prior for a parameter, Stan will use a default (non- or minimally-informative) prior distribution. The outcome model is also defined here.</p>
<p>There is also the interesting possibility of defining derived values in a block called <em>generated quantities</em>. These quantities are functions of previously defined parameters and data. In this case, we might be interested in estimating the ICC along with an uncertainty interval; since the ICC is a function of cluster- and individual-level variation, we can derive an ICC estimate for each of the iterations. At the end of the sequence of iterations, we will have a posterior distribution of the ICC.</p>
<p>Here is the <code>nested.stan</code> file used for this analysis:</p>
<pre class="stan"><code>data {
  int<lower=0> N;              // number of individuals
  int<lower=1> K;              // number of predictors
  int<lower=1> J;              // number of clusters
  int<lower=1,upper=J> jj[N];  // cluster for individual
  matrix[N, K] x;              // predictor matrix
  vector[N] y;                 // outcome vector
}

parameters {
  vector[K] beta;              // intercept, time trend, rx effect
  real<lower=0> sigmalev1;     // cluster-level standard deviation
  real<lower=0> sigma;         // individual-level standard deviation
  vector[J] ran;               // cluster-level effects
}

transformed parameters {
  vector[N] yhat;
  for (i in 1:N)
    yhat[i] = x[i] * beta + ran[jj[i]];
}

model {
  ran ~ normal(0, sigmalev1);
  y ~ normal(yhat, sigma);
}

generated quantities {
  real<lower=0> sigma2;
  real<lower=0> sigmalev12;
  real<lower=0> icc;

  sigma2 = pow(sigma, 2);
  sigmalev12 = pow(sigmalev1, 2);
  icc = sigmalev12 / (sigmalev12 + sigma2);
}</code></pre>
</div>
<div id="estimating-the-model" class="section level3">
<h3>Estimating the model</h3>
<p>Once the definition has been created, the next steps are to create the data set (as an R <code>list</code>) and call the functions to run the MCMC algorithm. The first function (<code>stanc</code>) converts the <code>.stan</code> file into <code>C++</code> code. The function <code>stan_model</code> converts the <code>C++</code> code into a stanmodel object. And the function <code>sampling</code> draws samples from the stanmodel object created in the second step.</p>
<pre class="r"><code>library(rstan)
options(mc.cores = parallel::detectCores())

x <- as.matrix(dx[, .(1, period, rx)])
K <- ncol(x)
N <- dx[, length(unique(id))]
J <- dx[, length(unique(cluster))]
jj <- dx[, cluster]
y <- dx[, Y]

testdat <- list(N = N, K = K, J = J, jj = jj, x = x, y = y)

rt <- stanc("Working/stan_icc/nested.stan")
sm <- stan_model(stanc_ret = rt, verbose = FALSE)
fit <- sampling(sm, data = testdat, seed = 3327, iter = 5000, warmup = 1000)</code></pre>
</div>
<div id="looking-at-the-diagnostics" class="section level3">
<h3>Looking at the diagnostics</h3>
<p>Once the posterior distribution has been generated, it is important to investigate how well the algorithm performed. One way to do this is to look at a series of <em>trace</em> plots that provide insight into how stable the algorithm was as it moved around the parameter space. In this example, I requested 5000 draws per chain but threw out the first 1000. Typically, the early draws show much more variability, so it is usual to ignore this “burn-in” phase when analyzing the posterior distribution.</p>
<p>The process actually generated 20,000 draws, not 5,000, because the algorithm was run four separate times in parallel; after discarding the warmup draws, 16,000 remain. The idea is that if things are behaving well, the parallel processes (called chains) should mix quite well - it should be difficult to distinguish between the chains. In the plot below each chain is represented by a different color.</p>
<p>I think it is prudent to ensure that all parameters behaved reasonably, but here I am providing trace plots only for the variance estimates, the effect size estimate, and the ICC.</p>
<pre class="r"><code>library(ggthemes)

pname <- c("sigma2", "sigmalev12", "beta[3]", "icc")
muc <- rstan::extract(fit, pars = pname, permuted = FALSE, inc_warmup = FALSE)
mdf <- data.table(melt(muc))
mdf[parameters == "beta[3]", parameters := "beta[3] (rx effect)"]

ggplot(mdf, aes(x = iterations, y = value, color = chains)) +
  geom_line() +
  facet_wrap(~ parameters, scales = "free_y") +
  theme(legend.position = "none",
        panel.grid = element_blank()) +
  scale_color_ptol()</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-07-16-estimating-treatment-effects-and-iccs-for-stepped-wedge-designs.en_files/figure-html/unnamed-chunk-7-1.png" width="672" /></p>
</div>
<div id="evaluating-the-posterior-distribution" class="section level3">
<h3>Evaluating the posterior distribution</h3>
<p>Since these trace plots look fairly stable, it is reasonable to look at the posterior distribution. A summary of the distribution reports the means and percentiles for the parameters of interest. I am reprinting the results from <code>lmer</code> so you can see that the Bayesian estimates are pretty much identical to the mixed-effects model estimates:</p>
<pre class="r"><code>print(fit, pars=c("beta", "sigma2", "sigmalev12", "icc"))</code></pre>
<pre><code>## Inference for Stan model: nested.
## 4 chains, each with iter=5000; warmup=1000; thin=1;
## post-warmup draws per chain=4000, total post-warmup draws=16000.
##
## mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
## beta[1] 0.09 0 0.06 -0.03 0.05 0.09 0.13 0.21 3106 1
## beta[2] 0.08 0 0.02 0.05 0.07 0.08 0.09 0.11 9548 1
## beta[3] 1.03 0 0.07 0.90 0.99 1.03 1.08 1.16 9556 1
## sigma2 2.07 0 0.04 2.00 2.05 2.07 2.10 2.14 24941 1
## sigmalev12 0.16 0 0.03 0.11 0.14 0.16 0.18 0.24 13530 1
## icc 0.07 0 0.01 0.05 0.06 0.07 0.08 0.11 13604 1
##
## Samples were drawn using NUTS(diag_e) at Wed Jun 26 16:18:31 2019.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).</code></pre>
<table style="border-collapse:collapse; border:none;">
<caption style="font-weight: bold; text-align:left;">
Linear mixed-effects model
</caption>
<tr>
<th style="border-top: double; text-align:center; font-style:normal; font-weight:bold; padding:0.2cm; text-align:left; ">
</th>
<th colspan="2" style="border-top: double; text-align:center; font-style:normal; font-weight:bold; padding:0.2cm; ">
Y
</th>
</tr>
<tr>
<td style=" text-align:center; border-bottom:1px solid; font-style:italic; font-weight:normal; text-align:left; ">
Predictors
</td>
<td style=" text-align:center; border-bottom:1px solid; font-style:italic; font-weight:normal; ">
Estimates
</td>
<td style=" text-align:center; border-bottom:1px solid; font-style:italic; font-weight:normal; ">
CI
</td>
</tr>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; ">
(Intercept)
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:center; ">
0.09
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:center; ">
-0.03 – 0.21
</td>
</tr>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; ">
period
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:center; ">
0.08
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:center; ">
0.05 – 0.11
</td>
</tr>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; ">
rx
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:center; ">
1.03
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:center; ">
0.90 – 1.17
</td>
</tr>
<tr>
<td colspan="3" style="font-weight:bold; text-align:left; padding-top:.8em;">
Random Effects
</td>
</tr>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; padding-top:0.1cm; padding-bottom:0.1cm;">
σ<sup>2</sup>
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; padding-top:0.1cm; padding-bottom:0.1cm; text-align:left;" colspan="2">
2.07
</td>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; padding-top:0.1cm; padding-bottom:0.1cm;">
τ<sub>00</sub> <sub>cluster</sub>
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; padding-top:0.1cm; padding-bottom:0.1cm; text-align:left;" colspan="2">
0.15
</td>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; padding-top:0.1cm; padding-bottom:0.1cm;">
N <sub>cluster</sub>
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; padding-top:0.1cm; padding-bottom:0.1cm; text-align:left;" colspan="2">
60
</td>
<tr>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; text-align:left; padding-top:0.1cm; padding-bottom:0.1cm; border-top:1px solid;">
Observations
</td>
<td style=" padding:0.2cm; text-align:left; vertical-align:top; padding-top:0.1cm; padding-bottom:0.1cm; text-align:left; border-top:1px solid;" colspan="2">
6300
</td>
</tr>
</table>
<p>The ability to produce a density plot that shows the posterior distribution of the ICC is a pretty compelling reason to use Bayesian methods. The density plot provides a quick way to assess the uncertainty of estimates for parameters that might not even be directly included in a linear mixed-effects model:</p>
<pre class="r"><code>plot_dens <- function(fit, pars, p = c(0.05, 0.95),
                      fill = "grey80", xlab = NULL) {

  qs <- quantile(extract(fit, pars = pars)[[1]], probs = p)

  x.dens <- density(extract(fit, pars = pars)[[1]])
  df.dens <- data.frame(x = x.dens$x, y = x.dens$y)

  p <- stan_dens(fit, pars = c(pars), fill = fill, alpha = .1) +
    geom_area(data = subset(df.dens, x >= qs[1] & x <= qs[2]),
              aes(x = x, y = y), fill = fill, alpha = .4)

  if (is.null(xlab)) return(p)
  else return(p + xlab(xlab))
}

plot_dens(fit, "icc", fill = "#a1be97")</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-07-16-estimating-treatment-effects-and-iccs-for-stepped-wedge-designs.en_files/figure-html/unnamed-chunk-10-1.png" width="672" /></p>
<p>Next time, I will expand the <code>stan</code> model to generate parameter estimates for cases where the within-period and between-period ICC’s are not necessarily constant. I will also explore how we compare models in the context of Bayesian models, because we won’t always know the underlying data generating process!</p>
</div>
More on those stepped-wedge design assumptions: varying intra-cluster correlations over time
https://www.rdatagen.net/post/varying-intra-cluster-correlations-over-time/
Tue, 09 Jul 2019 00:00:00 +0000 | keith.goldfeld@nyumc.org (Keith Goldfeld)
<p>In my last <a href="https://www.rdatagen.net/post/intra-cluster-correlations-over-time/">post</a>, I wrote about <em>within-</em> and <em>between-period</em> intra-cluster correlations in the context of stepped-wedge cluster randomized study designs. These are quite important to understand when figuring out sample size requirements (and models for analysis, which I’ll be writing about soon). Here, I’m extending the constant ICC assumption I presented last time by introducing some complexity into the correlation structure. Much of the code I am using can be found in last week’s post, so if anything seems a little unclear, hop over <a href="https://www.rdatagen.net/post/intra-cluster-correlations-over-time/">here</a>.</p>
<div id="different-within--and-between-period-iccs" class="section level3">
<h3>Different within- and between-period ICC’s</h3>
<p>In a scenario with constant within- and between-period ICC’s, the correlated data can be induced using a single cluster-level effect like <span class="math inline">\(b_c\)</span> in this model:</p>
<p><span class="math display">\[
Y_{ict} = \mu + \beta_0t + \beta_1X_{ct} + b_{c} + e_{ict}
\]</span></p>
<p>More complexity can be added if, instead of a single cluster level effect, we have a vector of correlated cluster/time specific effects <span class="math inline">\(\mathbf{b_c}\)</span>. These cluster-specific random effects <span class="math inline">\((b_{c1}, b_{c2}, \ldots, b_{cT})\)</span> replace <span class="math inline">\(b_c\)</span>, and the slightly modified data generating model is</p>
<p><span class="math display">\[
Y_{ict} = \mu + \beta_0t + \beta_1X_{ct} + b_{ct} + e_{ict}
\]</span></p>
<p>The vector <span class="math inline">\(\mathbf{b_c}\)</span> has a multivariate normal distribution <span class="math inline">\(N_T(0, \sigma^2_b \mathbf{R})\)</span>. This model assumes a common covariance structure across all clusters, <span class="math inline">\(\sigma^2_b \mathbf{R}\)</span>, where the general version of <span class="math inline">\(\mathbf{R}\)</span> is</p>
<p><span class="math display">\[
\mathbf{R} =
\left(
\begin{matrix}
1 & r_{12} & r_{13} & \cdots & r_{1T} \\
r_{21} & 1 & r_{23} & \cdots & r_{2T} \\
r_{31} & r_{32} & 1 & \cdots & r_{3T} \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
r_{T1} & r_{T2} & r_{T3} & \cdots & 1
\end{matrix}
\right )
\]</span></p>
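<p>If you want to generate such a vector of cluster/time effects directly (rather than with <code>simstudy</code>, which is how I do it below), a draw from the multivariate normal distribution with <code>MASS::mvrnorm</code> is all that is needed. Here is a sketch under the compound symmetry assumption:</p>
<pre class="r"><code>library(MASS)

nperiods <- 7
s2_b <- 0.15   # sigma^2_b
rho <- 0.6     # common between-period correlation

R <- matrix(rho, nrow = nperiods, ncol = nperiods)
diag(R) <- 1

# one row of cluster/time effects (b_c1, ..., b_cT) per cluster
bmat <- mvrnorm(100, mu = rep(0, nperiods), Sigma = s2_b * R)
dim(bmat)</code></pre>
<pre><code>## [1] 100   7</code></pre>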
<div id="within-period-cluster-correlation" class="section level4">
<h4>Within-period cluster correlation</h4>
<p>The covariance of any two individuals <span class="math inline">\(i\)</span> and <span class="math inline">\(j\)</span> in the same cluster <span class="math inline">\(c\)</span> and same period <span class="math inline">\(t\)</span> is</p>
<p><span class="math display">\[
\begin{aligned}
cov(Y_{ict}, Y_{jct}) &amp;= cov(\mu + \beta_0t + \beta_1X_{ct} + b_{ct} + e_{ict}, \ \mu + \beta_0t + \beta_1X_{ct} + b_{ct} + e_{jct}) \\
\\
&= cov(b_{ct}, b_{ct}) + cov(e_{ict}, e_{jct}) \\
\\
&=var(b_{ct}) + 0 \\
\\
&= \sigma^2_b r_{tt} \\
\\
&= \sigma^2_b \qquad \qquad \qquad \text{since } r_{tt} = 1, \ \forall t \in \ ( 1, \ldots, T)
\end{aligned}
\]</span></p>
<p>And I showed in the previous post that <span class="math inline">\(var(Y_{ict}) = var(Y_{jct}) = \sigma^2_b + \sigma^2_e\)</span>, so the within-period intra-cluster correlation is what we saw last time:</p>
<p><span class="math display">\[ICC_{tt} = \frac{\sigma^2_b}{\sigma^2_b+\sigma^2_e}\]</span></p>
</div>
<div id="between-period-cluster-correlation" class="section level4">
<h4>Between-period cluster correlation</h4>
<p>The covariance of any two individuals in the same cluster but two <em>different</em> time periods <span class="math inline">\(t\)</span> and <span class="math inline">\(t^{\prime}\)</span> is:</p>
<p><span class="math display">\[
\begin{aligned}
cov(Y_{ict}, Y_{jct^{\prime}}) &amp;= cov(\mu + \beta_0t + \beta_1X_{ct} + b_{ct} + e_{ict}, \ \mu + \beta_0t + \beta_1X_{ct^{\prime}} + b_{ct^{\prime}} + e_{jct^{\prime}}) \\
\\
&= cov(b_{ct}, b_{ct^{\prime}}) + cov(e_{ict}, e_{jct^{\prime}}) \\
\\
&= \sigma^2_br_{tt^{\prime}}
\end{aligned}
\]</span></p>
<p>Based on this, the between-period intra-cluster correlation is</p>
<p><span class="math display">\[ ICC_{tt^\prime} =\frac{\sigma^2_b}{\sigma^2_b+\sigma^2_e} r_{tt^{\prime}}\]</span></p>
</div>
<div id="adding-structure-to-matrix-mathbfr" class="section level4">
<h4>Adding structure to matrix <span class="math inline">\(\mathbf{R}\)</span></h4>
<p>This paper by <a href="https://journals.sagepub.com/doi/full/10.1177/0962280217734981"><em>Kasza et al</em></a>, which describes various stepped-wedge models, suggests a structured variation of <span class="math inline">\(\mathbf{R}\)</span> that is a function of two parameters, <span class="math inline">\(r_0\)</span> and <span class="math inline">\(r\)</span>:</p>
<p><span class="math display">\[
\mathbf{R} = \mathbf{R}(r_0, r) =
\left(
\begin{matrix}
1 & r_0r & r_0r^2 & \cdots & r_0r^{T-1} \\
r_0r & 1 & r_0 r & \cdots & r_0 r^{T-2} \\
r_0r^2 & r_0 r & 1 & \cdots & r_0 r^{T-3} \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
r_0r^{T-1} & r_0r^{T-2} & r_0 r^{T-3} & \cdots & 1
\end{matrix}
\right )
\]</span></p>
<p>How we specify <span class="math inline">\(r_0\)</span> and <span class="math inline">\(r\)</span> reflects different assumptions about the between-period intra-cluster correlations. I describe two particular cases below.</p>
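<p>The structured matrix <span class="math inline">\(\mathbf{R}(r_0, r)\)</span> is easy to construct in <code>R</code>. Here is a small helper function of my own (not part of <code>simstudy</code>) that makes the two special cases easy to inspect:</p>
<pre class="r"><code>buildR <- function(r0, r, nperiods) {
  R <- r0 * r^abs(outer(1:nperiods, 1:nperiods, "-"))  # r0 * r^|t - t'|
  diag(R) <- 1
  R
}

buildR(r0 = 0.6, r = 1.0, nperiods = 4)  # constant between-period correlation
buildR(r0 = 1.0, r = 0.6, nperiods = 4)  # correlation decays with distance</code></pre>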
</div>
</div>
<div id="constant-correlation-over-time" class="section level3">
<h3>Constant correlation over time</h3>
<p>In this first case, the correlation between individuals in the same cluster but different time periods is less than the correlation between individuals in the same cluster and same time period. In other words, <span class="math inline">\(ICC_{tt} \ne ICC_{tt^\prime}\)</span>. However, the between-period correlation is constant; that is, <span class="math inline">\(ICC_{tt^\prime}\)</span> is the same for all <span class="math inline">\(t\)</span> and <span class="math inline">\(t^\prime\)</span>. We have these correlations when <span class="math inline">\(r_0 = \rho\)</span> and <span class="math inline">\(r = 1\)</span>, giving</p>
<p><span class="math display">\[
\mathbf{R} = \mathbf{R}(\rho, 1) =
\left(
\begin{matrix}
1 & \rho & \rho & \cdots & \rho \\
\rho & 1 & \rho & \cdots & \rho \\
\rho & \rho & 1 & \cdots & \rho \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
\rho & \rho & \rho & \cdots & 1
\end{matrix}
\right )
\]</span></p>
<p>To simulate under this scenario, I am setting <span class="math inline">\(\sigma_b^2 = 0.15\)</span>, <span class="math inline">\(\sigma_e^2 = 2.0\)</span>, and <span class="math inline">\(\rho = 0.6\)</span>. We would expect the following ICC’s:</p>
<p><span class="math display">\[
\begin{aligned}
ICC_{tt} &= \frac{0.15}{0.15+2.00} = 0.0698 \\
\\
ICC_{tt^\prime} &= \frac{0.15}{0.15+2.00}\times0.6 = 0.0419
\end{aligned}
\]</span></p>
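<p>These expected values are easily verified with a couple of lines of <code>R</code>:</p>
<pre class="r"><code>s2_b <- 0.15
s2_e <- 2.00
rho <- 0.6

icc_within <- s2_b / (s2_b + s2_e)
icc_between <- icc_within * rho

round(c(icc_within, icc_between), 4)</code></pre>
<pre><code>## [1] 0.0698 0.0419</code></pre>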
<p>Here is the code to define and generate the data:</p>
<pre class="r"><code>defc <- defData(varname = "mu", formula = 0,
                dist = "nonrandom", id = "cluster")
defc <- defData(defc, "s2", formula = 0.15, dist = "nonrandom")

defa <- defDataAdd(varname = "Y",
                   formula = "0 + 0.10 * period + 1 * rx + cteffect",
                   variance = 2, dist = "normal")

dc <- genData(100, defc)

dp <- addPeriods(dc, 7, "cluster")
dp <- trtStepWedge(dp, "cluster", nWaves = 4, lenWaves = 1, startPer = 2)
dp <- addCorGen(dtOld = dp, nvars = 7, idvar = "cluster",
                rho = 0.6, corstr = "cs", dist = "normal",
                param1 = "mu", param2 = "s2", cnames = "cteffect")

dd <- genCluster(dp, cLevelVar = "timeID", numIndsVar = 100,
                 level1ID = "id")
dd <- addColumns(defa, dd)</code></pre>
<p>As I did in my previous post, I’ve generated 200 data sets, estimated the <em>within-</em> and <em>between-period</em> ICC’s for each data set, and computed the average for each. The plot below shows the expected values in gray and the estimated values in purple and green.</p>
<p><img src="https://www.rdatagen.net/img/post-iccvary/p2.png" width="800" /></p>
</div>
<div id="declining-correlation-over-time" class="section level3">
<h3>Declining correlation over time</h3>
<p>In this second case, we make an assumption that the correlation between individuals in the same cluster degrades over time. Here, the correlation between two individuals in adjacent time periods is stronger than the correlation between individuals in periods further apart. That is <span class="math inline">\(ICC_{tt^\prime} > ICC_{tt^{\prime\prime}}\)</span> if <span class="math inline">\(|t^\prime - t| < |t^{\prime\prime} - t|\)</span>. This structure can be created by setting <span class="math inline">\(r_0 = 1\)</span> and <span class="math inline">\(r=\rho\)</span>, giving us an auto-regressive correlation matrix <span class="math inline">\(R\)</span>:</p>
<p><span class="math display">\[
\mathbf{R} = \mathbf{R}(1, \rho) =
\left(
\begin{matrix}
1 & \rho & \rho^2 & \cdots & \rho^{T-1} \\
\rho & 1 & \rho & \cdots & \rho^{T-2} \\
\rho^2 & \rho & 1 & \cdots & \rho^{T-3} \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
\rho^{T-1} & \rho^{T-2} & \rho^{T-3} & \cdots & 1
\end{matrix}
\right )
\]</span></p>
<p>I’ve generated data using the same variance assumptions as above. The only difference in this case is that the <code>corstr</code> argument in the call to <code>addCorGen</code> is “ar1” rather than “cs” (which was used above). Here are a few of the expected correlations:</p>
<p><span class="math display">\[
\begin{aligned}
ICC_{t,t} &= \frac{0.15}{0.15+2.00} = 0.0698 \\
\\
ICC_{t,t+1} &= \frac{0.15}{0.15+2.00}\times 0.6^{1} = 0.0419 \\
\\
ICC_{t,t+2} &= \frac{0.15}{0.15+2.00}\times 0.6^{2} = 0.0251 \\
\\
\vdots
\\
ICC_{t, t+6} &amp;= \frac{0.15}{0.15+2.00}\times 0.6^{6} = 0.0033
\end{aligned}
\]</span></p>
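<p>The full sequence of declining between-period ICC’s can be generated in a single line:</p>
<pre class="r"><code>icc_within <- 0.15 / (0.15 + 2.00)
round(icc_within * 0.6^(0:6), 4)</code></pre>
<pre><code>## [1] 0.0698 0.0419 0.0251 0.0151 0.0090 0.0054 0.0033</code></pre>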
<p>And here is the code:</p>
<pre class="r"><code>defc <- defData(varname = "mu", formula = 0,
                dist = "nonrandom", id = "cluster")
defc <- defData(defc, "s2", formula = 0.15, dist = "nonrandom")

defa <- defDataAdd(varname = "Y",
                   formula = "0 + 0.10 * period + 1 * rx + cteffect",
                   variance = 2, dist = "normal")

dc <- genData(100, defc)

dp <- addPeriods(dc, 7, "cluster")
dp <- trtStepWedge(dp, "cluster", nWaves = 4, lenWaves = 1, startPer = 2)
dp <- addCorGen(dtOld = dp, nvars = 7, idvar = "cluster",
                rho = 0.6, corstr = "ar1", dist = "normal",
                param1 = "mu", param2 = "s2", cnames = "cteffect")

dd <- genCluster(dp, cLevelVar = "timeID", numIndsVar = 10,
                 level1ID = "id")
dd <- addColumns(defa, dd)</code></pre>
<p>And here are the observed average estimates (based on 200 datasets) alongside the expected values:</p>
<p><img src="https://www.rdatagen.net/img/post-iccvary/p3.png" width="800" /></p>
</div>
<div id="random-slope" class="section level3">
<h3>Random slope</h3>
<p>In this last case, I am exploring what the ICC’s look like in the context of a random effects model that includes a cluster-specific intercept <span class="math inline">\(b_c\)</span> and a cluster-specific slope <span class="math inline">\(s_c\)</span>:</p>
<p><span class="math display">\[
Y_{ict} = \mu + \beta_0 t + \beta_1 X_{ct} + b_c + s_c t + e_{ict}
\]</span></p>
<p>Both <span class="math inline">\(b_c\)</span> and <span class="math inline">\(s_c\)</span> are normally distributed with mean 0, and variances <span class="math inline">\(\sigma_b^2\)</span> and <span class="math inline">\(\sigma_s^2\)</span>, respectively. (In this example, <span class="math inline">\(b_c\)</span> and <span class="math inline">\(s_c\)</span> are uncorrelated, but that need not be the case.)</p>
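<p>If we did want the cluster-specific intercepts and slopes to be correlated, one option (a sketch of my own, not part of the original simulation) is to draw them jointly from a bivariate normal distribution; the off-diagonal covariance of 0.02 below is an arbitrary illustrative value:</p>

```r
library(MASS)  # for mvrnorm

# hypothetical covariance matrix for (b_c, s_c): variances as in the text,
# with a made-up covariance of 0.02 to induce correlation
Sigma <- matrix(c(0.15, 0.02,
                  0.02, 0.01), nrow = 2)

re <- mvrnorm(n = 100, mu = c(0, 0), Sigma = Sigma)
b_c <- re[, 1]  # cluster-specific intercepts
s_c <- re[, 2]  # cluster-specific slopes
```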
<p>Because of the random slopes, the variance of the <span class="math inline">\(Y\)</span>’s increases over time:</p>
<p><span class="math display">\[
var(Y_{ict}) = \sigma^2_b + t^2 \sigma^2_s + \sigma^2_e
\]</span></p>
<p>The same is true for the within- and between-period covariances:</p>
<p><span class="math display">\[
\begin{aligned}
cov(Y_{ict}, Y_{jct}) &= \sigma^2_b + t^2 \sigma^2_s \\
\\
cov(Y_{ict}, Y_{jct^\prime}) &= \sigma^2_b + tt^\prime \sigma^2_s \\
\end{aligned}
\]</span></p>
<p>The ICC’s that follow from these various variances and covariances are:</p>
<p><span class="math display">\[
\begin{aligned}
ICC_{tt} &= \frac{\sigma^2_b + t^2 \sigma^2_s}{\sigma^2_b + t^2 \sigma^2_s + \sigma^2_e}\\
\\
ICC_{tt^\prime} & = \frac{\sigma^2_b + tt^\prime \sigma^2_s}{\left[(\sigma^2_b + t^2 \sigma^2_s + \sigma^2_e)(\sigma^2_b + {t^\prime}^2 \sigma^2_s + \sigma^2_e)\right]^\frac{1}{2}}
\end{aligned}
\]</span></p>
<p>In this example, <span class="math inline">\(\sigma^2_s = 0.01\)</span> (and the other variances remain as before), so</p>
<p><span class="math display">\[ ICC_{33} = \frac{0.15 + 3^2 \times 0.01}{0.15 + 3^2 \times 0.01 + 2} =0.1071\]</span>
and</p>
<p><span class="math display">\[ ICC_{36} = \frac{0.15 + 3 \times 6 \times 0.01}{\left[(0.15 + 3^2 \times 0.01 + 2)(0.15 + 6^2 \times 0.01 + 2)\right ]^\frac{1}{2}} =0.1392\]</span></p>
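<p>A small helper function (my own check on the formula above, not code from the post) reproduces both of these values:</p>

```r
# between-period ICC under the random slope model,
# defaulting to the variances used in the example
iccSlope <- function(t, tp, s2b = 0.15, s2s = 0.01, s2e = 2) {
  num <- s2b + t * tp * s2s
  den <- sqrt((s2b + t^2 * s2s + s2e) * (s2b + tp^2 * s2s + s2e))
  num / den
}

round(iccSlope(3, 3), 4)  # 0.1071
round(iccSlope(3, 6), 4)  # 0.1392
```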
<p>Here’s the data generation:</p>
<pre class="r"><code>defc <- defData(varname = "ceffect", formula = 0, variance = 0.15, 
                dist = "normal", id = "cluster")
defc <- defData(defc, "cteffect", formula = 0, variance = 0.01, 
                dist = "normal")

defa <- defDataAdd(varname = "Y", 
                   formula = "0 + ceffect + 0.10 * period + cteffect * period + 1 * rx", 
                   variance = 2, dist = "normal")

dc <- genData(100, defc)
dp <- addPeriods(dc, 7, "cluster")
dp <- trtStepWedge(dp, "cluster", nWaves = 4, lenWaves = 1, startPer = 2)

dd <- genCluster(dp, cLevelVar = "timeID", numIndsVar = 10, 
                 level1ID = "id")
dd <- addColumns(defa, dd)</code></pre>
<p>And here is the comparison between observed and expected ICC’s. The estimates are quite variable, so while there appears to be a slight bias, the means would likely converge toward the expected values if I generated more than 200 data sets.</p>
<p><img src="https://www.rdatagen.net/img/post-iccvary/p4.png" width="800" /></p>
<p>In the next post (or two), I plan on providing some examples of fitting models to the data I’ve generated here. In some cases, fairly standard linear mixed effects models in <code>R</code> may be adequate, but in others, we may need to look elsewhere.</p>
<p>
<p><small><font color="darkkhaki">
References:</p>
<p>Kasza, J., K. Hemming, R. Hooper, J. N. S. Matthews, and A. B. Forbes. “Impact of non-uniform correlation structure on sample size and power in multiple-period cluster randomised trials.” <em>Statistical methods in medical research</em> (2017): 0962280217734981.</p>
</font></small>
</p>
</div>
Planning a stepped-wedge trial? Make sure you know what you're assuming about intra-cluster correlations ...
https://www.rdatagen.net/post/intra-cluster-correlations-over-time/
Tue, 25 Jun 2019 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/intra-cluster-correlations-over-time/<p>A few weeks ago, I was at the annual meeting of the <a href="https://rethinkingclinicaltrials.org/">NIH Collaboratory</a>, which is an innovative collection of collaboratory cores, demonstration projects, and NIH Institutes and Centers that is developing new models for implementing and supporting large-scale health services research. A study I am involved with - <em>Primary Palliative Care for Emergency Medicine</em> - is one of the demonstration projects in this collaboratory.</p>
<p>The second day of this meeting included four panels devoted to the design and analysis of embedded pragmatic clinical trials, and focused on the challenges of conducting rigorous research in the real-world context of a health delivery system. The keynote address that started off the day was presented by David Murray of NIH, who talked about the challenges and limitations of cluster randomized trials. (I’ve written before on issues related to clustered randomized trials, including <a href="https://www.rdatagen.net/post/what-matters-more-in-a-cluster-randomized-trial-number-or-size/">here</a>.)</p>
<p>In particular, Dr. Murray talked a great deal about stepped-wedge designs, which have become a quite popular tool in health services research. (I described stepped-wedge designs <a href="https://www.rdatagen.net/post/alternatives-to-stepped-wedge-designs/">here</a>.) A big takeaway from the talk was that we must be cognizant of the underlying assumptions of the models used to estimate treatment effects; being unaware can lead to biased estimates of treatment effects, or more likely, biased estimates of uncertainty.</p>
<div id="intra-cluster-correlations" class="section level3">
<h3>Intra-cluster correlations</h3>
<p>If outcomes of subjects in a study are correlated in any way (e.g. they received care from the same health care provider), we do not learn as much information from each individual study participant as we would in the case where there is no correlation. In a parallel designed cluster randomized trial (where half of the clusters receive an intervention and the other half do not), we expect that the outcomes will be correlated <em>within</em> each cluster, though not <em>across</em> clusters. (This is not true if the clusters are themselves clustered, in which case we would have a 2-level clustered study.) This intra-cluster correlation (ICC) increases sample size requirements and reduces precision/power.</p>
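<p>The sample size inflation can be made concrete with the standard design effect, <span class="math inline">\(1 + (m - 1) \times ICC\)</span>, where <span class="math inline">\(m\)</span> is the cluster size. A quick illustration (my own, not from the original post), using the ICC of 0.07 that appears later in the simulations:</p>

```r
# design effect: factor by which the required sample size grows
# relative to simple random sampling, assuming equal cluster sizes
designEffect <- function(m, icc) 1 + (m - 1) * icc

designEffect(m = 10, icc = 0.07)   # 1.63: 63% more subjects needed
designEffect(m = 100, icc = 0.07)  # 7.93: almost 8 times as many
```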
<p>A common way to model correlation explicitly in a cluster randomized trial is to conceive of a random effects model like this:</p>
<p><span class="math display">\[(1) \qquad \qquad Y_{ic} = \mu + \beta_1X_{c} + b_c + e_{ic},\]</span></p>
<p>where <span class="math inline">\(Y_{ic}\)</span> is a continuous outcome for subject <span class="math inline">\(i\)</span> in cluster <span class="math inline">\(c\)</span>, and <span class="math inline">\(X_c\)</span> is a treatment indicator for cluster <span class="math inline">\(c\)</span> (either 0 or 1). The underlying structural parameters are <span class="math inline">\(\mu\)</span>, the grand mean, and <span class="math inline">\(\beta_1\)</span>, the treatment effect. The unobserved random effects are, <span class="math inline">\(b_c \sim N(0, \sigma^2_b)\)</span>, the normally distributed group level effect, and <span class="math inline">\(e_{ic} \sim N(0, \sigma^2_e)\)</span>, the normally distributed individual-level effect. (This is often referred to as the “error” term, but that doesn’t adequately describe what is really unmeasured individual variation.)</p>
<p>The correlation between any two subjects <span class="math inline">\(i\)</span> and <span class="math inline">\(j\)</span> in the <em>same</em> cluster <span class="math inline">\(c\)</span> is:</p>
<p><span class="math display">\[ cor(Y_{ic}, Y_{jc}) = \frac{cov(Y_{ic}, Y_{jc})} {\sqrt {var(Y_{ic})var(Y_{jc})}} \]</span></p>
<p><span class="math inline">\(cov(Y_{ic}, Y_{jc})\)</span> can be written in terms of the parameters in the underlying data generating process:</p>
<p><span class="math display">\[
\begin{aligned}
cov(Y_{ic}, Y_{jc}) &= cov(\mu + \beta_1X_c + b_c + e_{ic}, \mu + \beta_1X_c + b_c + e_{jc}) \\
&=cov(b_c, b_c) + cov(e_{ic},e_{jc} ) \\
&=\sigma^2_b + 0 \\
&=\sigma^2_b
\end{aligned}
\]</span></p>
<p>The terms simplify since the cluster level effects are independent of the individual level effects (and all the fixed effects in the model) and the individual level effects are independent of each other. The within-period intra-cluster covariance depends only on the between-cluster variation.</p>
<p>The total variance of the outcomes <span class="math inline">\(Y_{ic}\)</span> is:</p>
<p><span class="math display">\[
\begin{aligned}
var(Y_{ic}) &= var(\mu + \beta_1X_c + b_c + e_{ic}) \\
&= var(b_c) + var(e_{ic}) \\
&= \sigma^2_b + \sigma^2_e
\end{aligned}
\]</span></p>
<p>Substituting all of this into the original equation gives us the intra-cluster correlation for any two subjects in the cluster:</p>
<p><span class="math display">\[
\begin{aligned}
cor(Y_{ic}, Y_{jc}) &= \frac{cov(Y_{ic}, Y_{jc})} {\sqrt {var(Y_{ic})var(Y_{jc})}} \\
\\
ICC &= \frac{\sigma^2_b}{\sigma^2_b + \sigma^2_e}
\end{aligned}
\]</span></p>
<p>So, the correlation between any two subjects in a cluster increases as the variation <em>between</em> clusters increases.</p>
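<p>A quick simulation (my own sketch in base <code>R</code>, separate from the <code>simstudy</code> code in this post) confirms the formula: with <span class="math inline">\(\sigma^2_b = 0.15\)</span> and <span class="math inline">\(\sigma^2_e = 2\)</span>, the correlation of outcomes for pairs of subjects sharing a cluster should be close to <span class="math inline">\(0.15/2.15 \approx 0.07\)</span>:</p>

```r
set.seed(1)

K <- 10000                      # number of clusters
b <- rnorm(K, 0, sqrt(0.15))    # cluster-level effects b_c

y1 <- b + rnorm(K, 0, sqrt(2))  # one subject per cluster
y2 <- b + rnorm(K, 0, sqrt(2))  # a second subject in the same cluster

cor(y1, y2)  # close to 0.0698
```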
</div>
<div id="cluster-randomization-when-time-matters" class="section level3">
<h3>Cluster randomization when time matters</h3>
<p>Moving beyond the parallel design to the stepped-wedge design, time starts to play a very important role. It is important to ensure that we do not confound treatment and time effects; we have to be careful that we do not attribute the general changes over time to the intervention. This is accomplished by introducing a time trend into the model. (Actually, it seems more common to include a time-specific effect so that each time period has its own effect. However, for simulation purposes, I will assume a linear trend.)</p>
<p>In the stepped-wedge design, we are essentially estimating within-cluster treatment effects by comparing the cluster with itself pre- and post-intervention. To estimate sample size and precision (or power), it is no longer sufficient to consider a single ICC, because there are now multiple ICC’s - the within-period ICC and the between-period ICC’s. The within-period ICC is what we defined in the parallel design (since we effectively treated all observations as occurring in the same period.) Now we also need to consider the expected correlation of two individuals in the <em>same</em> cluster in <em>different</em> time periods.</p>
<p>If we do not properly account for within-period ICC and the between-period ICC’s in either the planning or analysis stages, we run the risk of generating biased estimates.</p>
<p>My primary aim is to describe possible data generating processes for the stepped wedge design and what implications they have for both the within-period and between-period ICC’s. I will generate data to confirm that observed ICC’s match up well with the theoretical expectations. This week I will consider the simplest model, one that is frequently used but whose assumptions may not be realistic in many applications. In a follow-up post, I will consider more flexible data generating processes.</p>
</div>
<div id="constant-iccs-over-time" class="section level3">
<h3>Constant ICC’s over time</h3>
<p>Here is probably the simplest model that can be conceived for a process underlying the stepped-wedge design:</p>
<p><span class="math display">\[
(2) \qquad \qquad Y_{ict} = \mu + \beta_0t + \beta_1X_{ct} + b_c + e_{ict}
\]</span></p>
<p>As before, the unobserved random effects are <span class="math inline">\(b_c \sim N(0, \sigma^2_b)\)</span> and <span class="math inline">\(e_{ict} \sim N(0, \sigma^2_e)\)</span>. The key differences between this model and the parallel design are the time trend and the time-dependent treatment indicator. The time trend accounts for the fact that the outcome may change over time regardless of the intervention. And since each cluster will be observed in both the control and intervention states, we need a time-dependent intervention indicator. (This model is a slight variation on the <em>Hussey and Hughes</em> model, which includes a time-specific effect <span class="math inline">\(\beta_t\)</span> rather than a linear time trend. This paper by <a href="https://journals.sagepub.com/doi/full/10.1177/0962280217734981"><em>Kasza et al</em></a> describes this stepped-wedge model, and several others, in much greater detail.)</p>
<p>The <em>within-period</em> ICC from this model is:</p>
<p><span class="math display">\[
\begin{aligned}
cor(Y_{ict}, Y_{jct}) &= cor(\mu + \beta_0t + \beta_1X_{ct} + b_c + e_{ict}, \ \mu + \beta_0t + \beta_1X_{ct} + b_c + e_{jct}) \\
\\
ICC_{tt}&= \frac{\sigma^2_b}{\sigma^2_b + \sigma^2_e}
\end{aligned}
\]</span></p>
<p>I have omitted the intermediary steps, but the logic is the same as in the parallel design case. The within-period ICC under this model is also the same as the ICC in the parallel design.</p>
<p>More importantly, in this case the <em>between-period</em> ICC turns out to be the same as the <em>within-period</em> ICC. For the <em>between-period</em> ICC, we are estimating the expected correlation between any two subjects <span class="math inline">\(i\)</span> and <span class="math inline">\(j\)</span> in cluster <span class="math inline">\(c\)</span>, one in time period <span class="math inline">\(t\)</span> and the other in time period <span class="math inline">\(t^\prime\)</span> <span class="math inline">\((t \ne t^\prime)\)</span>:</p>
<p><span class="math display">\[
\begin{aligned}
cor(Y_{ict}, Y_{jct^\prime}) &= cor(\mu + \beta_0t + \beta_1X_{ct} + b_c + e_{ict}, \ \mu + \beta_0t^\prime + \beta_1X_{ct^\prime} + b_c + e_{jct^\prime}) \\
\\
ICC_{tt^\prime}&= \frac{\sigma^2_b}{\sigma^2_b + \sigma^2_e}
\end{aligned}
\]</span></p>
<p>Under this seemingly reasonable (and popular) model, we are making a big assumption that the within-period ICC and between-period ICC’s are equal and constant throughout the study. This may or may not be reasonable - but it is important to acknowledge the assumption and to make sure we justify that choice.</p>
</div>
<div id="generating-data-to-simulate-a-stepped-wedge-design" class="section level3">
<h3>Generating data to simulate a stepped-wedge design</h3>
<p>I’ve generated data from a stepped-wedge design <a href="https://www.rdatagen.net/post/simstudy-update-stepped-wedge-treatment-assignment/">before</a> on this blog, but will repeat the details here. For the data definitions, we define the variance of the cluster-specific effects, the cluster sizes, and the outcome model.</p>
<pre class="r"><code>defc <- defData(varname = "ceffect", formula = 0, variance = 0.15, 
                dist = "normal", id = "cluster")
defc <- defData(defc, "m", formula = 10, dist = "nonrandom")

defa <- defDataAdd(varname = "Y", 
                   formula = "0 + 0.10 * period + 1 * rx + ceffect", 
                   variance = 2, dist = "normal")</code></pre>
<p>The data generation follows this sequence: cluster data, temporal data, stepped-wedge treatment assignment, and individual (within cluster) data:</p>
<pre class="r"><code>dc <- genData(100, defc)
dp <- addPeriods(dc, 7, "cluster")
dp <- trtStepWedge(dp, "cluster", nWaves = 4, lenWaves = 1, startPer = 2)
dd <- genCluster(dp, cLevelVar = "timeID", "m", "id")
dd <- addColumns(defa, dd)
dd</code></pre>
<pre><code>## cluster period ceffect m timeID startTrt rx id Y
## 1: 1 0 -0.073 10 1 2 0 1 -2.12
## 2: 1 0 -0.073 10 1 2 0 2 -1.79
## 3: 1 0 -0.073 10 1 2 0 3 1.53
## 4: 1 0 -0.073 10 1 2 0 4 -1.44
## 5: 1 0 -0.073 10 1 2 0 5 2.25
## ---
## 6996: 100 6 0.414 10 700 5 1 6996 1.28
## 6997: 100 6 0.414 10 700 5 1 6997 0.30
## 6998: 100 6 0.414 10 700 5 1 6998 0.94
## 6999: 100 6 0.414 10 700 5 1 6999 1.43
## 7000: 100 6 0.414 10 700 5 1 7000 0.58</code></pre>
<p>It is always useful (and important) to visualize the data (regardless of whether they are simulated or real). This is the summarized cluster-level data. The clusters are grouped together in waves defined by starting point. In this case, there are 25 clusters per wave. The light blue represents pre-intervention periods, and the dark blue represents intervention periods.</p>
<p><img src="https://www.rdatagen.net/post/2019-06-25-intra-cluster-correlations-over-time.en_files/figure-html/unnamed-chunk-4-1.png" width="672" /></p>
</div>
<div id="estimating-the-between-period-within-cluster-correlation" class="section level3">
<h3>Estimating the between-period within-cluster correlation</h3>
<p>I want to estimate the observed between-period within cluster correlation without imposing any pre-conceived structure. In particular, I want to see if the data generated by the process defined in equation (2) above does indeed lead to constant within- and between-period ICC’s. In a future post, I will estimate the ICC using a model, but for now, I’d prefer to estimate the ICC’s directly from the data.</p>
<p>A 1982 paper by <a href="https://academic.oup.com/aje/article/116/4/722/52694"><em>Bernard Rosner</em></a> provides a non-parametric estimate of the <em>between-period</em> ICC. He gives this set of equations to find the correlation coefficient <span class="math inline">\(\rho_{tt^\prime}\)</span> for two time periods <span class="math inline">\(t\)</span> and <span class="math inline">\(t^\prime\)</span>. In the equations, <span class="math inline">\(m_{ct}\)</span> represents the cluster size for cluster <span class="math inline">\(c\)</span> in time period <span class="math inline">\(t\)</span>, and <span class="math inline">\(K\)</span> represents the number of clusters:</p>
<p><span class="math display">\[
\rho_{tt^\prime} = \frac{\sum_{c=1}^K \sum_{i=1}^{m_{ct}} \sum_{j=1}^{m_{ct^\prime}} (Y_{ict}-\mu_t)(Y_{jct^\prime}-\mu_{t^\prime})} {\left[ \left ( \sum_{c=1}^K m_{ct^\prime} \sum_{i=1}^{m_{ct}} (Y_{ict}-\mu_t)^2 \right ) \left ( \sum_{c=1}^K m_{ct} \sum_{j=1}^{m_{ct^\prime}} (Y_{jct^\prime}-\mu_{t^\prime})^2 \right )\right] ^ \frac {1}{2}}
\]</span></p>
<p><span class="math display">\[
\mu_t = \frac{\sum_{c=1}^K m_{ct} m_{ct^\prime} \mu_{ct}}{\sum_{c=1}^K m_{ct} m_{ct^\prime}} \ \ , \ \ \mu_{t^\prime} = \frac{\sum_{c=1}^K m_{ct} m_{ct^\prime} \mu_{ct^\prime}}{\sum_{c=1}^K m_{ct} m_{ct^\prime}}
\]</span></p>
<p><span class="math display">\[
\mu_{ct} = \frac{\sum_{i=1}^{m_{ct}} Y_{ict}}{m_{ct}} \ \ , \ \ \mu_{ct^\prime} = \frac{\sum_{j=1}^{m_{ct^\prime}} Y_{jct^\prime}}{m_{ct^\prime}}
\]</span></p>
<p>I’ve implemented the algorithm in <code>R</code>, and the code is included in the addendum. One issue that came up is that, because the intervention is phased in over time, the treatment effect appears in each wave at different times, and the algorithm breaks down as a result. However, the between-period ICC can be calculated separately for each wave, and then averaged across the four waves.</p>
<p>The <em>within-period</em> ICC is estimated using a linear mixed effects model applied to each period separately, so that we estimate period-specific within-period ICC’s. The expected (constant) ICC is <span class="math inline">\(0.07 = \left(\frac{0.15}{0.15 + 2}\right)\)</span>.</p>
<p>The function <code>iccs</code> (shown below in the addendum) returns both the estimated <em>within-</em> and <em>between-period</em> ICC’s for a single data set. Here are the within-period ICC for the first period (actually period 0) and the between-period ICC’s involving period 0:</p>
<pre class="r"><code>set.seed(47463)
iccs(dd, byWave = T)[,c(22, 0:6)]</code></pre>
<pre><code>## wp0 bp01 bp02 bp03 bp04 bp05 bp06
## 1: 0.041 0.068 0.073 0.08 0.067 0.054 0.053</code></pre>
<p>ICC estimates are quite variable and we can’t tell anything about the distribution from any single data set. Generating multiple replications lets us see if the estimates are close, on average, to our assumption of constant ICC’s. Here is a function to generate a single data set:</p>
<pre class="r"><code>genDD <- function(defc, defa, nclust, nperiods, waves, len, start) {
  dc <- genData(nclust, defc)
  dp <- addPeriods(dc, nperiods, "cluster")
  dp <- trtStepWedge(dp, "cluster", nWaves = waves, 
                     lenWaves = len, startPer = start)
  dd <- genCluster(dp, cLevelVar = "timeID", "m", "id")
  dd <- addColumns(defa, dd)
  return(dd[])
}</code></pre>
<p>And here is the code to generate 200 data sets and estimate the ICC’s for each:</p>
<pre class="r"><code>icc <- mclapply(1:200, 
  function(x) iccs(genDD(defc, defa, 100, 7, 4, 1, 2), byWave = T), 
  mc.cores = 4
)

observed <- sapply(rbindlist(icc), function(x) mean(x))</code></pre>
<p>Averages of all the <em>within-</em> and <em>between-period</em> ICC’s were in fact quite close to the “true” value of 0.07, based on a relatively small number of replications. The plot shows the observed averages alongside the expected value (shown in gray) for each of the periods generated in the data. There is little variation across both the <em>within-</em> and <em>between-period</em> ICC’s.</p>
<p><img src="https://www.rdatagen.net/img/post-iccvary/p1.png" width="800" /></p>
<p>I’ll give you a little time to absorb this. Next time, I will consider alternative data generating processes where the ICC’s are not necessarily constant.</p>
<p>
<p><small><font color="darkkhaki">
References:</p>
<p>Kasza, J., K. Hemming, R. Hooper, J. N. S. Matthews, and A. B. Forbes. “Impact of non-uniform correlation structure on sample size and power in multiple-period cluster randomised trials.” <em>Statistical methods in medical research</em> (2017): 0962280217734981.</p>
<p>Rosner, Bernard. “On the estimation and testing of inter-class correlations: the general case of multiple replicates for each variable.” <em>American journal of epidemiology</em> 116, no. 4 (1982): 722-730.</p>
</font></small>
</p>
<p> </p>
</div>
<div id="addendum-r-code-for-simulations" class="section level3">
<h3>Addendum: R code for simulations</h3>
<pre class="r"><code>library(lme4)
library(parallel)

Covar <- function(dx, clust, period1, period2, x_0, x_1) {
  v0 <- dx[ctemp == clust & period == period1, Y - x_0]
  v1 <- dx[ctemp == clust & period == period2, Y - x_1]
  sum(v0 %*% t(v1))
}

calcBP <- function(dx, period1, period2) {
  
  # create cluster numbers starting from 1
  tt <- dx[, .N, keyby = cluster]
  nclust <- nrow(tt)
  dx[, ctemp := rep(1:nclust, times = tt$N)]
  dx <- dx[period %in% c(period1, period2)]
  
  ## Grand means
  dg <- dx[, .(m = .N, mu = mean(Y)), keyby = .(ctemp, period)]
  dg <- dcast(dg, formula = ctemp ~ period, value.var = c("m", "mu"))
  setnames(dg, c("ctemp", "m_0", "m_1", "mu_0", "mu_1"))
  
  x_0 <- dg[, sum(m_0 * m_1 * mu_0)/sum(m_0 * m_1)]
  x_1 <- dg[, sum(m_0 * m_1 * mu_1)/sum(m_0 * m_1)]
  
  ## Variance (denominator)
  dss_0 <- dx[period == period1, .(ss_0 = sum((Y - x_0)^2)), 
              keyby = ctemp]
  dss_0[, m_1 := dg[, m_1]]
  v_0 <- dss_0[, sum(m_1 * ss_0)]
  
  dss_1 <- dx[period == period2, .(ss_1 = sum((Y - x_1)^2)), 
              keyby = ctemp]
  dss_1[, m_0 := dg[, m_0]]
  v_1 <- dss_1[, sum(m_0 * ss_1)]
  
  ## Covariance (numerator)
  v0v1 <- sapply(1:nclust, 
                 function(x) Covar(dx, x, period1, period2, x_0, x_1))
  
  bp.icc <- sum(v0v1)/sqrt(v_0 * v_1)
  bp.icc
}

btwnPerICC <- function(dd, period1, period2, byWave = FALSE) {
  if (byWave) {
    waves <- dd[, unique(startTrt)]
    bpICCs <- sapply(waves, function(x) 
      calcBP(dd[startTrt == x], period1, period2))
    return(mean(bpICCs))
  } else {
    calcBP(dd, period1, period2)
  }
}

withinPerICC <- function(dx) {
  lmerfit <- lmer(Y ~ rx + (1 | cluster), data = dx)
  vars <- as.data.table(VarCorr(lmerfit))[, vcov]
  vars[1]/sum(vars)
}

genPairs <- function(n) {
  x <- combn(x = c(1:n - 1), 2)
  lapply(seq_len(ncol(x)), function(i) x[, i])
}

iccs <- function(dd, byWave = FALSE) {
  
  nperiods <- dd[, length(unique(period))]
  
  bperiods <- genPairs(nperiods)
  names <- 
    unlist(lapply(bperiods, function(x) paste0("bp", x[1], x[2])))
  bp.icc <- sapply(bperiods, 
                   function(x) btwnPerICC(dd, x[1], x[2], byWave))
  
  bdd.per <- lapply(1:nperiods - 1, function(x) dd[period == x])
  wp.icc <- lapply(bdd.per, 
                   function(x) withinPerICC(x))
  wp.icc <- unlist(wp.icc)
  nameswp <- sapply(1:nperiods - 1, function(x) paste0("wp", x))
  
  do <- data.table(t(c(bp.icc, wp.icc)))
  setnames(do, c(names, nameswp))
  
  return(do[])
}</code></pre>
</div>
Don't get too excited - it might just be regression to the mean
https://www.rdatagen.net/post/regression-to-the-mean/
Tue, 11 Jun 2019 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/regression-to-the-mean/<p>It is always exciting to find an interesting pattern in the data that seems to point to some important difference or relationship. A while ago, one of my colleagues shared a figure with me that looked something like this:</p>
<p><img src="https://www.rdatagen.net/post/2019-06-11-regression-to-the-mean.en_files/figure-html/unnamed-chunk-2-1.png" width="672" /></p>
<p>It looks like something is going on. On average low scorers in the first period increased a bit in the second period, and high scorers decreased a bit. Something <strong>is</strong> going on, but nothing specific to the data in question; it is just probability working its magic.</p>
<p>What my colleague had shown me is a classic example of <em>regression to the mean</em>. In the hope of clarifying the issue, I created a little simulation for her to show I could recreate this scenario with arbitrary data. And now I share it with you.</p>
<div id="what-is-regression-to-the-mean" class="section level3">
<h3>What <em>is</em> regression to the mean?</h3>
<p>A simple picture may clarify what underlies regression to the mean. An individual’s measured responses over time are a function of various factors. In this first scenario, the responses are driven entirely by short term factors:</p>
<p><img src="https://www.rdatagen.net/img/post-regression-to-mean/shortcauses.png" width="500" /></p>
<p>Responses in the two different time periods depend only on proximal causes. These could include an individual’s mood (which changes over time) or maybe something unrelated to the individual that would induce measurement error. (If the short term factor is not measured, this is what is typically considered random noise or maybe “error”; I prefer to refer to this quantity as something like unexplained variation or individual level effects.) When these are the only factors influencing the responses, we would expect the responses in each period to be uncorrelated.</p>
<p>Regression to the mean manifests itself when we focus on sub-groups at extreme ends of the distribution. Here, we consider a sub-group of individuals with high levels of response in the first period. Since factors that led to these high values will not necessarily be present in the second period, we would expect the distribution of values for the sub-group in the <strong>second</strong> period to look like the distribution in the <em>full sample</em> (including high, moderate, and low responders) from the <strong>first</strong> period. Alternatively, if we think about the second period alone, we would expect the high value sub-group (from the first period) to look just like the rest of the sample. Either way we look at it, the sub-group mean in the second period will necessarily be lower than the mean of that same sub-group in the first period.</p>
<p>A simulation might clarify this. <span class="math inline">\(p_1\)</span> and <span class="math inline">\(p_2\)</span> are the short term factors influencing the period one outcome <span class="math inline">\(x_1\)</span> and period two outcome <span class="math inline">\(x_2\)</span>, respectively. The indicator <span class="math inline">\(h_1 = 1\)</span> if the period one response falls in the top <span class="math inline">\(20\%\)</span> of responses:</p>
<pre class="r"><code>d <- defData(varname = "p1", formula = 0, variance = 1, dist = "normal")
d <- defData(d, varname = "p2", formula = 0, variance = 1, dist = "normal")
d <- defData(d, varname = "x1", formula = "0 + p1", dist = "nonrandom")
d <- defData(d, varname = "x2", formula = "0 + p2", dist = "nonrandom")
d <- defData(d, varname = "h1", formula = "x1 > quantile(x1, .80) ",
dist = "nonrandom")</code></pre>
<pre class="r"><code>set.seed(2371)
dd <- genData(1000, d)</code></pre>
<p>The average (and sd) for the full sample in period one and period two are pretty much the same:</p>
<pre class="r"><code>dd[, .(mu.x1 = mean(x1), sd.x1 = sd(x1),
mu.x2 = mean(x2), sd.x2 = sd(x2))]</code></pre>
<pre><code>## mu.x1 sd.x1 mu.x2 sd.x2
## 1: 0.02 1 -0.07 1</code></pre>
<p>The mean of the sub-group of the sample who scored in the top 20% in period one is obviously higher than the full sample period one average since this is how we defined the sub-group. However, the period two distribution for this sub-group looks like the <em>overall</em> sample in period two. Again, this is due to the fact that the distribution of <span class="math inline">\(p_2\)</span> is the <em>same</em> for the period one high scoring sub-group and everyone else:</p>
<pre class="r"><code>cbind(dd[h1 == TRUE, .(muh.x1 = mean(x1), sdh.x1 = sd(x1),
muh.x2 = mean(x2), sdh.x2 = sd(x2))],
dd[, .(mu.x2 = mean(x2), sd.x2 = sd(x2))])</code></pre>
<pre><code>## muh.x1 sdh.x1 muh.x2 sdh.x2 mu.x2 sd.x2
## 1: 1 0.5 -0.08 1 -0.07 1</code></pre>
</div>
<div id="a-more-realistic-scenario" class="section level3">
<h3>A more realistic scenario</h3>
<p>It is unlikely that the repeated measures <span class="math inline">\(x_1\)</span> and <span class="math inline">\(x_2\)</span> will be uncorrelated, and more plausible that they share some common factor or factors; someone who tends to score high in the first may tend to score high in the second. For example, an individual’s underlying health status could influence outcomes over both measurement periods. Here is the updated DAG:</p>
<p><img src="https://www.rdatagen.net/img/post-regression-to-mean/causes.png" width="500" /></p>
<p>Regression to the mean is really a phenomenon driven by the relative strength of the longer term underlying factors and shorter term proximal factors. If the underlying factors dominate the more proximal ones, then we would expect to see less regression to the mean. (In the extreme case where there are no proximal factors, only longer term, underlying ones, there will be no regression to the mean.)</p>
<p>Back to the simulation. (This time <span class="math inline">\(p_1\)</span> and <span class="math inline">\(p_2\)</span> are reflected in the variance of the two responses, so they do not appear explicitly in the data definitions.)</p>
<pre class="r"><code>library(parallel)
d <- defData(varname = "U", formula = "-1;1", dist = "uniform")
d <- defData(d, varname = "x1", formula = "0 + 2*U", variance = 1)
d <- defData(d, varname = "x2", formula = "0 + 2*U", variance = 1)
d <- defData(d, varname = "h1", formula = "x1 > quantile(x1, .80) ",
dist = "nonrandom")
set.seed(2371)
dd <- genData(1000, d)</code></pre>
<p>When we look at the means of the period one high scoring sub-group in periods one and two, it appears that there is at least <em>some</em> regression to the mean, but it is not absolute, because the underlying factors <span class="math inline">\(U\)</span> have a fairly strong effect on the responses in both periods:</p>
<pre><code>## muh.x1 sdh.x1 muh.x2 sdh.x2 mu.x2 sd.x2
## 1: 2 0.6 1 1 -0.02 1</code></pre>
</div>
<div id="regression-to-the-mean-under-different-scenarios" class="section level3">
<h3>Regression to the mean under different scenarios</h3>
<p>To conclude, I want to illustrate how the relative strengths of <span class="math inline">\(U\)</span>, <span class="math inline">\(p_1\)</span>, and <span class="math inline">\(p_2\)</span> affect regression to the mean. (The code to generate the plot immediately follows.) Under each simulation scenario I generated 1000 data sets of 200 individuals each, and averaged across the 1000 replications to show the mean <span class="math inline">\(x_1\)</span> and <span class="math inline">\(x_2\)</span> measurements <em>for the high scorers only in period one</em>. In all cases, period one scores are to the right and the arrow points to the period two scores. The longer the arrow, the more extensive the regression to the mean.</p>
<p><img src="https://www.rdatagen.net/post/2019-06-11-regression-to-the-mean.en_files/figure-html/unnamed-chunk-9-1.png" width="672" /></p>
<p>As the effect of <span class="math inline">\(U\)</span> grows (moving down from box to box in the plot), regression to the mean decreases. And within each box, as we decrease the strength of the proximal <span class="math inline">\(p\)</span> factors (by decreasing the variance of <span class="math inline">\(p_1\)</span> and <span class="math inline">\(p_2\)</span>), regression to the mean also decreases.</p>
</div>
<div id="addendum-code-to-generate-replications-and-plot" class="section level3">
<h3>Addendum: code to generate replications and plot</h3>
<pre class="r"><code>library(ggplot2)

rtomean <- function(n, d) {
  dd <- genData(n, d)
  data.table(x1 = dd[x1 >= h1, mean(x1)], x2 = dd[x1 >= h1, mean(x2)])
}

repl <- function(xvar, nrep, ucoef, d) {
  d <- updateDef(d, "x1", newvariance = xvar)
  d <- updateDef(d, "x2", newvariance = xvar)

  dif <- rbindlist(mclapply(1:nrep, function(x) rtomean(200, d)))
  mudif <- unlist(lapply(dif, mean))
  data.table(ucoef, xvar, x1 = mudif[1], x2 = mudif[2])
}

dres <- list()
i <- 0

for (ucoef in c(0, 1, 2, 3)) {
  i <- i + 1
  uform <- genFormula(c(0, ucoef), "U")

  d <- updateDef(d, "x1", newformula = uform)
  d <- updateDef(d, "x2", newformula = uform)

  dr <- mclapply(seq(1, 4, by = 1), function(x) repl(x, 1000, ucoef, d))
  dres[[i]] <- rbindlist(dr)
}

dres <- rbindlist(dres)

ggplot(data = dres, aes(x = x1, xend = x2, y = xvar, yend = xvar)) +
  geom_point(aes(x = x1, y = xvar), color = "#824D99", size = 1) +
  geom_segment(arrow = arrow(length = unit(.175, "cm")),
               color = "#824D99") +
  scale_y_continuous(limits = c(0.5, 4.5), breaks = 1:4,
                     name = "variance of measurements") +
  scale_x_continuous(limits = c(-0.1, 3), name = "mean") +
  facet_grid(ucoef ~ .) +
  theme(panel.grid.minor = element_blank(),
        panel.grid.major.y = element_blank())</code></pre>
</div>
simstudy update - stepped-wedge design treatment assignment
https://www.rdatagen.net/post/simstudy-update-stepped-wedge-treatment-assignment/
Tue, 28 May 2019 00:00:00 +0000
keith.goldfeld@nyumc.org (Keith Goldfeld)
<p><code>simstudy</code> has just been updated (version 0.1.13 on <a href="https://cran.rstudio.com/web/packages/simstudy/">CRAN</a>), and includes one interesting addition (and a couple of bug fixes). I am working on a post (or two) about intra-cluster correlations (ICCs) and stepped-wedge study designs (which I’ve written about <a href="https://www.rdatagen.net/post/alternatives-to-stepped-wedge-designs/">before</a>), and I was getting tired of going through the convoluted process of generating data from a time-dependent treatment assignment process. So, I wrote a new function, <code>trtStepWedge</code>, that should simplify things.</p>
<p>I will take the opportunity of this brief announcement to provide a quick example.</p>
<div id="data-definition" class="section level3">
<h3>Data definition</h3>
<p>Stepped-wedge designs are a special class of cluster randomized trial where each cluster is observed in both treatment arms (as opposed to the classic parallel design where only some of the clusters receive the treatment). This is a special case of a cross-over design, where the cross-over is only in one direction: control (or pre-intervention) to intervention.</p>
<p>In this example, the data generating process looks like this:</p>
<p><span class="math display">\[Y_{ict} = \beta_0 + b_c + \beta_1 * t + \beta_2*X_{ct} + e_{ict}\]</span></p>
<p>where <span class="math inline">\(Y_{ict}\)</span> is the outcome for individual <span class="math inline">\(i\)</span> in cluster <span class="math inline">\(c\)</span> in time period <span class="math inline">\(t\)</span>, <span class="math inline">\(b_c\)</span> is a cluster-specific effect, <span class="math inline">\(X_{ct}\)</span> is the intervention indicator that has a value 1 during periods where the cluster is under the intervention, and <span class="math inline">\(e_{ict}\)</span> is the individual-level effect. Both <span class="math inline">\(b_c\)</span> and <span class="math inline">\(e_{ict}\)</span> are normally distributed with mean 0 and variances <span class="math inline">\(\sigma^2_{b}\)</span> and <span class="math inline">\(\sigma^2_{e}\)</span>, respectively. <span class="math inline">\(\beta_1\)</span> is the time trend, and <span class="math inline">\(\beta_2\)</span> is the intervention effect.</p>
<p>We need to define the cluster-level variables (i.e. the cluster effect and the cluster size) as well as the individual specific outcome. In this case each cluster will have 15 individuals per period, and <span class="math inline">\(\sigma^2_b = 0.20\)</span>. In addition, <span class="math inline">\(\sigma^2_e = 1.75\)</span>.</p>
<pre class="r"><code>library(simstudy)
library(ggplot2)

defc <- defData(varname = "ceffect", formula = 0, variance = 0.20,
                dist = "normal", id = "cluster")
defc <- defData(defc, "m", formula = 15, dist = "nonrandom")

defa <- defDataAdd(varname = "Y",
                   formula = "0 + ceffect + 0.1*period + trt*1.5",
                   variance = 1.75, dist = "normal")</code></pre>
<p>In this case, there will be 30 clusters and 24 time periods. With 15 individuals per cluster per period, there will be 360 observations for each cluster, and 10,800 in total. (There is no reason the cluster sizes need to be deterministic, but I just did that to simplify things a bit.)</p>
<p>Cluster-level intervention assignment is done after generating the cluster-level and time-period data. The call to <code>trtStepWedge</code> includes 3 key arguments that specify the number of waves, the length of each wave, and the period during which the first clusters begin the intervention.</p>
<p><code>nWaves</code> indicates the number of waves, where each wave is a group of clusters that starts the intervention in the same period. In this case, we have 5 waves, with 6 clusters each. <code>startPer</code> is the first period of the first wave. The earliest possible starting period is 0, the first period. Here, the first wave starts the intervention during period 4. <code>lenWaves</code> indicates the gap between the starting points of successive waves. Here, a length of 4 means that the starting points will be 4, 8, 12, 16, and 20.</p>
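<p>The wave starting periods implied by these three arguments are just simple arithmetic, which this little base R snippet (an illustrative aside; <code>trtStepWedge</code> takes care of this, along with the actual cluster assignments) makes explicit:</p>

```r
# Starting period for each wave: startPer, stepping by lenWaves
nWaves   <- 5
lenWaves <- 4
startPer <- 4

startTrt <- startPer + lenWaves * (0:(nWaves - 1))
startTrt
## [1]  4  8 12 16 20
```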
<p>Once the treatment assignments are made, the individual records are created and the outcome data are generated in the last step.</p>
<pre class="r"><code>set.seed(608477)

dc <- genData(30, defc)
dp <- addPeriods(dc, 24, "cluster", timevarName = "t")
dp <- trtStepWedge(dp, "cluster", nWaves = 5, lenWaves = 4,
                   startPer = 4, grpName = "trt")

dd <- genCluster(dp, cLevelVar = "timeID", "m", "id")
dd <- addColumns(defa, dd)
dd</code></pre>
<pre><code>## cluster period ceffect m timeID startTrt trt id Y
## 1: 1 0 0.628 15 1 4 0 1 1.52
## 2: 1 0 0.628 15 1 4 0 2 0.99
## 3: 1 0 0.628 15 1 4 0 3 -0.12
## 4: 1 0 0.628 15 1 4 0 4 2.09
## 5: 1 0 0.628 15 1 4 0 5 -2.34
## ---
## 10796: 30 23 -0.098 15 720 20 1 10796 1.92
## 10797: 30 23 -0.098 15 720 20 1 10797 5.92
## 10798: 30 23 -0.098 15 720 20 1 10798 4.12
## 10799: 30 23 -0.098 15 720 20 1 10799 4.57
## 10800: 30 23 -0.098 15 720 20 1 10800 3.66</code></pre>
<p>It is easiest to understand the stepped-wedge design by looking at it. Here, we average the outcomes by each cluster for each period and plot the results.</p>
<pre class="r"><code>dSum <- dd[, .(Y = mean(Y)), keyby = .(cluster, period, trt, startTrt)]

ggplot(data = dSum,
       aes(x = period, y = Y, group = interaction(cluster, trt))) +
  geom_line(aes(color = factor(trt))) +
  facet_grid(factor(startTrt, labels = c(1:5)) ~ .) +
  scale_x_continuous(breaks = seq(0, 23, by = 4), name = "week") +
  scale_color_manual(values = c("#b8cce4", "#4e81ba")) +
  theme(panel.grid = element_blank(),
        legend.position = "none")</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-05-28-simstudy-update-stepped-wedge-treatment-assignment.en_files/figure-html/unnamed-chunk-4-1.png" width="672" /></p>
<p>Key elements of the data generation process are readily appreciated by looking at the graph: (1) the cluster-specific effects, reflected in the variable starting points at period 0, (2) the general upward time trend, and (3) the stepped-wedge intervention scheme.</p>
<p>Since <code>trtStepWedge</code> is a new function, it is still a work in progress. Feel free to get in touch to give me feedback on any enhancements that folks might find useful.</p>
</div>
Generating and modeling over-dispersed binomial data
https://www.rdatagen.net/post/overdispersed-binomial-data/
Tue, 14 May 2019 00:00:00 +0000
keith.goldfeld@nyumc.org (Keith Goldfeld)
<p>A couple of weeks ago, I was inspired by a study to <a href="https://www.rdatagen.net/post/what-matters-more-in-a-cluster-randomized-trial-number-or-size/">write</a> about a classic design issue that arises in cluster randomized trials: should we focus on the number of clusters or the size of those clusters? This trial, which is concerned with preventing opioid use disorder for at-risk patients in primary care clinics, has also motivated this second post, which concerns another important issue - over-dispersion.</p>
<div id="a-count-outcome" class="section level3">
<h3>A count outcome</h3>
<p>In this study, one of the primary outcomes is the number of days of opioid use over a six-month follow-up period (to be recorded monthly by patient-report and aggregated for the six-month measure). While one might get away with assuming that the outcome is continuous, it really is not; it is a <em>count</em> outcome, and the possible range is 0 to 180. There are two related questions here - what model will be used to analyze the data once the study is complete? And, how should we generate simulated data to estimate the power of the study?</p>
<p>In this particular study, the randomization is at the physician level so that all patients in a particular physician practice will be in control or treatment. (For the purposes of simplification here, I am going to assume there is no treatment effect, so that all variation in the outcome is due to physicians and patients only.) One possibility is to assume the outcome <span class="math inline">\(Y_{ij}\)</span> for patient <span class="math inline">\(i\)</span> in group <span class="math inline">\(j\)</span> has a binomial distribution with 180 different “experiments” - every day we ask did the patient use opioids? - so that we say <span class="math inline">\(Y_{ij} \sim Bin(180, \ p_{ij})\)</span>.</p>
</div>
<div id="the-probability-parameter" class="section level3">
<h3>The probability parameter</h3>
<p>The key parameter here is <span class="math inline">\(p_{ij}\)</span>, the probability that patient <span class="math inline">\(i\)</span> (in group <span class="math inline">\(j\)</span>) uses opioids on any given day. Given the binomial distribution, the number of days of opioid use we expect to observe for patient <span class="math inline">\(i\)</span> is <span class="math inline">\(180p_{ij}\)</span>. There are at least three ways to think about how to model this probability (though there are certainly more):</p>
<ul>
<li><span class="math inline">\(p_{ij} = p\)</span>: everyone shares the same probability. The collection of all patients will represent a sample from <span class="math inline">\(Bin(180, p)\)</span>.</li>
<li><span class="math inline">\(p_{ij} = p_j\)</span>: the probability of the outcome is determined by the cluster or group alone. The data within the cluster will have a binomial distribution, but the collective data set will <em>not</em> have a strict binomial distribution and will be over-dispersed.</li>
<li><span class="math inline">\(p_{ij}\)</span> is unique for each individual. Once again the collective data are over-dispersed, potentially even more so.</li>
</ul>
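<p>A quick base R simulation (a sketch using <code>rbinom</code> directly rather than <code>simstudy</code>; the group counts and spread of the probabilities here are arbitrary choices for illustration) confirms the ordering of these three scenarios: with a shared <span class="math inline">\(p\)</span> the variance matches the binomial value <span class="math inline">\(180p(1-p)\)</span>, while letting <span class="math inline">\(p\)</span> vary by group or by individual inflates it.</p>

```r
# Compare the variance of counts under the three scenarios
set.seed(135)
n <- 10000

# Case 1: everyone shares p = 0.4
y1 <- rbinom(n, size = 180, prob = 0.4)

# Case 2: p varies by group (100 groups of 100 patients)
p_group <- plogis(rnorm(100, -0.405, 0.3))
y2 <- rbinom(n, size = 180, prob = rep(p_group, each = 100))

# Case 3: p unique to each individual (more variation on the logit scale)
p_ind <- plogis(rnorm(n, -0.405, 0.5))
y3 <- rbinom(n, size = 180, prob = p_ind)

180 * 0.4 * 0.6  # pure binomial variance: 43.2
var(y1)          # close to 43.2
var(y2)          # well above 43.2: over-dispersed
var(y3)          # larger still
```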
</div>
<div id="modeling-the-outcome" class="section level3">
<h3>Modeling the outcome</h3>
<p>The correct model depends, of course, on the situation at hand. What data generation process fits what we expect to be the case? Hopefully, there are existing data to inform the likely model. If not, it may be most prudent to be conservative, which usually means assuming more variation (unique <span class="math inline">\(p_{ij}\)</span>) rather than less (<span class="math inline">\(p_{ij} = p\)</span>).</p>
<p>In the first case, the probability (and counts) can be estimated using a generalized linear model (GLM) with a binomial distribution. In the second, one solution (that I will show here) is a generalized linear mixed effects model (GLMM) with a binomial distribution and a group level random effect. In the third case, a GLMM with a <em>negative binomial</em> distribution would be more likely to properly estimate the variation. (I have described other ways to think about these kind of data <a href="https://www.rdatagen.net/post/a-small-update-to-simstudy-neg-bin/">here</a> and <a href="https://www.rdatagen.net/post/binary-beta-beta-binomial/">here</a>.)</p>
</div>
<div id="case-1-binomial-distribution" class="section level3">
<h3>Case 1: binomial distribution</h3>
<p>Even though there is no clustering effect in this first scenario, let’s assume there are clusters. Each individual will have a probability of 0.4 of using opioids on any given day (log odds = -0.405):</p>
<pre class="r"><code>library(simstudy)
library(ggplot2)

def <- defData(varname = "m", formula = 100, dist = "nonrandom", id = "cid")

# for the binomial distribution, the variance field holds the
# number of trials - here, the 180 days of follow-up
defa <- defDataAdd(varname = "x", formula = -.405, variance = 180,
                   dist = "binomial", link = "logit")</code></pre>
<p>Generate the data:</p>
<pre class="r"><code>set.seed(5113373)
dc <- genData(200, def)
dd <- genCluster(dc, cLevelVar = "cid", numIndsVar = "m", level1ID = "id")
dd <- addColumns(defa, dd)</code></pre>
<p>Here is a plot of 20 of the 200 groups:</p>
<pre class="r"><code>dplot <- dd[cid %in% c(1:20)]
davg <- dplot[, .(avgx = mean(x)), keyby = cid]

ggplot(data = dplot, aes(y = x, x = factor(cid))) +
  geom_jitter(size = .5, color = "grey50", width = 0.2) +
  geom_point(data = davg, aes(y = avgx, x = factor(cid)),
             shape = 21, fill = "firebrick3", size = 2) +
  theme(panel.grid.major.y = element_blank(),
        panel.grid.minor.y = element_blank(),
        axis.ticks.x = element_blank(),
        axis.text.x = element_blank()) +
  xlab("Group") +
  scale_y_continuous(limits = c(0, 185), breaks = c(0, 60, 120, 180))</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-05-14-overdispersed-binomial-data.en_files/figure-html/unnamed-chunk-4-1.png" width="672" /></p>
<p>Looking at the plot, we can see that a mixed effects model is probably not relevant.</p>
</div>
<div id="case-2-over-dispersion-from-clustering" class="section level3">
<h3>Case 2: over-dispersion from clustering</h3>
<pre class="r"><code>def <- defData(varname = "ceffect", formula = 0, variance = 0.08,
               dist = "normal", id = "cid")
def <- defData(def, varname = "m", formula = "100", dist = "nonrandom")

defa <- defDataAdd(varname = "x", formula = "-0.405 + ceffect",
                   variance = 180, dist = "binomial", link = "logit")

dc <- genData(200, def)
dd <- genCluster(dc, cLevelVar = "cid", numIndsVar = "m", level1ID = "id")
dd <- addColumns(defa, dd)</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-05-14-overdispersed-binomial-data.en_files/figure-html/unnamed-chunk-6-1.png" width="672" /></p>
<p>This plot suggests that variation <em>within</em> the groups is pretty consistent, though there is variation <em>across</em> the groups. This suggests that a binomial GLMM with a group level random effect would be appropriate.</p>
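<p>Before fitting the GLMM, a quick check for over-dispersion is an intercept-only <em>quasi-binomial</em> GLM, whose estimated dispersion parameter should sit near 1 for strictly binomial data and well above 1 when there is clustering. (This sketch simulates the clustered data directly with base R rather than <code>simstudy</code>, and the quasi-binomial model is offered only as a diagnostic, not as the analysis model used here.)</p>

```r
# Over-dispersion diagnostic: intercept-only quasi-binomial GLM
set.seed(246)

ceff <- rnorm(200, 0, 0.3)                     # cluster-level effects
p    <- plogis(-0.405 + rep(ceff, each = 100)) # daily-use probability
x    <- rbinom(200 * 100, size = 180, prob = p)

fit <- glm(cbind(x, 180 - x) ~ 1, family = quasibinomial)
summary(fit)$dispersion  # well above 1, flagging over-dispersion
```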
</div>
<div id="case-3-added-over-dispersion-due-to-individual-differences" class="section level3">
<h3>Case 3: added over-dispersion due to individual differences</h3>
<pre class="r"><code>defa <- defDataAdd(varname = "ieffect", formula = 0,
                   variance = .25, dist = "normal")
defa <- defDataAdd(defa, varname = "x",
                   formula = "-0.405 + ceffect + ieffect",
                   variance = 180, dist = "binomial", link = "logit")

dd <- genCluster(dc, cLevelVar = "cid", numIndsVar = "m", level1ID = "id")
dd <- addColumns(defa, dd)</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-05-14-overdispersed-binomial-data.en_files/figure-html/unnamed-chunk-8-1.png" width="672" /></p>
<p>In this last case, it is not obvious what model to use. Since there is variability within and between groups, it is probably safe to use a negative binomial model, which is most conservative.</p>
</div>
<div id="estimating-the-parameters-under-a-negative-binomial-assumption" class="section level3">
<h3>Estimating the parameters under a negative binomial assumption</h3>
<p>We can fit the data we just generated (with a 2-level mixed effects model) using a <em>single-level</em> mixed effects model with the assumption of a negative binomial distribution to estimate the parameters we can use for one last simulated data set. Here is the model fit:</p>
<pre class="r"><code>library(lme4)

nbfit <- glmer.nb(x ~ 1 + (1|cid), data = dd,
                  control = glmerControl(optimizer = "bobyqa"))

broom::tidy(nbfit)</code></pre>
<pre><code>## # A tibble: 2 x 6
## term estimate std.error statistic p.value group
## <chr> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 (Intercept) 4.29 0.0123 347. 0 fixed
## 2 sd_(Intercept).cid 0.172 NA NA NA cid</code></pre>
<p>And to generate the negative binomial data using <code>simstudy</code>, we need a dispersion parameter, which can be extracted from the estimated model:</p>
<pre class="r"><code>(theta <- 1/getME(nbfit, "glmer.nb.theta"))</code></pre>
<pre><code>## [1] 0.079</code></pre>
<pre class="r"><code>revar <- lme4::getME(nbfit, name = "theta")^2
revar</code></pre>
<pre><code>## cid.(Intercept)
## 0.03</code></pre>
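<p>Assuming the usual NB2 parameterization (variance <span class="math inline">\(\mu + d\mu^2\)</span>, with <span class="math inline">\(d\)</span> the dispersion, which is how I read <code>simstudy</code>'s <code>negBinomial</code> specification), these estimates imply far more variation than a pure binomial model would allow. A quick back-of-the-envelope check:</p>

```r
# Implied variance under NB2: var = mu + d * mu^2
# (mu and d taken from the model estimates above)
mu <- exp(4.29)  # estimated mean count, about 73 days
d  <- 0.079      # estimated dispersion

mu + d * mu^2    # implied variance, roughly 490
180 * 0.4 * 0.6  # pure binomial variance, 43.2, for comparison
```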
<p>Generating the data from the estimated model allows us to see how well the negative binomial model fit the dispersed binomial data that we generated. A plot of the two data sets should look pretty similar, at least with respect to the distribution of the cluster means and within-cluster individual counts.</p>
<pre class="r"><code>def <- defData(varname = "ceffect", formula = 0, variance = revar,
               dist = "normal", id = "cid")
def <- defData(def, varname = "m", formula = "100", dist = "nonrandom")

defa <- defDataAdd(varname = "x", formula = "4.28 + ceffect",
                   variance = theta, dist = "negBinomial", link = "log")

dc <- genData(200, def)
ddnb <- genCluster(dc, cLevelVar = "cid", numIndsVar = "m",
                   level1ID = "id")
ddnb <- addColumns(defa, ddnb)</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-05-14-overdispersed-binomial-data.en_files/figure-html/unnamed-chunk-12-1.png" width="960" /></p>
<p>The two data sets do look like they came from the same distribution. The one limitation of the negative binomial distribution is that the sample space is not limited to numbers between 0 and 180; in fact, the sample space is all non-negative integers. For at least two clusters shown, there are some individuals with counts that exceed 180 days, which of course is impossible. Because of this, it might be safer to use the over-dispersed binomial data as the generating process for a power calculation, but it would be totally fine to use the negative binomial model as the analysis model (in both the power calculation and the actual data analysis).</p>
</div>
<div id="estimating-power" class="section level3">
<h3>Estimating power</h3>
<p>One could verify that power is indeed reduced as we move from <em>Case 1</em> to <em>Case 3</em>. (I’ll leave that as an exercise for you - I think I’ve provided many examples in the past on how one might go about doing this. If, after struggling for a while, you aren’t successful, feel free to get in touch with me.)</p>
</div>
What matters more in a cluster randomized trial: number or size?
https://www.rdatagen.net/post/what-matters-more-in-a-cluster-randomized-trial-number-or-size/
Tue, 30 Apr 2019 00:00:00 +0000
keith.goldfeld@nyumc.org (Keith Goldfeld)
<p>I am involved with a trial of an intervention designed to prevent full-blown opioid use disorder for patients who may have an incipient opioid use problem. Given the nature of the intervention, it was clear the only feasible way to conduct this particular study is to randomize at the physician rather than the patient level.</p>
<p>There was a concern that the number of patients eligible for the study might be limited, so that each physician might only have a handful of patients able to participate, if that many. A question arose as to whether we can make up for this limitation by increasing the number of physicians who participate? That is, what is the trade-off between number of clusters and cluster size?</p>
<p>This is a classic issue that confronts any cluster randomized trial - made more challenging by the potentially very small cluster sizes. A primary concern of the investigators is having sufficient power to estimate an intervention effect - how would this trade-off impact that? And as a statistician, I have concerns about bias and variance, which could have important implications depending on what you are interested in measuring.</p>
<div id="clustering-in-a-nutshell" class="section level2">
<h2>Clustering in a nutshell</h2>
<p>This is an immense topic - I won’t attempt to point you to the best resources, because there are so many out there. For me, there are two salient features of cluster randomized trials that present key challenges.</p>
<p>First, individuals in a cluster are not providing as much information as we might imagine. If we take the extreme example of a case where the outcome of everyone in a cluster is identical, we learn absolutely nothing by taking an additional subject from that cluster; in fact, all we need is one subject per cluster, because all the variation is across clusters, not within. Of course, that is overly dramatic, but the same principle is in play even when the outcomes of subjects in a cluster are only moderately correlated. The impact of this phenomenon depends on the within-cluster correlation relative to the between-cluster correlation. The relationship between these two sources of variation is traditionally characterized by the intra-class correlation coefficient (ICC), which is the ratio of the between-cluster variation to total variation.</p>
<p>Second, if there is high variability across clusters, that gets propagated to the variance of the estimate of the treatment effect. From study to study (which is what we are conceiving of in a frequentist frame of mind), we are not just sampling individuals from the clusters, but we are changing the sample of clusters that we are selecting from! So much variation going on. Of course, if all clusters are exactly the same (i.e. no variation between clusters), then it doesn’t really matter what clusters we are choosing from each time around, and we have no added variability as a result of sampling from different clusters. But, as we relax this assumption of no between-cluster variability, we add over-all variability to the process, which gets translated to our parameter estimates.</p>
<p>The cluster size/cluster number trade-off is driven largely by these two issues.</p>
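<p>The first issue is often summarized with the classic <em>design effect</em>, <span class="math inline">\(1 + (m-1)\rho\)</span>, where <span class="math inline">\(m\)</span> is the cluster size and <span class="math inline">\(\rho\)</span> the ICC; dividing the nominal sample size by the design effect gives an effective sample size. (A quick analytic aside; the simulations that follow explore the same trade-off empirically.)</p>

```r
# Design effect and effective sample size for a fixed total of 480
deff <- function(m, icc) 1 + (m - 1) * icc

n   <- 480
icc <- 0.10
m   <- c(2, 8, 48)  # cluster sizes for 240, 60, and 10 clusters

round(n / deff(m, icc))
## [1] 436 282  84
```

<p>With the ICC at 0.10, spreading 480 subjects over 240 clusters of 2 preserves most of the information, while 10 clusters of 48 behaves more like a study of 84 independent subjects.</p>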
</div>
<div id="simulation" class="section level2">
<h2>Simulation</h2>
<p>I am generating data from a cluster randomized trial that has the following underlying data generating process:</p>
<p><span class="math display">\[ Y_{ij} = 0.35 * R_j + c_j + \epsilon_{ij}\ ,\]</span>
where <span class="math inline">\(Y_{ij}\)</span> is the outcome for patient <span class="math inline">\(i\)</span> who is being treated by physician <span class="math inline">\(j\)</span>. <span class="math inline">\(R_j\)</span> represents the treatment indicator for physician <span class="math inline">\(j\)</span> (0 for control, 1 for treatment). <span class="math inline">\(c_j\)</span> is the physician-level random effect that is normally distributed <span class="math inline">\(N(0, \sigma^2_c)\)</span>. <span class="math inline">\(\epsilon_{ij}\)</span> is the individual-level effect, and <span class="math inline">\(\epsilon_{ij} \sim N(0, \sigma^2_\epsilon)\)</span>. The expected value of <span class="math inline">\(Y_{ij}\)</span> for patients treated by physicians in the control group is <span class="math inline">\(0\)</span>. And for the patients treated by physicians in the intervention <span class="math inline">\(E(Y_{ij}) = 0.35\)</span>.</p>
<div id="defining-the-simulation" class="section level3">
<h3>Defining the simulation</h3>
<p>The entire premise of this post is that we have a target number of study subjects (which in the real world example was set at 480), and the question is should we spread those subjects across a smaller or larger number of clusters? In all the simulations that follow, then, we have fixed the total number of subjects at 480. That means if we have 240 clusters, there will be only 2 in each one; and if we have 10 clusters, there will be 48 patients per cluster.</p>
<p>In the first example shown here, we are assuming an ICC = 0.10 and 60 clusters of 8 subjects each:</p>
<pre class="r"><code>library(simstudy)

Var <- iccRE(0.10, varWithin = 0.90, dist = "normal")

defC <- defData(varname = "ceffect", formula = 0, variance = Var,
                dist = "normal", id = "cid")
defC <- defData(defC, "nperc", formula = "8",
                dist = "nonrandom")

defI <- defDataAdd(varname = "y", formula = "ceffect + 0.35 * rx",
                   variance = 0.90)</code></pre>
</div>
<div id="generating-a-single-data-set-and-estimating-parameters" class="section level3">
<h3>Generating a single data set and estimating parameters</h3>
<p>Based on the data definitions, I can now generate a single data set:</p>
<pre class="r"><code>set.seed(711216)
dc <- genData(60, defC)
dc <- trtAssign(dc, 2, grpName = "rx")
dd <- genCluster(dc, "cid", numIndsVar = "nperc", level1ID = "id" )
dd <- addColumns(defI, dd)
dd</code></pre>
<pre><code>## cid rx ceffect nperc id y
## 1: 1 0 0.71732 8 1 0.42
## 2: 1 0 0.71732 8 2 0.90
## 3: 1 0 0.71732 8 3 -1.24
## 4: 1 0 0.71732 8 4 2.37
## 5: 1 0 0.71732 8 5 0.71
## ---
## 476: 60 1 -0.00034 8 476 -1.12
## 477: 60 1 -0.00034 8 477 0.88
## 478: 60 1 -0.00034 8 478 0.47
## 479: 60 1 -0.00034 8 479 0.28
## 480: 60 1 -0.00034 8 480 -0.54</code></pre>
<p>We use a linear mixed effects model to estimate the treatment effect and the variation across clusters:</p>
<pre class="r"><code>library(lmerTest)
lmerfit <- lmer(y~rx + (1 | cid), data = dd)</code></pre>
<p>Here are the estimates of the random and fixed effects:</p>
<pre class="r"><code>as.data.table(VarCorr(lmerfit))</code></pre>
<pre><code>## grp var1 var2 vcov sdcor
## 1: cid (Intercept) <NA> 0.14 0.38
## 2: Residual <NA> <NA> 0.78 0.88</code></pre>
<pre class="r"><code>coef(summary(lmerfit))</code></pre>
<pre><code>## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 0.008 0.089 58 0.09 0.929
## rx 0.322 0.126 58 2.54 0.014</code></pre>
<p>And here is the estimated ICC, which is in the ballpark of the “true” ICC of 0.10 (hitting it exactly is definitely not a sure thing given the relatively small number of clusters):</p>
<pre class="r"><code>library(sjstats)
icc(lmerfit)</code></pre>
<pre><code>##
## Intraclass Correlation Coefficient for Linear mixed model
##
## Family : gaussian (identity)
## Formula: y ~ rx + (1 | cid)
##
## ICC (cid): 0.1540</code></pre>
</div>
</div>
<div id="a-deeper-look-at-the-variation-of-estimates" class="section level2">
<h2>A deeper look at the variation of estimates</h2>
<p>In these simulations, we are primarily interested in investigating the effect of different numbers of clusters and different cluster sizes on power, variation, bias (and mean square error, which is a combined measure of variance and bias). This means replicating many data sets and studying the distribution of the estimates.</p>
<p>To do this, it is helpful to create a function that generates the data:</p>
<pre class="r"><code>reps <- function(nclust) {
  dc <- genData(nclust, defC)
  dc <- trtAssign(dc, 2, grpName = "rx")
  dd <- genCluster(dc, "cid", numIndsVar = "nperc", level1ID = "id")
  dd <- addColumns(defI, dd)
  lmerTest::lmer(y ~ rx + (1 | cid), data = dd)
}</code></pre>
<p>And here is a function to check if p-values from model estimates are less than 0.05, which will come in handy later when estimating power:</p>
<pre class="r"><code>pval <- function(x) {
  coef(summary(x))["rx", "Pr(>|t|)"] < 0.05
}</code></pre>
<p>Now we can generate 1000 data sets and fit a linear mixed effects model to each one, and store the results in an R <em>list</em>:</p>
<pre class="r"><code>library(parallel)
res <- mclapply(1:1000, function(x) reps(60))</code></pre>
<p>Extracting information from all 1000 model fits provides an estimate of power:</p>
<pre class="r"><code>mean(sapply(res, function(x) pval(x)))</code></pre>
<pre><code>## [1] 0.82</code></pre>
<p>And here are estimates of bias, variance, and root mean square error of the treatment effect estimates. We can see in this case, the estimated treatment effect is not particularly biased:</p>
<pre class="r"><code>RX <- sapply(res, function(x) getME(x, "fixef")["rx"])

c(true = 0.35, avg = mean(RX), var = var(RX),
  bias = mean(RX - 0.35), rmse = sqrt(mean((RX - 0.35)^2)))</code></pre>
<pre><code>## true avg var bias rmse
## 0.35000 0.35061 0.01489 0.00061 0.12197</code></pre>
<p>And if we are interested in seeing how well we measure the between cluster variation, we can evaluate that as well. The true variance (used to generate the data) was 0.10, and the average of the estimates was 0.100, quite close:</p>
<pre class="r"><code>RE <- sapply(res, function(x) as.numeric(VarCorr(x)))

c(true = Var, avg = mean(RE), var = var(RE),
  bias = mean(RE - Var), rmse = sqrt(mean((RE - Var)^2)))</code></pre>
<pre><code>## true avg var bias rmse
## 0.10000 0.10011 0.00160 0.00011 0.03996</code></pre>
</div>
<div id="replications-under-different-scenarios" class="section level2">
<h2>Replications under different scenarios</h2>
<p>Now we are ready to put all of this together for one final experiment to investigate the effects of the ICC and cluster number/size on power, variance, and bias. I generated 2000 data sets for each combination of assumptions about the number of clusters (ranging from 10 to 240, with cluster size adjusted to keep the total sample at 480) and ICCs (ranging from 0 to 0.15). For each combination, I estimated the variance and bias of the treatment effect parameter estimates and the between-cluster variance. (I include the code in case anyone needs to do something similar.)</p>
<pre class="r"><code>ps <- list()
pn <- 0

nclust <- c(10, 20, 30, 40, 48, 60, 80, 96, 120, 160, 240)
iccs <- c(0, 0.02, 0.05, 0.10, 0.15)

for (s in seq_along(nclust)) {
  for (i in seq_along(iccs)) {

    newvar <- iccRE(iccs[i], varWithin = .90, dist = "normal")
    newperc <- 480 / nclust[s]

    defC <- updateDef(defC, "ceffect", newvariance = newvar)
    defC <- updateDef(defC, "nperc", newformula = newperc)

    res <- mclapply(1:2000, function(x) reps(nclust[s]))

    RX <- sapply(res, function(x) getME(x, "fixef")["rx"])
    RE <- sapply(res, function(x) as.numeric(VarCorr(x)))
    power <- mean(sapply(res, function(x) pval(x)))

    pn <- pn + 1
    ps[[pn]] <- data.table(nclust = nclust[s],
                           newperc,
                           icc = iccs[i],
                           newvar,
                           power,
                           biasRX = mean(RX - 0.35),
                           varRX = var(RX),
                           rmseRX = sqrt(mean((RX - 0.35)^2)),
                           avgRE = mean(RE),
                           biasRE = mean(RE - newvar),
                           varRE = var(RE),
                           rmseRE = sqrt(mean((RE - newvar)^2)))
  }
}

ps <- data.table::rbindlist(ps)</code></pre>
<p>First, we can take a look at the power. Clearly, for lower ICCs, there is little marginal gain after a threshold between 60 and 80 clusters; with the higher ICCs, a study might benefit with respect to power from adding more clusters (and reducing cluster size):</p>
<pre class="r"><code>library(ggthemes)  # for Paul Tol's color schemes
library(scales)

ggplot(data = ps, aes(x = nclust, y = power, group = icc)) +
  geom_smooth(aes(color = factor(icc)), se = FALSE) +
  theme(panel.grid.minor = element_blank()) +
  scale_color_ptol(name = "ICC", labels = number(iccs, accuracy = .01)) +
  scale_x_continuous(name = "number of clusters", breaks = nclust)</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-04-30-what-matters-more-in-a-cluster-randomized-trial-number-or-size.en_files/figure-html/unnamed-chunk-15-1.png" width="672" /></p>
<p>Not surprisingly, the same picture emerges (only in reverse) when looking at the variance of the estimate for treatment effect. Variance declines quite dramatically as we increase the number of clusters (again, reducing cluster size) up to about 60 or so, with little gain in precision beyond that:</p>
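<p>The code for the variance plot is not shown, but presumably it mirrors the power plot with <code>varRX</code> on the y-axis; a sketch:</p>
<pre class="r"><code># sketch: same structure as the power plot, but plotting the
# variance of the treatment effect estimate instead of power
ggplot(data = ps, aes(x = nclust, y = varRX, group = icc)) +
  geom_smooth(aes(color = factor(icc)), se = FALSE) +
  theme(panel.grid.minor = element_blank()) +
  scale_color_ptol(name = "ICC", labels = number(iccs, accuracy = .01)) +
  scale_x_continuous(name = "number of clusters", breaks = nclust)</code></pre>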
<p><img src="https://www.rdatagen.net/post/2019-04-30-what-matters-more-in-a-cluster-randomized-trial-number-or-size.en_files/figure-html/unnamed-chunk-16-1.png" width="672" /></p>
<p>If we are interested in measuring the variation across clusters (which was <span class="math inline">\(\sigma^2_c\)</span> in the model), then a very different picture emerges. First, the plot of RMSE (which is <span class="math inline">\(E[(\hat{\theta} - \theta)^2]^{\frac{1}{2}}\)</span>, where <span class="math inline">\(\theta = \sigma^2_c\)</span>) indicates that increasing the number of clusters beyond a certain point may actually be a bad idea.</p>
<p><img src="https://www.rdatagen.net/post/2019-04-30-what-matters-more-in-a-cluster-randomized-trial-number-or-size.en_files/figure-html/unnamed-chunk-17-1.png" width="672" /></p>
<p>The trends of RMSE are mirrored by the variance of <span class="math inline">\(\hat{\sigma^2_c}\)</span>:</p>
<p><img src="https://www.rdatagen.net/post/2019-04-30-what-matters-more-in-a-cluster-randomized-trial-number-or-size.en_files/figure-html/unnamed-chunk-18-1.png" width="672" /></p>
<p>I show the bias of the variance estimate because it highlights the point that it is very difficult to get an unbiased estimate of <span class="math inline">\(\sigma^2_c\)</span> when the ICC is low, particularly with a large number of small clusters. This may not be so surprising, because with small cluster sizes it may be more difficult to estimate the within-cluster variance, an important piece of the total variation.</p>
<p><img src="https://www.rdatagen.net/post/2019-04-30-what-matters-more-in-a-cluster-randomized-trial-number-or-size.en_files/figure-html/unnamed-chunk-19-1.png" width="672" /></p>
</div>
<div id="almost-an-addendum" class="section level2">
<h2>Almost an addendum</h2>
<p>I’ve focused entirely on the direct trade-off between the number of clusters and cluster size, because that was the question raised by the study that motivated this post. However, we may have a fixed number of clusters, and we might want to know if it makes sense to recruit more subjects from each cluster. To get a picture of this, I re-ran the simulations with the number of clusters fixed at 60, evaluating power and variance of the treatment effect estimator at cluster sizes ranging from 5 to 60.</p>
<p>Under the assumptions used here, it also looks like there is a point after which little can be gained by adding subjects to each cluster (at least in terms of both power and precision of the estimate of the treatment effect):</p>
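<p>These addendum simulations can be run by holding the number of clusters at 60 and looping over cluster sizes instead. A sketch, reusing <code>defC</code>, <code>reps</code>, <code>pval</code>, and <code>iccs</code> from the main experiment (the exact grid of cluster sizes here is my guess, not the original):</p>
<pre class="r"><code>sizes <- c(5, 10, 15, 20, 30, 40, 50, 60)  # assumed grid of cluster sizes
psize <- list()
pn <- 0

for (m in seq_along(sizes)) {
  for (i in seq_along(iccs)) {
    
    defC <- updateDef(defC, "ceffect",
                      newvariance = iccRE(iccs[i], varWithin = .90, dist = "normal"))
    defC <- updateDef(defC, "nperc", newformula = sizes[m])
    
    res <- mclapply(1:2000, function(x) reps(60))
    RX <- sapply(res, function(x) getME(x, "fixef")["rx"])
    
    pn <- pn + 1
    psize[[pn]] <- data.table(size = sizes[m], icc = iccs[i],
                              power = mean(sapply(res, function(x) pval(x))),
                              varRX = var(RX))
  }
}

psize <- data.table::rbindlist(psize)</code></pre>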
<p><img src="https://www.rdatagen.net/post/2019-04-30-what-matters-more-in-a-cluster-randomized-trial-number-or-size.en_files/figure-html/unnamed-chunk-21-1.png" width="672" /><img src="https://www.rdatagen.net/post/2019-04-30-what-matters-more-in-a-cluster-randomized-trial-number-or-size.en_files/figure-html/unnamed-chunk-21-2.png" width="672" /></p>
</div>
Even with randomization, mediation analysis can still be confounded
https://www.rdatagen.net/post/even-with-randomization-mediation-analysis-can-still-be-confounded/
Tue, 16 Apr 2019 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/even-with-randomization-mediation-analysis-can-still-be-confounded/<p>Randomization is super useful because it usually eliminates the risk that confounding will lead to a biased estimate of a treatment effect. However, this only goes so far. If you are conducting a mediation analysis in the hopes of understanding the underlying causal mechanism of a treatment, it is important to remember that the mediator has <em>not</em> been randomized, only the treatment. This means that the estimated mediation effect <em>is</em> still at risk of being confounded.</p>
<p>I never fail to mention this when a researcher tells me they are interested in doing a mediation analysis (and it seems like more and more folks are interested in including this analysis as part of their studies). So, when my son brought up the fact that the lead investigator on his experimental psychology project wanted to include a mediation analysis, I, of course, had to pipe up. “You have to be careful, you know.”</p>
<p>But, he wasn’t buying it, wondering why randomization didn’t take care of the confounding; surely, the potential confounders would be balanced across treatment groups. Maybe I’d had a little too much wine, as I considered he might have a point. But no - I’d quickly come to my senses - it doesn’t matter that the confounder is balanced across treatment groups (which it very well could be), it would still be unbalanced across the different levels of the mediator, which is what really matters if we are estimating the effect of the mediator.</p>
<p>I proposed to do a simulation of this phenomenon. My son was not impressed, but I went ahead and did it anyway, and I am saving it here in case he wants to take a look. Incidentally, this is effectively a brief follow-up to an <a href="https://www.rdatagen.net/post/causal-mediation/">earlier post</a> on mediation. So, if the way in which I am generating the data seems a bit opaque, you might want to take a <a href="https://www.rdatagen.net/post/causal-mediation/">look</a> at what I did earlier.</p>
<div id="the-data-generating-process" class="section level2">
<h2>The data generating process</h2>
<p>Here is a DAG that succinctly describes how I will generate the data. You can see clearly that <span class="math inline">\(U_2\)</span> is a confounder of the relationship between the mediator <span class="math inline">\(M\)</span> and the outcome <span class="math inline">\(Y\)</span>. (It should be noted that if all we were interested in is the causal effect of <span class="math inline">\(A\)</span> on <span class="math inline">\(Y\)</span>, <span class="math inline">\(U_2\)</span> is <em>not</em> a confounder, so we wouldn’t need to control for <span class="math inline">\(U_2\)</span>.)</p>
<p><img src="https://www.rdatagen.net/img/post-confoundmed/DAGmediation.png" /></p>
<p>As I did in the earlier simulation of mediation, I am simulating the potential outcomes so that we can see the “truth” that we are trying to measure.</p>
<pre class="r"><code>defU <- defData(varname = "U2", formula = 0, 
                variance = 1.5, dist = "normal")

defI <- defDataAdd(varname = "M0", formula = "-2 + U2", 
                   dist = "binary", link = "logit")
defI <- defDataAdd(defI, varname = "M1", formula = "-1 + U2", 
                   dist = "binary", link = "logit")

defA <- defReadAdd("DataConfoundMediation/mediation def.csv")</code></pre>
<table class="table table-condensed">
<thead>
<tr>
<th style="text-align:right;">
varname
</th>
<th style="text-align:right;">
formula
</th>
<th style="text-align:right;">
variance
</th>
<th style="text-align:right;">
dist
</th>
<th style="text-align:right;">
link
</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">e0 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">1</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">normal </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">Y0M0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">2 + M0*2 + U2 + e0 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">nonrandom</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">Y0M1</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">2 + M1*2 + U2 + e0 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">nonrandom</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">e1 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">1</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">normal </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">Y1M0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">8 + M0*5 + U2 + e1 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">nonrandom</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">Y1M1</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">8 + M1*5 + U2 + e1 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">nonrandom</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">M </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">(A==0) * M0 + (A==1) * M1 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">nonrandom</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">Y </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">(A==0) * Y0M0 + (A==1) * Y1M1</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">nonrandom</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
</tbody>
</table>
<div id="getting-the-true-causal-effects" class="section level3">
<h3>Getting the “true” causal effects</h3>
<p>With the definitions set, we can generate a very, very large data set (not infinite, but pretty close) to get at the “true” causal effects that we will try to recover using smaller (finite) data sets. I am calculating the causal mediated effects (for the treated and controls) and the causal direct effects (also for the treated and controls).</p>
<pre class="r"><code>set.seed(184049)

du <- genData(1000000, defU)
dtrue <- addCorFlex(du, defI, rho = 0.6, corstr = "cs")
dtrue <- trtAssign(dtrue, grpName = "A")
dtrue <- addColumns(defA, dtrue)

truth <- round(dtrue[, .(CMEc = mean(Y0M1 - Y0M0), CMEt = mean(Y1M1 - Y1M0),
                         CDEc = mean(Y1M0 - Y0M0), CDEt = mean(Y1M1 - Y0M1))], 2)

truth</code></pre>
<pre><code>##    CMEc CMEt CDEc CDEt
## 1: 0.29 0.72 6.51 6.95</code></pre>
<p>And here we can see that although <span class="math inline">\(U_2\)</span> is balanced across treatment groups <span class="math inline">\(A\)</span>, <span class="math inline">\(U_2\)</span> is still associated with the mediator <span class="math inline">\(M\)</span>:</p>
<pre class="r"><code>dtrue[, mean(U2), keyby = A]</code></pre>
<pre><code>##    A       V1
## 1: 0 -0.00220
## 2: 1 -0.00326</code></pre>
<pre class="r"><code>dtrue[, mean(U2), keyby = M]</code></pre>
<pre><code>##    M     V1
## 1: 0 -0.287
## 2: 1  0.884</code></pre>
<p>Also - since <span class="math inline">\(U_2\)</span> is a confounder, we would expect it to be associated with the outcome <span class="math inline">\(Y\)</span>, which it is:</p>
<pre class="r"><code>dtrue[, cor(U2, Y)]</code></pre>
<pre><code>## [1] 0.42</code></pre>
</div>
<div id="recovering-the-estimate-from-a-small-data-set" class="section level3">
<h3>Recovering the estimate from a small data set</h3>
<p>We generate a smaller data set using the same process:</p>
<pre class="r"><code>du <- genData(1000, defU)
dd <- addCorFlex(du, defI, rho = 0.6, corstr = "cs")
dd <- trtAssign(dd, grpName = "A")
dd <- addColumns(defA, dd)</code></pre>
<p>We can estimate the causal effects using the <code>mediation</code> package, by specifying a “mediation” model and an “outcome model”. I am going to compare two approaches, one that controls for <span class="math inline">\(U_2\)</span> in both models, and a second that ignores the confounder in both.</p>
<pre class="r"><code>library(mediation)

### models that control for confounder

med.fitc <- glm(M ~ A + U2, data = dd, family = binomial("logit"))
out.fitc <- lm(Y ~ M*A + U2, data = dd)

med.outc <- mediate(med.fitc, out.fitc, treat = "A", mediator = "M",
                    robustSE = TRUE, sims = 500)

### models that ignore confounder

med.fitx <- glm(M ~ A, data = dd, family = binomial("logit"))
out.fitx <- lm(Y ~ M*A, data = dd)

med.outx <- mediate(med.fitx, out.fitx, treat = "A", mediator = "M",
                    robustSE = TRUE, sims = 500)</code></pre>
<p>It appears that the approach that adjusts for <span class="math inline">\(U_2\)</span> (middle row) provides a set of estimates closer to the truth (top row) than the approach that ignores <span class="math inline">\(U_2\)</span> (bottom row):</p>
<pre class="r"><code>dres <- rbind(
  truth,
  data.table(CMEc = med.outc$d0, CMEt = med.outc$d1,
             CDEc = med.outc$z0, CDEt = med.outc$z1),
  data.table(CMEc = med.outx$d0, CMEt = med.outx$d1,
             CDEc = med.outx$z0, CDEt = med.outx$z1)
)

round(dres, 2)</code></pre>
<pre><code>##    CMEc CMEt CDEc CDEt
## 1: 0.29 0.72 6.51 6.95
## 2: 0.32 0.84 6.51 7.03
## 3: 0.53 1.07 6.32 6.85</code></pre>
<p>Of course, it is not prudent to draw conclusions from a single simulation. So, I generated 1000 data sets and recorded all the results. A visual summary of the results shows that the approach that ignores <span class="math inline">\(U_2\)</span> is biased with respect to the four causal effects, whereas including <span class="math inline">\(U_2\)</span> in the analysis yields unbiased estimates. In the plot, the averages of the estimates are the black points, the segments represent <span class="math inline">\(\pm \ 2 \ sd\)</span>, and the blue vertical lines represent the truth:</p>
<p><img src="https://www.rdatagen.net/img/post-confoundmed/estMediation.png" /></p>
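<p>The replication behind that summary can be sketched as follows, assuming the definition tables <code>defU</code>, <code>defI</code>, and <code>defA</code> from above (the wrapper function <code>repMed</code> is hypothetical, but the estimation steps mirror the single-data-set code):</p>
<pre class="r"><code>repMed <- function(n) {
  
  # generate one data set exactly as above
  du <- genData(n, defU)
  dd <- addCorFlex(du, defI, rho = 0.6, corstr = "cs")
  dd <- trtAssign(dd, grpName = "A")
  dd <- addColumns(defA, dd)
  
  # the adjusted analysis; the unadjusted version drops U2 from both formulas
  med.fit <- glm(M ~ A + U2, data = dd, family = binomial("logit"))
  out.fit <- lm(Y ~ M*A + U2, data = dd)
  med.out <- mediate(med.fit, out.fit, treat = "A", mediator = "M",
                     robustSE = TRUE, sims = 500)
  
  data.table(CMEc = med.out$d0, CMEt = med.out$d1,
             CDEc = med.out$z0, CDEt = med.out$z1)
}

resMed <- data.table::rbindlist(lapply(1:1000, function(x) repMed(1000)))</code></pre>
<p>With 500 quasi-Bayesian simulations per fit, 1000 replications are computationally heavy; in practice <code>mclapply</code> would be a natural substitute for <code>lapply</code>.</p>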
<p>Almost as an addendum, using the almost infinitely large “true” data set, we can see that the total treatment effect of <span class="math inline">\(A\)</span> can be estimated from observed data <em>ignoring</em> <span class="math inline">\(U_2\)</span>, because as we saw earlier, <span class="math inline">\(U_2\)</span> is indeed balanced across both levels of <span class="math inline">\(A\)</span> due to randomization:</p>
<pre class="r"><code>c(est = coef(lm(Y ~ A, data = dtrue))["A"],
  truth = round(dtrue[, .(TotalEff = mean(Y1M1 - Y0M0))], 2))</code></pre>
<pre><code>## $est.A
## [1] 7.24
##
## $truth.TotalEff
## [1] 7.24</code></pre>
</div>
</div>
Musings on missing data
https://www.rdatagen.net/post/musings-on-missing-data/
Tue, 02 Apr 2019 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/musings-on-missing-data/<p>I’ve been meaning to share an analysis I recently did to estimate the strength of the relationship between a young child’s ability to recognize emotions in others (e.g. teachers and fellow students) and her longer term academic success. The study itself is quite interesting (hopefully it will be published sometime soon), but I really wanted to write about it here as it involved the challenging problem of missing data in the context of heterogeneous effects (different across sub-groups) and clustering (by schools).</p>
<p>As I started to develop simulations to highlight key issues, I found myself getting bogged down in the data generation process. Once I realized I needed to be systematic about thinking how to generate various types of missingness, I thought maybe DAGs would help to clarify some of the issues (I’ve written a bit about DAGS <a href="https://www.rdatagen.net/post/dags-colliders-and-an-example-of-variance-bias-tradeoff/">before</a> and provided some links to some good references). I figured that I probably wasn’t the first to think of this, and a quick search confirmed that there is indeed a pretty rich literature on the topic. I first found this <a href="http://jakewestfall.org/blog/index.php/2017/08/22/using-causal-graphs-to-understand-missingness-and-how-to-deal-with-it/">blog post</a> by Jake Westfall, which, in addition to describing many of the key issues that I want to address here, provides some excellent references, including this paper by <a href="https://journals.sagepub.com/doi/pdf/10.1177/0962280210394469"><em>Daniel et al</em></a> and this one by <a href="http://papers.nips.cc/paper/4899-graphical-models-for-inference-with-missing-data.pdf"><em>Mohan et al</em></a>.</p>
<p>I think the value I can add here is to provide some basic code to get the data generation processes going, in case you want to explore missing data methods for yourself.</p>
<div id="thinking-systematically-about-missingness" class="section level2">
<h2>Thinking systematically about missingness</h2>
<p>In the world of missing data, it has proved to be immensely useful to classify different types of missing data. That is, there could be various explanations of how the missingness came to be in a particular data set. This is important because, as in any other modeling problem, having an idea about the data generation process (in this case the missingness generation process) informs how you should proceed to get the “best” estimate possible using the data at hand.</p>
<p>Missingness can be recorded as a binary characteristic of a particular data point for a particular individual; the data point is missing or it is not. It seems to be the convention that the missingness indicator is <span class="math inline">\(R_{p}\)</span> (where <span class="math inline">\(p\)</span> is the variable), with <span class="math inline">\(R_{p} = 1\)</span> if the value of <span class="math inline">\(p\)</span> is missing and <span class="math inline">\(0\)</span> otherwise.</p>
<p>We say data are <em>missing completely at random</em> (MCAR) when <span class="math inline">\(P(R)\)</span> is independent of all data, observed and missing. For example, if missingness depends on the flip of a coin, the data would be MCAR. Data are <em>missing at random</em> when <span class="math inline">\(P(R \ | \ D_{obs})\)</span> is independent of <span class="math inline">\(D_{mis}\)</span>, the missing data. In this case, if older people tend to have more missing data, and we’ve recorded age, then the data are MAR. And finally, data are <em>missing not at random</em> (MNAR) when <span class="math inline">\(P(R \ | \ D_{obs}) = f(D_{mis})\)</span>, or missingness is related to the unobserved data even after conditioning on observed data. If missingness is related to the health of a person at follow-up and the outcome measurement reflects the health of a person, then the data are MNAR.</p>
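<p>In <code>simstudy</code> terms, the three mechanisms differ only in what the missingness probability is allowed to depend on. Schematically (using the variable names from the simulations that follow):</p>
<pre class="r"><code># MCAR: P(R) is constant - depends on nothing in the data
defMiss(varname = "y", formula = 0.2, logit.link = FALSE)

# MAR: P(R) depends only on the fully observed covariate a
defMiss(varname = "y", formula = "-2 + 1.5*a", logit.link = TRUE)

# MNAR: P(R) depends on the outcome y itself, observed or not
defMiss(varname = "y", formula = "-3 + 2*y", logit.link = TRUE)</code></pre>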
</div>
<div id="the-missingness-taxonomy-in-3-dags" class="section level2">
<h2>The missingness taxonomy in 3 DAGs</h2>
<p>The <a href="http://papers.nips.cc/paper/4899-graphical-models-for-inference-with-missing-data.pdf"><em>Mohan et al</em></a> paper suggests including the missing indicator <span class="math inline">\(R_p\)</span> directly in the DAG to clarify the nature of dependence between the variables and the missingness. If we have missingness in the outcome <span class="math inline">\(Y\)</span> (so that for at least one individual <span class="math inline">\(R_y = 1\)</span>), there is an induced observed variable <span class="math inline">\(Y^*\)</span> that equals <span class="math inline">\(Y\)</span> if <span class="math inline">\(R_y = 0\)</span>, and is missing if <span class="math inline">\(R_y = 1\)</span>. <span class="math inline">\(Y\)</span> represents the complete outcome data, which we don’t observe if there is any missingness. The question is, can we estimate the joint distribution <span class="math inline">\(P(A, Y)\)</span> (or really any characteristic of the distribution, such as the mean of <span class="math inline">\(Y\)</span> at different levels of <span class="math inline">\(A\)</span>, which would give us a measure of causal effect) using the observed data <span class="math inline">\((A, R_y, Y^*)\)</span>? (For much of what follows, I am drawing directly from the <em>Mohan et al</em> paper.)</p>
<div id="mcar" class="section level3">
<h3>MCAR</h3>
<p><img src="https://www.rdatagen.net/img/post-missing/MCAR.png" /></p>
<p>First, consider when the missingness is MCAR, as depicted above. From the DAG, <span class="math inline">\(A \cup Y \perp \! \! \! \perp R_y\)</span>, since <span class="math inline">\(Y^*\)</span> is a “collider”. It follows that <span class="math inline">\(P(A, Y) = P(A, Y \ | \ R_y)\)</span>, or more specifically <span class="math inline">\(P(A, Y) = P(A, Y \ | \ R_y=0)\)</span>. And when <span class="math inline">\(R_y = 0\)</span>, by definition <span class="math inline">\(Y = Y^*\)</span>. So we end up with <span class="math inline">\(P(A, Y) = P(A, Y^* \ | \ R_y = 0)\)</span>. Using observed data only, we can “recover” the underlying relationship between <span class="math inline">\(A\)</span> and <span class="math inline">\(Y\)</span>.</p>
<p>A simulation may help to see this. First, we use the <code>simstudy</code> functions to define both the data generation and missing data processes:</p>
<pre class="r"><code>def <- defData(varname = "a", formula = 0, variance = 1, dist = "normal")
def <- defData(def, "y", formula = "1*a", variance = 1, dist = "normal")
defM <- defMiss(varname = "y", formula = 0.2, logit.link = FALSE)</code></pre>
<p>The complete data are generated first, followed by the missing data matrix, and ending with the observed data set.</p>
<pre class="r"><code>set.seed(983987)
dcomp <- genData(1000, def)
dmiss <- genMiss(dcomp, defM, idvars = "id")
dobs <- genObs(dcomp, dmiss, "id")
head(dobs)</code></pre>
<pre><code>##    id      a     y
## 1:  1  0.171  0.84
## 2:  2 -0.882  0.37
## 3:  3  0.362    NA
## 4:  4  1.951  1.62
## 5:  5  0.069 -0.18
## 6:  6 -2.423 -1.29</code></pre>
<p>In this replication, about 22% of the <span class="math inline">\(Y\)</span> values are missing:</p>
<pre class="r"><code>dmiss[, mean(y)]</code></pre>
<pre><code>## [1] 0.22</code></pre>
<p>If <span class="math inline">\(P(A, Y) = P(A, Y^* \ | \ R_y = 0)\)</span>, then we would expect that the mean of <span class="math inline">\(Y\)</span> in the complete data set will equal the mean of <span class="math inline">\(Y^*\)</span> in the observed data set. And indeed, they appear quite close:</p>
<pre class="r"><code>round(c(dcomp[, mean(y)], dobs[, mean(y, na.rm = TRUE)]), 2)</code></pre>
<pre><code>## [1] 0.03 0.02</code></pre>
<p>Going beyond the mean, we can characterize the joint distribution of <span class="math inline">\(A\)</span> and <span class="math inline">\(Y\)</span> using a linear model (which we know is true, since that is how we generated the data). Since the outcome data are missing completely at random, we would expect the relationship between <span class="math inline">\(A\)</span> and <span class="math inline">\(Y^*\)</span> to be very close to the true relationship represented by the complete (but not fully observed) data.</p>
<pre class="r"><code>fit.comp <- lm(y ~ a, data = dcomp)
fit.obs <- lm(y ~ a, data = dobs)
broom::tidy(fit.comp)</code></pre>
<pre><code>## # A tibble: 2 x 5
##   term        estimate std.error statistic   p.value
##   <chr>          <dbl>     <dbl>     <dbl>     <dbl>
## 1 (Intercept) -0.00453    0.0314    -0.144 8.85e- 1
## 2 a            0.964      0.0313    30.9   2.62e-147</code></pre>
<pre class="r"><code>broom::tidy(fit.obs)</code></pre>
<pre><code>## # A tibble: 2 x 5
##   term        estimate std.error statistic   p.value
##   <chr>          <dbl>     <dbl>     <dbl>     <dbl>
## 1 (Intercept)  -0.0343    0.0353    -0.969 3.33e- 1
## 2 a             0.954     0.0348    27.4   4.49e-116</code></pre>
<p>And if we plot those lines over the actual data, they should be quite close, if not overlapping. In the plot below, the red points represent the true values of the missing data. We can see that missingness is scattered randomly across values of <span class="math inline">\(A\)</span> and <span class="math inline">\(Y\)</span> - this is what MCAR data looks like. The solid line represents the fitted regression line based on the full data set (assuming no data are missing) and the dotted line represents the fitted regression line using complete cases only.</p>
<pre class="r"><code>dplot <- cbind(dcomp, y.miss = dmiss$y)

ggplot(data = dplot, aes(x = a, y = y)) +
  geom_point(aes(color = factor(y.miss)), size = 1) +
  scale_color_manual(values = c("grey60", "#e67c7c")) +
  geom_abline(intercept = coef(fit.comp)[1], 
              slope = coef(fit.comp)[2]) +
  geom_abline(intercept = coef(fit.obs)[1], 
              slope = coef(fit.obs)[2], lty = 2) +
  theme(legend.position = "none",
        panel.grid = element_blank())</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-04-02-musings-on-missing-data.en_files/figure-html/unnamed-chunk-7-1.png" width="672" /></p>
</div>
<div id="mar" class="section level3">
<h3>MAR</h3>
<p><img src="https://www.rdatagen.net/img/post-missing/MAR.png" /></p>
<p>This DAG is showing a MAR pattern, where <span class="math inline">\(Y \perp \! \! \! \perp R_y \ | \ A\)</span>, again because <span class="math inline">\(Y^*\)</span> is a collider. This means that <span class="math inline">\(P(Y | A) = P(Y | A, R_y)\)</span>. If we decompose <span class="math inline">\(P(A, Y) = P(Y | A)P(A)\)</span>, you can see how that independence is useful. Substituting <span class="math inline">\(P(Y | A, R_y)\)</span> for <span class="math inline">\(P(Y | A)\)</span> , <span class="math inline">\(P(A, Y) = P(Y | A, R_y)P(A)\)</span>. Going further, <span class="math inline">\(P(A, Y) = P(Y | A, R_y=0)P(A)\)</span>, which is equal to <span class="math inline">\(P(Y^* | A, R_y=0)P(A)\)</span>. Everything in this last decomposition is observable - <span class="math inline">\(P(A)\)</span> from the full data set and <span class="math inline">\(P(Y^* | A, R_y=0)\)</span> from the records with observed <span class="math inline">\(Y\)</span>’s only.</p>
<p>This implies that, conceptually at least, we can estimate the conditional probability distribution of observed-only <span class="math inline">\(Y\)</span>’s for each level of <span class="math inline">\(A\)</span>, and then pool the distributions across the fully observed distribution of <span class="math inline">\(A\)</span>. That is, under an assumption of data MAR, we can recover the joint distribution of the full data using observed data only.</p>
<p>To simulate, we keep the data generation process the same as under MCAR; the only thing that changes is the missingness generation process. <span class="math inline">\(P(R_y)\)</span> now depends on <span class="math inline">\(A\)</span>:</p>
<pre class="r"><code>defM <- defMiss(varname = "y", formula = "-2 + 1.5*a", logit.link = TRUE)</code></pre>
<p>After generating the data as before, the proportion of missingness is unchanged (though the pattern of missingness certainly is):</p>
<pre class="r"><code>dmiss[, mean(y)]</code></pre>
<pre><code>## [1] 0.22</code></pre>
<p>We do not expect the marginal distribution of <span class="math inline">\(Y\)</span> and <span class="math inline">\(Y^*\)</span> to be the same (only the distributions conditional on <span class="math inline">\(A\)</span> are close), so the means should be different:</p>
<pre class="r"><code>round(c(dcomp[, mean(y)], dobs[, mean(y, na.rm = TRUE)]), 2)</code></pre>
<pre><code>## [1] 0.03 -0.22</code></pre>
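<p>Despite the shifted marginal mean, the MAR decomposition suggests we can recover the marginal mean of <span class="math inline">\(Y\)</span> by standardizing: estimate the mean of the observed <span class="math inline">\(Y^*\)</span>’s within strata of <span class="math inline">\(A\)</span>, and pool those means over the full distribution of <span class="math inline">\(A\)</span>. A rough sketch (the decile binning of continuous <span class="math inline">\(A\)</span> and the variable name <code>astrat</code> are my own crude stand-ins for conditioning):</p>
<pre class="r"><code># bin the fully observed a into deciles
dobs[, astrat := cut(a, quantile(a, seq(0, 1, .1)), include.lowest = TRUE)]

# stratum-specific means of observed y, pooled over the full distribution of a
stratmeans <- dobs[!is.na(y), .(ybar = mean(y)), keyby = astrat]
wts <- dobs[, .N, keyby = astrat]

round(stratmeans[wts, on = "astrat"][, sum(ybar * N) / sum(N)], 2)</code></pre>
<p>The pooled estimate should land close to the complete-data mean, even though the complete-case mean does not.</p>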
<p>However, since the conditional distribution of <span class="math inline">\((Y|A)\)</span> is equivalent to <span class="math inline">\((Y^*|A, R_y = 0)\)</span>, we would expect a regression model <span class="math inline">\(E[Y] = \beta_0 + \beta_1A\)</span> to yield estimates very close to those from <span class="math inline">\(E[Y^*] = \beta_0^{*} + \beta_1^{*}A\)</span>. That is, we would expect <span class="math inline">\(\beta_1^{*} \approx \beta_1\)</span>.</p>
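<p>The two fits below mirror the MCAR example; a sketch of the code that presumably generated this output:</p>
<pre class="r"><code>fit.comp <- lm(y ~ a, data = dcomp)
fit.obs <- lm(y ~ a, data = dobs)

broom::tidy(fit.comp)
broom::tidy(fit.obs)</code></pre>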
<pre><code>## # A tibble: 2 x 5
##   term        estimate std.error statistic   p.value
##   <chr>          <dbl>     <dbl>     <dbl>     <dbl>
## 1 (Intercept) -0.00453    0.0314    -0.144 8.85e- 1
## 2 a            0.964      0.0313    30.9   2.62e-147</code></pre>
<pre><code>## # A tibble: 2 x 5
##   term        estimate std.error statistic  p.value
##   <chr>          <dbl>     <dbl>     <dbl>    <dbl>
## 1 (Intercept)  0.00756    0.0369     0.205 8.37e- 1
## 2 a            0.980      0.0410    23.9   3.57e-95</code></pre>
<p>The overlapping lines in the plot confirm the close model estimates. In addition, you can see here that missingness is associated with higher values of <span class="math inline">\(A\)</span>.</p>
<p><img src="https://www.rdatagen.net/post/2019-04-02-musings-on-missing-data.en_files/figure-html/unnamed-chunk-13-1.png" width="672" /></p>
</div>
<div id="mnar" class="section level3">
<h3>MNAR</h3>
<p><img src="https://www.rdatagen.net/img/post-missing/MNAR.png" /></p>
<p>In MNAR, there is no way to separate <span class="math inline">\(Y\)</span> from <span class="math inline">\(R_y\)</span>. Reading from the DAG, <span class="math inline">\(P(Y) \neq P(Y^* | R_y)\)</span>, and <span class="math inline">\(P(Y|A) \neq P(Y^* | A, R_y)\)</span>. There is no way to recover the joint distribution <span class="math inline">\(P(A,Y)\)</span> with observed data. <em>Mohan et al</em> do show that under some circumstances, it <em>is</em> possible to use observed data to recover the true distribution under MNAR (particularly when there is missingness related to the exposure measurement <span class="math inline">\(A\)</span>), but not in this particular case.</p>
<p><a href="https://journals.sagepub.com/doi/pdf/10.1177/0962280210394469"><em>Daniel et al</em></a> have a different approach to determine whether the causal relationship of <span class="math inline">\(A\)</span> and <span class="math inline">\(Y\)</span> is identifiable under the different mechanisms. They do not use a variable like <span class="math inline">\(Y^*\)</span>, but instead introduce external nodes <span class="math inline">\(U_a\)</span> and <span class="math inline">\(U_y\)</span> representing unmeasured variability related to both exposure and outcome (panel <em>a</em> of the diagram below).</p>
<p><img src="https://www.rdatagen.net/img/post-missing/MNAR%20Daniel.png" /></p>
<p>In the case of MNAR, when you use complete cases only, you are effectively controlling for <span class="math inline">\(R_y\)</span> (panel <em>b</em>). Since <span class="math inline">\(Y\)</span> is a collider (and <span class="math inline">\(U_y\)</span> is an ancestor of <span class="math inline">\(Y\)</span>), this has the effect of inducing an association between <span class="math inline">\(A\)</span> and <span class="math inline">\(U_y\)</span>, the unmeasured causes of <span class="math inline">\(Y\)</span>. By doing this, we have introduced unmeasured confounding that cannot be corrected, because <span class="math inline">\(U_y\)</span>, by definition, always represents the portion of unmeasured variation of <span class="math inline">\(Y\)</span>.</p>
<p>In the simulation, I explicitly generate <span class="math inline">\(U_y\)</span>, so we can see if we observe this association:</p>
<pre class="r"><code>def <- defData(varname = "a", formula = 0, variance = 1, dist = "normal")
def <- defData(def, "u.y", formula = 0, variance = 1, dist = "normal")
def <- defData(def, "y", formula = "1*a + u.y", dist = "nonrandom")</code></pre>
<p>This time around, we generate missingness of <span class="math inline">\(Y\)</span> as a function of <span class="math inline">\(Y\)</span> itself:</p>
<pre class="r"><code>defM <- defMiss(varname = "y", formula = "-3 + 2*y", logit.link = TRUE)</code></pre>
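<p>The feed omits the step that actually generates the missing data and produces the output just below (the proportion of missing values). A minimal sketch using <code>simstudy</code>’s <code>genMiss</code> and <code>genObs</code> functions would look something like this; the seed and sample size here are my own choices, not necessarily the original’s:</p>
<pre class="r"><code>library(simstudy)

set.seed(2019)

dcomp <- genData(1000, def)                   # complete data set
dmiss <- genMiss(dcomp, defM, idvars = "id")  # missingness indicators
dobs <- genObs(dcomp, dmiss, idvars = "id")   # observed data (y set to NA)

round(dobs[, mean(is.na(y))], 2)              # proportion of y missing</code></pre>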
<pre><code>## [1] 0.21</code></pre>
<p>Indeed, <span class="math inline">\(A\)</span> and <span class="math inline">\(U_y\)</span> are virtually uncorrelated in the full data set, but are negatively correlated in the cases where <span class="math inline">\(Y\)</span> is not missing, as theory would suggest:</p>
<pre class="r"><code>round(c(dcomp[, cor(a, u.y)], dobs[!is.na(y), cor(a, u.y)]), 2)</code></pre>
<pre><code>## [1] -0.04 -0.23</code></pre>
<p>The plot generated from these data shows diverging regression lines, the divergence a result of the induced unmeasured confounding.</p>
<p><img src="https://www.rdatagen.net/post/2019-04-02-musings-on-missing-data.en_files/figure-html/unnamed-chunk-18-1.png" width="672" /></p>
<p>In this MNAR example, we see that the missingness is indeed associated with higher values of <span class="math inline">\(Y\)</span>, although the proportion of missingness remains at about 21%, consistent with the earlier simulations.</p>
</div>
</div>
<div id="there-may-be-more-down-the-road" class="section level2">
<h2>There may be more down the road</h2>
<p>I’ll close here, but in the near future, I hope to explore various (slightly more involved) scenarios under which complete case analysis is adequate, or where something like multiple imputation is more useful. Also, I would like to get back to the original motivation for writing about missingness, which was to describe how I went about analyzing the child emotional intelligence data. Both of these will be much easier now that we have the basic tools to think about how missing data can be generated in a systematic way.</p>
<p>
<small><font color="darkkhaki">
References:
<p>Daniel, Rhian M., Michael G. Kenward, Simon N. Cousens, and Bianca L. De Stavola. “Using causal diagrams to guide analysis in missing data problems.” Statistical methods in medical research 21, no. 3 (2012): 243-256.</p>
<p>Mohan, Karthika, Judea Pearl, and Jin Tian. “Graphical models for inference with missing data.” In Advances in neural information processing systems, pp. 1277-1285. 2013.</p>
<p>Westfall, Jake. “Using causal graphs to understand missingness and how to deal with it.” Cookie Scientist (blog). August 22, 2017. Accessed March 25, 2019. <a href="http://jakewestfall.org/blog/" class="uri">http://jakewestfall.org/blog/</a>.</p>
</font></small>
</p>
</div>
A case where prospective matching may limit bias in a randomized trial
https://www.rdatagen.net/post/a-case-where-prospecitve-matching-may-limit-bias/
Tue, 12 Mar 2019 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/a-case-where-prospecitve-matching-may-limit-bias/<p>Analysis is important, but study design is paramount. I am involved with the Diabetes Research, Education, and Action for Minorities (DREAM) Initiative, which is, among other things, estimating the effect of a group-based therapy program on weight loss for patients who have been identified as pre-diabetic (which means they have elevated HbA1c levels). The original plan was to randomize patients at a clinic to treatment or control, and then follow up with those assigned to the treatment group to see if they wanted to participate. The primary outcome is going to be measured using medical records, so those randomized to control (which basically means nothing special happens to them) will not need to interact with the researchers in any way.</p>
<p>The concern with this design is that only those patients randomized to the intervention arm of the study have an opportunity to make a choice about participating. In fact, in a pilot study, it was quite difficult to recruit some patients, because the group therapy sessions were frequently provided during working hours. So, even if the groups are balanced after randomization with respect to important (and unimportant) characteristics like age, gender, weight, baseline A1c levels, etc., the patients who actually receive the group therapy might look quite different from the patients who receive treatment as usual. The decision to actually participate in group therapy is not randomized, so it is possible (maybe even likely) that the group getting the therapy is older and more at risk for diabetes (which might make them more motivated to get involved) than those in the control group.</p>
<p>One solution is to analyze the outcomes for everyone randomized, regardless of whether or not they participate (as an <em>intent-to-treat</em> analysis). This estimate would answer the question about how effective the therapy would be in a setting where the intervention is made available; this intent-to-treat estimate does not say how effective the therapy is for the patients who actually choose to receive it. To answer this second question, some sort of <em>as-treated</em> analysis could be used. One analytic solution would be to use an instrumental variable approach. (I wrote about non-compliance in a series of posts starting <a href="https://www.rdatagen.net/post/cace-explored/">here</a>.)</p>
<p>However, we decided to address the issue of differential non-participation in the actual design of the study. In particular, we have modified the randomization process with the aim of eliminating any potential bias. The IV analysis is essentially a post-hoc matched analysis (it estimates the treatment effect only for the compliers - those randomized to treatment who actually participate in treatment); we hope to construct the groups <em>prospectively</em> to arrive at the same estimate.</p>
<div id="the-matching-strategy" class="section level2">
<h2>The matching strategy</h2>
<p>The idea is quite simple. We will generate a list of patients based on a recent pre-diabetes diagnosis. From that list, we will draw a single individual and then find a match from the remaining individuals. The match will be based on factors that the researchers think might be related to the outcome, such as age, gender, and one or two other relevant baseline measures. (If the number of matching characteristics grows too large, matching may turn out to be difficult.) If no match is found, the first individual is removed from the study. If a match is found, the first individual is assigned to the therapy group, and the second to the control group. Now we repeat the process, drawing another individual from the list (which excludes the first pair and any patients who have been unmatched), and finding a match. The process is repeated until everyone on the list has been matched or placed on the unmatched list.</p>
<p>After the pairs have been created, the research study coordinators reach out to the individuals who have been randomized to the therapy group in an effort to recruit participants. If a patient declines, she and her matched pair are removed from the study (i.e. their outcomes will not be included in the final analysis). The researchers will work their way down the list until enough people have been found to participate.</p>
<p>We try to eliminate the bias due to differential dropout by removing the matched patient every time a patient randomized to therapy declines to participate. We are making a key assumption here: the matched patient of someone who agrees to participate would have also agreed to participate. We are also assuming that the matching criteria are sufficient to predict participation. While we will not completely remove bias, it may be the best we can do given the baseline information we have about the patients. It would be ideal if we could ask both members of the pair if they would be willing to participate, and remove them both if one declines. However, in this particular study, this is not feasible.</p>
</div>
<div id="the-matching-algorithm" class="section level2">
<h2>The matching algorithm</h2>
<p>I implemented this algorithm on a sample data set that includes gender, age, and BMI, the three characteristics we want to match. The data is read directly into an <code>R</code> data.table <code>dsamp</code>. I’ve printed the first six rows:</p>
<pre class="r"><code>dsamp <- fread("DataMatchBias/eligList.csv")
setkey(dsamp, ID)
dsamp[1:6]</code></pre>
<pre><code>## ID female age BMI
## 1: 1 1 24 27.14
## 2: 2 0 29 31.98
## 3: 3 0 47 25.28
## 4: 4 0 40 24.27
## 5: 5 1 29 30.61
## 6: 6 1 38 25.69</code></pre>
<p>The loop below selects a single record from dsamp and searches for a match. If a match is found, the selected record is added to <code>drand</code> (randomized to therapy) and the match is added to <code>dcntl</code>. If no match is found, the single record is added to <code>dused</code>, and nothing is added to <code>drand</code> or <code>dcntl</code>. Anytime a record is added to any of the three data tables, it is removed from <code>dsamp</code>. This process continues until <code>dsamp</code> has one or no records remaining.</p>
<p>The actual matching is done by a call to function <code>Match</code> from the <code>Matching</code> package. This function is typically used to match a group of exposed to unexposed (or treated to untreated) individuals, often using a propensity score. In this case, we are matching simultaneously on the three columns in <code>dsamp</code>. Ideally, we would want to have exact matches, but this is unrealistic for continuous measures. So, for age and BMI, we set the matching range to be 0.5 standard deviations. (We do match exactly on gender.)</p>
<pre class="r"><code>library(Matching)

set.seed(3532)

dsamp[, rx := 0]

dused <- NULL
drand <- NULL
dcntl <- NULL

while (nrow(dsamp) > 1) {

  selectRow <- sample(1:nrow(dsamp), 1)
  dsamp[selectRow, rx := 1]

  myTr <- dsamp[, rx]
  myX <- as.matrix(dsamp[, .(female, age, BMI)])

  match.dt <- Match(Tr = myTr, X = myX,
                    caliper = c(0, 0.50, .50), ties = FALSE)

  if (length(match.dt) == 1) {   # no match found

    dused <- rbind(dused, dsamp[selectRow])
    dsamp <- dsamp[-selectRow, ]

  } else {                       # match found

    trt <- match.dt$index.treated
    ctl <- match.dt$index.control

    drand <- rbind(drand, dsamp[trt])
    dcntl <- rbind(dcntl, dsamp[ctl])

    dsamp <- dsamp[-c(trt, ctl)]
  }
}</code></pre>
</div>
<div id="matching-results" class="section level2">
<h2>Matching results</h2>
<p>Here is a plot of all the pairs that were generated (each connected by a blue segment), along with the individuals without a match (red circles). We could get shorter line segments if we reduced the caliper values, but we would certainly increase the number of unmatched patients.</p>
<p><img src="https://www.rdatagen.net/post/2019-03-12-a-case-where-prospecitve-matching-may-limit-bias.en_files/figure-html/unnamed-chunk-3-1.png" width="960" /></p>
<p>The distributions of the matching variables (or at least the means and standard deviations) appear quite close, as we can see by looking at the males and females separately.</p>
<div id="males" class="section level5">
<h5>Males</h5>
<pre><code>## rx N mu.age sd.age mu.bmi sd.bmi
## 1: 0 77 44.8 12.4 28.6 3.65
## 2: 1 77 44.6 12.4 28.6 3.71</code></pre>
</div>
<div id="females" class="section level5">
<h5>Females</h5>
<pre><code>## rx N mu.age sd.age mu.bmi sd.bmi
## 1: 0 94 47.8 11.1 29.7 4.63
## 2: 1 94 47.8 11.3 29.7 4.55</code></pre>
</div>
</div>
<div id="incorporating-the-design-into-the-analysis-plan" class="section level2">
<h2>Incorporating the design into the analysis plan</h2>
<p>The study - which is formally named <em>Integrated Community-Clinical Linkage Model to Promote Weight Loss among South Asians with Pre-Diabetes</em> - is still in its early stages, so no outcomes have been collected. But when it comes time to analyze the results, the models used to estimate the effect of the intervention will have to take into consideration two important design factors: (1) the individuals in the treatment and control groups are not independent, because they were assigned to their respective groups in pairs, and (2) the individuals in the treatment group will not be independent of each other, since the intervention is group-based; so this is a partially clustered randomized trial. In a future post, I will explore this model in a bit more detail.</p>
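<p>To give a flavor of what accounting for both design factors might look like, here is a purely illustrative sketch of a partially nested mixed-effects model. The variable names (<code>y</code>, <code>rx</code>, <code>pair</code>, <code>group</code>, and the data set <code>dout</code>) are hypothetical, and the actual analysis may well differ:</p>
<pre class="r"><code>library(lme4)

# illustrative sketch only - variables are hypothetical:
# (1 | pair) accounts for the matched-pair assignment, and
# (0 + rx | group) adds a therapy-group effect for treated patients only
# (control patients can share a single placeholder level of group,
# since rx = 0 zeroes out the group term for them)
fit <- lmer(y ~ rx + (1 | pair) + (0 + rx | group), data = dout)</code></pre>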
<p>
<small><font color="darkkhaki">This study is supported by the National Institutes of Health National Institute of Diabetes and Digestive and Kidney Diseases R01DK11048. The views expressed are those of the author and do not necessarily represent the official position of the funding organizations.</font></small>
</p>
</div>
An example in causal inference designed to frustrate: an estimate pretty much guaranteed to be biased
https://www.rdatagen.net/post/dags-colliders-and-an-example-of-variance-bias-tradeoff/
Tue, 26 Feb 2019 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/dags-colliders-and-an-example-of-variance-bias-tradeoff/<p>I am putting together a brief lecture introducing causal inference for graduate students studying biostatistics. As part of this lecture, I thought it would be helpful to spend a little time describing directed acyclic graphs (DAGs), since they are an extremely helpful tool for communicating assumptions about the causal relationships underlying a researcher’s data.</p>
<p>The strength of DAGs is that they help us think how these underlying relationships in the data might lead to biases in causal effect estimation, and suggest ways to estimate causal effects that eliminate these biases. (For a real introduction to DAGs, you could take a look at this <a href="http://ftp.cs.ucla.edu/pub/stat_ser/r251.pdf">paper</a> by <em>Greenland</em>, <em>Pearl</em>, and <em>Robins</em> or better yet take a look at Part I of this <a href="https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/2015/">book</a> on causal inference by <em>Hernán</em> and <em>Robins</em>.)</p>
<p>As part of this lecture, I plan on including a (frustrating) example that illustrates a scenario where it may in fact be impossible to get an unbiased estimate of the causal effect of interest based on the data that has been collected. I thought I would share this little example here.</p>
<div id="the-scenario" class="section level2">
<h2>The scenario</h2>
<p>In the graph below we are interested in the causal effect of <span class="math inline">\(A\)</span> on an outcome <span class="math inline">\(Y\)</span>. We have also measured a covariate <span class="math inline">\(L\)</span>, thinking it might be related to some unmeasured confounder (in this case <span class="math inline">\(U_2\)</span>). Furthermore, there is another unmeasured variable <span class="math inline">\(U_1\)</span> unrelated to <span class="math inline">\(A\)</span>, but related to the measure <span class="math inline">\(L\)</span> and outcome <span class="math inline">\(Y\)</span>. These relationships are captured in this DAG:</p>
<p><img src="https://www.rdatagen.net/img/post-dag/firstDAG.png" /></p>
<p>It may help to be a bit more concrete about what these variables might represent. Say we are conducting an epidemiological study focused on whether or not exercise between the age of 50 and 60 has an effect on hypertension after 60. (So, <span class="math inline">\(A\)</span> is exercise and <span class="math inline">\(Y\)</span> is a measure of hypertension.) We are concerned that there might be confounding by some latent (unmeasured) factor related to an individual’s conscientiousness about their health; those who are more conscientious may exercise more, but they will also do other things to improve their health. In this case, we are able to measure whether or not the individual has a healthy diet (<span class="math inline">\(L\)</span>), and we hope that will address the issue of confounding. (Note we are making the assumption that conscientiousness is related to hypertension only through exercise or diet, probably not very realistic.)</p>
<p>But it turns out that an individual’s diet is also partly determined by where the individual lives; that is, characteristics of the area may play a role. Unfortunately, the location of the individual (or characteristics of the location) was not measured (<span class="math inline">\(U_1\)</span>). These same characteristics also affect location-specific hypertension levels.</p>
<p>Inspecting the original DAG, we see that <span class="math inline">\(U_2\)</span> is indeed confounding the relationship between <span class="math inline">\(A\)</span> and <span class="math inline">\(Y\)</span>. There is a back-door path <span class="math inline">\(A \leftarrow U_2 \rightarrow L \rightarrow Y\)</span> that needs to be blocked. We cannot just ignore this path. If we generate data and estimate the effect of <span class="math inline">\(A\)</span> on <span class="math inline">\(Y\)</span>, we will see that the estimate is quite biased. First, we generate data based on the DAG, assuming <span class="math inline">\(L\)</span> and <span class="math inline">\(A\)</span> are binary, and <span class="math inline">\(Y\)</span> is continuous (though this is by no means necessary):</p>
<pre class="r"><code>d <- defData(varname = "U1", formula = 0.5,
             dist = "binary")
d <- defData(d, varname = "U2", formula = 0.4,
             dist = "binary")
d <- defData(d, varname = "L", formula = "-1.6 + 1 * U1 + 1 * U2",
             dist = "binary", link = "logit")
d <- defData(d, varname = "A", formula = "-1.5 + 1.2 * U2",
             dist = "binary", link = "logit")
d <- defData(d, varname = "Y", formula = "0 + 1 * U1 + 1 * L + 0.5 * A",
             variance = .5, dist = "normal")

set.seed(20190226)
dd <- genData(2500, d)
dd</code></pre>
<pre><code>## id U1 U2 L A Y
## 1: 1 0 1 1 1 1.13
## 2: 2 0 0 1 0 1.31
## 3: 3 1 0 0 0 1.20
## 4: 4 0 1 1 0 1.04
## 5: 5 0 0 0 0 -0.67
## ---
## 2496: 2496 0 0 0 0 0.29
## 2497: 2497 0 0 0 0 -0.24
## 2498: 2498 1 0 1 0 1.32
## 2499: 2499 1 1 1 1 3.44
## 2500: 2500 0 0 0 0 -0.78</code></pre>
<p>And here is the unadjusted model. The effect of <span class="math inline">\(A\)</span> is overestimated (the true effect is 0.5):</p>
<pre class="r"><code>broom::tidy(lm(Y ~ A, data = dd))</code></pre>
<pre><code>## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 0.826 0.0243 34.0 2.54e-208
## 2 A 0.570 0.0473 12.0 1.53e- 32</code></pre>
</div>
<div id="adjusting-for-a-potential-confounder-that-is-also-a-collider" class="section level2">
<h2>Adjusting for a potential confounder that is also a collider</h2>
<p>While we are not able to measure <span class="math inline">\(U_2\)</span>, we have observed <span class="math inline">\(L\)</span>. We might think we are OK. But, alas, we are not. If we control for diet (<span class="math inline">\(L\)</span>), we are controlling for a “collider”, which will open up an association between <span class="math inline">\(U_1\)</span> and <span class="math inline">\(U_2\)</span>. (I wrote about this before <a href="https://www.rdatagen.net/post/another-reason-to-be-careful-about-what-you-control-for/">here</a>.)</p>
<p><img src="https://www.rdatagen.net/img/post-dag/firstDAGcontrol1.png" /></p>
<p>The idea is that if I have a healthy diet but I am not particularly conscientious about my health, I probably live in an area that encourages or provides access to better food. Therefore, conditioning on diet induces a (negative, in this case) correlation between location type and health conscientiousness. So, by controlling for <span class="math inline">\(L\)</span> we’ve created a back-door path <span class="math inline">\(A \leftarrow U_2 \leftrightarrow U_1 \rightarrow Y\)</span>. Confounding remains, though it may be reduced considerably if the induced link between <span class="math inline">\(U_2\)</span> and <span class="math inline">\(U_1\)</span> is relatively weak.</p>
<pre class="r"><code>broom::tidy(lm(Y ~ L + A, data = dd))</code></pre>
<pre><code>## # A tibble: 3 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 0.402 0.0231 17.4 6.58e- 64
## 2 L 1.26 0.0356 35.6 2.10e-224
## 3 A 0.464 0.0386 12.0 2.46e- 32</code></pre>
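<p>For comparison (this single-sample fit is not shown in the original post), we can also fit the model that adjusts for <span class="math inline">\(U_2\)</span> directly - possible here only because we simulated it; in practice <span class="math inline">\(U_2\)</span> would be unmeasured. This is the same comparison model used in the replications in the next section:</p>
<pre class="r"><code># adjusting for the true (but in practice unmeasured) confounder
broom::tidy(lm(Y ~ U2 + A, data = dd))</code></pre>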
</div>
<div id="more-systematic-exploration-of-bias-and-variance-of-estimates" class="section level2">
<h2>More systematic exploration of bias and variance of estimates</h2>
<p>If we repeatedly generate samples (this time of size 500), we get a much better picture of the consequences of using different models to estimate the causal effect. The function below generates the data (using the same definitions as before), and then estimates three different models: (1) no adjustment, (2) incorrect adjustment for <span class="math inline">\(L\)</span>, the confounder/collider, and (3) the correct adjustment for the unmeasured confounder <span class="math inline">\(U_2\)</span> (possible only because we simulated it), which should be unbiased. The function returns the three estimates of the causal effect of <span class="math inline">\(A\)</span>:</p>
<pre class="r"><code>repFunc <- function(n, def) {

  dd <- genData(n, def)

  c1 <- coef(lm(Y ~ A, data = dd))["A"]
  c2 <- coef(lm(Y ~ L + A, data = dd))["A"]
  c3 <- coef(lm(Y ~ U2 + A, data = dd))["A"]

  return(data.table(c1, c2, c3))
}</code></pre>
<p>The following code generates 2500 replications of the “experiment” and stores the final results in data.table <code>rdd</code>:</p>
<pre class="r"><code>RNGkind("L'Ecuyer-CMRG")  # to set seed for parallel process

reps <- parallel::mclapply(1:2500,
                           function(x) repFunc(500, d),
                           mc.set.seed = TRUE)

rdd <- rbindlist(reps)
rdd[, rep := .I]
rdd</code></pre>
<pre><code>## c1 c2 c3 rep
## 1: 0.46 0.45 0.40 1
## 2: 0.56 0.45 0.41 2
## 3: 0.59 0.46 0.50 3
## 4: 0.74 0.68 0.61 4
## 5: 0.45 0.43 0.41 5
## ---
## 2496: 0.42 0.42 0.37 2496
## 2497: 0.57 0.54 0.53 2497
## 2498: 0.56 0.49 0.51 2498
## 2499: 0.53 0.45 0.43 2499
## 2500: 0.73 0.63 0.69 2500</code></pre>
<pre class="r"><code>rdd[, .(mean(c1 - 0.5), mean(c2 - 0.5), mean(c3-0.5))]</code></pre>
<pre><code>## V1 V2 V3
## 1: 0.062 -0.015 -0.0016</code></pre>
<pre class="r"><code>rdd[, .(var(c1), var(c2), var(c3))]</code></pre>
<pre><code>## V1 V2 V3
## 1: 0.011 0.0074 0.012</code></pre>
<p>As expected, the first two models are biased, whereas the third is not. Under these parameter and distribution assumptions, the variance of the causal effect estimate is larger for the unbiased estimate than for the model that incorrectly adjusts for diet (<span class="math inline">\(L\)</span>). So, we seem to have a bias/variance trade-off. In other cases, where we have a binary outcome <span class="math inline">\(Y\)</span> or continuous exposures, this trade-off may be more or less extreme.</p>
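<p>One way to summarize the trade-off in a single number (an addition of mine, not part of the original post) is the mean squared error across the replications, which combines squared bias and variance:</p>
<pre class="r"><code># MSE = bias^2 + variance, estimated across the 2500 replications
rdd[, .(mse1 = mean((c1 - 0.5)^2),
        mse2 = mean((c2 - 0.5)^2),
        mse3 = mean((c3 - 0.5)^2))]</code></pre>
<p>Based on the bias and variance estimates reported above, the incorrectly adjusted model actually ends up with the smallest MSE here - the trade-off in action.</p>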
<p>Here, we end with a look at the estimates, with a dashed line indicating the true causal effect of <span class="math inline">\(A\)</span> on <span class="math inline">\(Y\)</span>:</p>
<p><img src="https://www.rdatagen.net/post/2019-02-26-dags-colliders-and-an-example-of-variance-bias-tradeoff.en_files/figure-html/unnamed-chunk-8-1.png" width="672" /></p>
</div>
Using the uniform sum distribution to introduce probability
https://www.rdatagen.net/post/a-fun-example-to-explore-probability/
Tue, 05 Feb 2019 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/a-fun-example-to-explore-probability/<p>I’ve never taught an intro probability/statistics course. If I ever did, I would certainly want to bring the underlying wonder of the subject to life. I’ve always found it almost magical the way mathematical formulation can be mirrored by computer simulation, the way proof can be guided by observed data generation processes, and the way DGPs can confirm analytic solutions.</p>
<p>I would like to begin such a course with a somewhat unusual but accessible problem that would evoke these themes from the start. The concepts would not necessarily be immediately comprehensible, but rather would pique the interest of the students.</p>
<p>I recently picked up a copy of John Allen Paulos’ fun sort-of-memoir <a href="https://www.goodreads.com/book/show/24940376-a-numerate-life"><em>A Numerate Life</em></a>, and he reminded me of an interesting problem that might provide a good starting point for my imaginary course. The problem is great, because it is easy to understand, but challenging enough to raise some interesting issues. In this post, I sketch out a set of simulations and mathematical derivations that would motivate the ideas of marginal and conditional probability distributions.</p>
<div id="the-problem" class="section level3">
<h3>The problem</h3>
<p>Say we make repeated draws of independent uniform variables between 0 and 1, and add them up as we go along. The question is, on average, how many draws do we need to make so that the cumulative sum is greater than 1? We definitely need to make at least 2 draws, but it is possible (though almost certainly not the case) that we won’t get to 1 with even 100 draws. It turns out that if we did this experiment over and over, the average number of draws would approach <span class="math inline">\(exp(1) = 2.718\)</span>. Can we prove this and confirm by simulation?</p>
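<p>Before any formal derivation, a quick brute-force check (mine, not from the original post; a more careful simulation of the full distribution comes later) makes the claim plausible:</p>
<pre class="r"><code>set.seed(2718)

draws <- replicate(100000, {
  total <- 0
  n <- 0
  while (total <= 1) {     # keep drawing until the sum exceeds 1
    total <- total + runif(1)
    n <- n + 1
  }
  n
})

mean(draws)                # should be close to exp(1) = 2.718</code></pre>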
</div>
<div id="re-formulating-the-question-more-formally" class="section level3">
<h3>Re-formulating the question more formally</h3>
<p>If <span class="math inline">\(U_k\)</span> is one draw of a uniform random variable (where <span class="math inline">\(k \in (1, 2, ...)\)</span>), and <span class="math inline">\(N\)</span> represents the number of draws, the probability of <span class="math inline">\(N\)</span> taking on a specific value (say <span class="math inline">\(n\)</span>) can be characterized like this:</p>
<p><span class="math display">\[
\footnotesize{P(N = n) = P\left( \; \sum_{k=1}^{n-1} {U_k} < 1 \; \& \; \sum_{k=1}^{n} {U_k} > 1\right)}
\]</span></p>
<p>Now, we need to understand this a little better to figure out how to characterize the distribution of <span class="math inline">\(N\)</span>, which is what we are really interested in.</p>
</div>
<div id="cdf" class="section level3">
<h3>CDF</h3>
<p>This is where I would begin to describe the concept of a probability distribution, starting with the cumulative distribution function (CDF) <span class="math inline">\(P \left( \; \sum_{k=1}^n {U_k} < 1 \right)\)</span> for a fixed <span class="math inline">\(n\)</span>. It turns out that the CDF for this distribution (which actually has at least two names - <em>Irwin-Hall distribution</em> or <em>uniform sum distribution</em>) can be computed with the following equation. (I certainly wouldn’t be bold enough to attempt to derive this formula in an introductory class):</p>
<p><span class="math display">\[
\footnotesize {
P\left(\sum_{k=1}^{n} {U_k} < x \right) = \frac{1}{n!} \sum_{k=0}^{\lfloor x \rfloor} (-1)^k {n \choose k} (x - k)^n
}
\]</span></p>
<p>Our first simulation would confirm that this specification of the CDF is correct. Before doing the simulation, we create an <code>R</code> function to calculate the theoretical cumulative probability for the range of <span class="math inline">\(x\)</span>.</p>
<pre class="r"><code>psumunif <- function(x, n) {
k <- c(0:floor(x))
(1/factorial(n)) * sum( (-1)^k * choose(n, k) * (x - k)^n )
}
ddtheory <- data.table(x = seq(0, 3, by = .1))
ddtheory[, cump := psumunif(x, 3), keyby = x]
ddtheory[x %in% seq(0, 3, by = .5)]</code></pre>
<pre><code>## x cump
## 1: 0.0 0.000
## 2: 0.5 0.021
## 3: 1.0 0.167
## 4: 1.5 0.500
## 5: 2.0 0.833
## 6: 2.5 0.979
## 7: 3.0 1.000</code></pre>
<p>Now, we generate some actual data. In this case, we are assuming three draws for each experiment, and we are “conducting” 200 different experiments. We generate uncorrelated uniform data for each experiment:</p>
<pre class="r"><code>library(simstudy)

set.seed(02012019)

dd <- genCorGen(n = 200, nvars = 3, params1 = 0, params2 = 1,
                dist = "uniform", rho = 0.0, corstr = "cs", cnames = "u")
dd</code></pre>
<pre><code>## id period u
## 1: 1 0 0.98
## 2: 1 1 0.44
## 3: 1 2 0.58
## 4: 2 0 0.93
## 5: 2 1 0.80
## ---
## 596: 199 1 0.44
## 597: 199 2 0.77
## 598: 200 0 0.21
## 599: 200 1 0.45
## 600: 200 2 0.83</code></pre>
<p>For each experiment, we calculate the sum of the three draws, so that we have a data set with 200 observations:</p>
<pre class="r"><code>dsum <- dd[, .(x = sum(u)), keyby = id]
dsum</code></pre>
<pre><code>## id x
## 1: 1 2.0
## 2: 2 1.8
## 3: 3 1.6
## 4: 4 1.4
## 5: 5 1.6
## ---
## 196: 196 1.7
## 197: 197 1.3
## 198: 198 1.4
## 199: 199 1.7
## 200: 200 1.5</code></pre>
<p>We can plot the theoretical CDF versus the observed empirical CDF:</p>
<pre class="r"><code>ggplot(dsum, aes(x)) +
  stat_ecdf(geom = "step", color = "black") +
  geom_line(data = ddtheory, aes(x = x, y = cump), color = "red") +
  scale_x_continuous(limits = c(0, 3)) +
  ylab("cumulative probability") +
  theme(panel.grid.minor = element_blank(),
        axis.ticks = element_blank())</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-02-05-a-fun-example-to-explore-probability_files/figure-html/unnamed-chunk-5-1.png" width="672" /></p>
<p>And here is another pair of curves using a set of experiments with only two draws for each experiment:</p>
<p><img src="https://www.rdatagen.net/post/2019-02-05-a-fun-example-to-explore-probability_files/figure-html/unnamed-chunk-6-1.png" width="672" /></p>
</div>
<div id="more-specifically-exploring-psum-1" class="section level3">
<h3>More specifically, exploring P(sum < 1)</h3>
<p>The problem at hand specifically asks us to evaluate <span class="math inline">\(\footnotesize{P\left(\sum_{k=1}^{n} {U_k} < 1 \right)}\)</span>. So this will be our first algebraic manipulation to derive the probability in terms of <span class="math inline">\(n\)</span>, starting with the analytic solution for the <em>CDF</em> I introduced above without derivation:</p>
<p><span class="math display">\[
\footnotesize{
\begin{aligned}
P\left(\sum_{k=1}^{n} {U_k} < 1 \right) &= \frac{1}{n!} \sum_{k=0}^1 (-1)^k {n \choose k} (1 - k)^n \\
\\
&= \frac{1}{n!} \left [ (-1)^0 {n \choose 0} (1 - 0)^n + (-1)^1 {n \choose 1} (1 - 1)^n \right] \\
\\
&= \frac{1}{n!} \left [ 1 + 0 \right] \\
\\
&= \frac{1}{n!}
\end{aligned}
}
\]</span></p>
<p>We can look back at the plots to confirm that this solution is matched by the theoretical and empirical CDFs. For <span class="math inline">\(n=3\)</span>, we expect <span class="math inline">\(\footnotesize{P\left(\sum_{k=1}^{3} {U_k} < 1 \right)} = \frac{1}{3!} = 0.167\)</span>. And for <span class="math inline">\(n=2\)</span>, the expected probability is <span class="math inline">\(\frac{1}{2}\)</span>.</p>
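<p>We can also check this result numerically with the <code>psumunif</code> function defined earlier:</p>
<pre class="r"><code>round(c(psumunif(1, 2), psumunif(1, 3)), 3)</code></pre>
<pre><code>## [1] 0.500 0.167</code></pre>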
</div>
<div id="deriving-pn" class="section level3">
<h3>Deriving <span class="math inline">\(P(N)\)</span></h3>
<p>I think that up to this point, things would be generally pretty accessible to a group of students thinking about these things for the first time. This next step, deriving <span class="math inline">\(P(N)\)</span>, might present more of a challenge, because we have to deal with joint probabilities as well as conditional probabilities. While I don’t do so here, I think in a classroom setting I would delve more into the simulated data to illustrate each type of probability. The joint probability is merely a probability of multiple events occurring simultaneously. And the conditional probability is a probability of an event for a subset of the data (the subset defined by the group of observations where another - conditional - event actually happened). Once those concepts were explained a bit, I would need a little courage to walk through the derivation. However, I think it would be worth it, because moving through each step highlights an important concept.</p>
<p><span class="math inline">\(N\)</span> is a new random variable that takes on the value <span class="math inline">\(n\)</span> if <span class="math inline">\(\sum_{k=1}^{n-1} {U_k} < 1\)</span> <em>and</em> <span class="math inline">\(\sum_{k=1}^{n} {U_k} > 1\)</span>. That is, the <span class="math inline">\(n{\text{th}}\)</span> value in a sequence of uniform random variables <span class="math inline">\(\left [ U(0,1) \right]\)</span> is the one where the cumulative sum first exceeds 1. <span class="math inline">\(P(N=n)\)</span> can be derived as follows:</p>
<p><span class="math display">\[
\footnotesize {
\begin{aligned}
P(N = n) &= P\left( \; \sum_{k=1}^{n-1} {U_k} < 1 \; \& \; \sum_{k=1}^{n} {U_k} > 1\right) \\
\\
&= P\left( \; \sum_{k=1}^{n} {U_k} > 1 \; \middle | \;\sum_{k=1}^{n-1} {U_k} < 1 \right) P\left( \; \sum_{k=1}^{n-1} {U_k} < 1 \right) \\
\\
&= \frac{1}{(n-1)!}P\left( \; \sum_{k=1}^{n} {U_k} > 1 \; \middle | \;\sum_{k=1}^{n-1} {U_k} < 1 \right) \\
\\
&= \frac{1}{(n-1)!}\left [ 1 - P\left( \; \sum_{k=1}^{n} {U_k} < 1 \; \middle | \;\sum_{k=1}^{n-1} {U_k} < 1 \right) \right ] \\
\\
&= \frac{1}{(n-1)!}\left [ 1 - \frac{P\left( \; \sum_{k=1}^{n} {U_k} < 1 \; \& \;\sum_{k=1}^{n-1} {U_k} < 1 \right)}{P\left( \; \sum_{k=1}^{n-1} {U_k} < 1 \right)} \right ] \\
\\
&= \frac{1}{(n-1)!}\left [ 1 - \frac{P\left( \; \sum_{k=1}^{n} {U_k} < 1 \; \right)}{P\left( \; \sum_{k=1}^{n-1} {U_k} < 1 \right)} \right ] \\
\\
&= \frac{1}{(n-1)!}\left [ 1 - \frac{1/n!}{1/(n-1)!} \right ]
\end{aligned}
}
\]</span></p>
<p>Now, once we get to this point, it is just algebraic manipulation to get to the final formulation. It is a pet peeve of mine when papers say that it is quite easily shown that some formula can be simplified into another without showing it; sometimes, it is not so simple. In this case, however, I actually think it is. So, here are the few remaining steps, ending with the final solution for the probability:</p>
<p><span class="math display">\[
\footnotesize {
\begin{aligned}
P(N = n) &= \frac{1}{(n-1)!}\left [ 1 - \frac{(n-1)!}{n!} \right ] \\
\\
&= \frac{1}{(n-1)!}\left [ 1 - \frac{1}{n} \right ] \\
\\
&= \frac{1}{(n-1)!} \left ( \frac{n-1}{n} \right ) \\
\\
&= \frac{n-1}{n!}
\end{aligned}
}
\]</span></p>
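<p>The definition of <span class="math inline">\(N\)</span> also lends itself to a very direct illustration in base <code>R</code> (a minimal sketch - the function name <code>firstOver1</code> is mine): draw uniforms one at a time until the cumulative sum exceeds 1, and record how many draws it took.</p>
<pre class="r"><code>firstOver1 <- function() {
  csum <- 0
  n <- 0
  while (csum < 1) {   # keep drawing until the running sum exceeds 1
    csum <- csum + runif(1)
    n <- n + 1
  }
  n                    # the number of draws required is one realization of N
}
set.seed(123)
round(prop.table(table(replicate(10000, firstOver1()))), 3)</code></pre>
<p>The estimated proportions should line up with <span class="math inline">\((n-1)/n!\)</span> for each <span class="math inline">\(n\)</span>.</p>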
</div>
<div id="simulating-the-distribution-of-n" class="section level3">
<h3>Simulating the distribution of <span class="math inline">\(N\)</span></h3>
<p>We are almost there. To simulate <span class="math inline">\(P(N=n)\)</span>, we generate 1000 iterations of 7 draws. For each iteration, we check to see which draw pushes the cumulative sum over 1, and this is the observed value of <span class="math inline">\(N\)</span> for each iteration. Even though <span class="math inline">\(N\)</span> can conceivably be quite large, we stop at 7, since the probability of observing <span class="math inline">\(n=8\)</span> is vanishingly small, <span class="math inline">\(P(N=8) < 0.0002\)</span>.</p>
<pre class="r"><code>dd <- genCorGen(n = 1000, nvars = 7, params1 = 0, params2 = 1,
dist = "uniform", rho = 0.0, corstr = "cs", cnames = "U")
dd[, csum := cumsum(U), keyby = id]
dd[, under := 1*(csum < 1)]
dc <- dcast(dd, id ~ (period + 1), value.var = c("under", "csum" ))
dc[, n := 2 + sum(under_2, under_3, under_4, under_5, under_6, under_7),
keyby = id]
dc[, .(id, n, csum_1, csum_2, csum_3, csum_4, csum_5, csum_6, csum_7)]</code></pre>
<pre><code>## id n csum_1 csum_2 csum_3 csum_4 csum_5 csum_6 csum_7
## 1: 1 2 0.39 1.13 1.41 1.94 2.4 3.0 3.4
## 2: 2 3 0.31 0.75 1.25 1.74 2.0 2.5 3.0
## 3: 3 2 0.67 1.61 1.98 2.06 2.1 2.6 2.8
## 4: 4 3 0.45 0.83 1.15 1.69 2.6 2.7 3.7
## 5: 5 3 0.27 0.81 1.81 2.46 2.9 3.5 3.9
## ---
## 996: 996 2 0.64 1.26 2.24 2.70 3.1 4.0 4.6
## 997: 997 4 0.06 0.80 0.81 1.05 1.2 1.6 1.8
## 998: 998 5 0.32 0.53 0.71 0.73 1.2 1.2 1.2
## 999: 999 2 0.91 1.02 1.49 1.75 2.4 2.4 2.5
## 1000: 1000 2 0.87 1.10 1.91 2.89 3.3 4.1 4.4</code></pre>
<p>And this is what the data look like. On the left is the cumulative sum of each iteration (color coded by the threshold value), and on the right is the probability for each level of <span class="math inline">\(n\)</span>.</p>
<p><img src="https://www.rdatagen.net/post/2019-02-05-a-fun-example-to-explore-probability_files/figure-html/unnamed-chunk-8-1.png" width="672" /></p>
<p>Here are the observed and expected probabilities:</p>
<pre class="r"><code>expProbN <- function(n) {
(n-1)/(factorial(n))
}
rbind(prop.table(dc[, table(n)]), # observed
expProbN(2:7) # expected
)</code></pre>
<pre><code>## 2 3 4 5 6 7
## [1,] 0.49 0.33 0.14 0.039 0.0100 0.0010
## [2,] 0.50 0.33 0.12 0.033 0.0069 0.0012</code></pre>
</div>
<div id="expected-value-of-n" class="section level3">
<h3>Expected value of N</h3>
<p>The final piece of the puzzle requires a brief introduction to expected value, which for a discrete outcome (which is what we are dealing with, even though the underlying process is a sum of potentially infinite continuous outcomes) is <span class="math inline">\(\sum_{n=0}^\infty \; n\times P(n)\)</span>:</p>
<p><span class="math display">\[
\footnotesize{
\begin{aligned}
E[N] &= \sum_{n=1}^{\infty} \; n P(n) \\
\\
&= \sum_{n=1}^{\infty} \; n \left ( \frac{n-1}{n!} \right) \\
\\
&= \sum_{n=2}^{\infty} \; n \left ( \frac{n-1}{n!} \right) \\
\\
&= \sum_{n=2}^{\infty} \; \frac{1}{(n-2)!} \\
\\
&= \sum_{n=0}^{\infty} \; \frac{1}{n!} \\
\\
&= \sum_{n=0}^{\infty} \; \frac{1^n}{n!} \\
\\
E[N] &= e \;\; \left( \text{since } \sum_{n=0}^{\infty} \; \frac{a^n}{n!} = e^a \right)
\end{aligned}
}
\]</span></p>
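<p>Before turning back to the simulated data, the series itself can be checked numerically; the partial sums of <span class="math inline">\(n(n-1)/n!\)</span> converge to <span class="math inline">\(e\)</span> remarkably quickly (another small base <code>R</code> sketch):</p>
<pre class="r"><code># partial sums of E[N] = sum over n of n * (n-1)/n!
n <- 2:10
round(cumsum(n * (n - 1) / factorial(n)), 5) # approaches exp(1)
round(exp(1), 5)</code></pre>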
<p>We are now in a position to see if our observed average is what is predicted by theory:</p>
<pre class="r"><code>c(observed = dc[, mean(n)], expected = exp(1))</code></pre>
<pre><code>## observed expected
## 2.8 2.7</code></pre>
<p>I am assuming that all the students in the class will think this is pretty cool when they see this final result. And that will provide motivation to really learn all of these concepts (and more) over the subsequent weeks of the course.</p>
<p>One final note: I have avoided any discussion of uncertainty or variability in all of this, which is obviously a key omission, and something I would need to address if I really had an opportunity to do something like this in a class. However, simulation provides ample opportunity to introduce that as well, so I am sure I could figure out a way to weave that in. Or maybe that could be the second class, though I probably won’t do a follow-up post.</p>
</div>
Correlated longitudinal data with varying time intervals
https://www.rdatagen.net/post/correlated-longitudinal-data-with-varying-time-intervals/
Tue, 22 Jan 2019 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/correlated-longitudinal-data-with-varying-time-intervals/<p>I was recently contacted to see if <code>simstudy</code> can create a data set of correlated outcomes that are measured over time, but at different intervals for each individual. The quick answer is there is no specific function to do this. However, if you are willing to assume an “exchangeable” correlation structure, where measurements far apart in time are just as correlated as measurements taken close together, then you could just generate individual-level random effects (intercepts and/or slopes) and pretty much call it a day. Unfortunately, the researcher had something more challenging in mind: he wanted to generate auto-regressive correlation, so that proximal measurements are more strongly correlated than distal measurements.</p>
<p>As is always the case with <code>R</code>, there are certainly multiple ways to tackle this problem. I came up with this particular solution, which I thought I’d share. The idea is pretty simple: first, generate the time data with varying intervals, which <em>can</em> be done using <code>simstudy</code>; second, create an alternate data set of “latent” observations that include all time points, also doable with <code>simstudy</code>; last, merge the two in a way that gives you what you want.</p>
<div id="step-1-varying-time-intervals" class="section level3">
<h3>Step 1: varying time intervals</h3>
<p>The function <code>addPeriods</code> can create intervals of varying lengths. The function determines if the input data set includes the special fields <code>mInterval</code> and <code>vInterval</code>. If so, a <code>time</code> value is generated from a gamma distribution with mean <code>mInterval</code> and dispersion <code>vInterval</code>.</p>
<pre class="r"><code>maxTime <- 180 # limit follow-up time to 180 days
def1 <- defData(varname = "nCount", dist = "noZeroPoisson",
formula = 20)
def1 <- defData(def1, varname = "mInterval", dist = "nonrandom",
formula = 20)
def1 <- defData(def1, varname = "vInterval", dist = "nonrandom",
formula = 0.4)
set.seed(20190101)
dt <- genData(1000, def1)
dtPeriod <- addPeriods(dt)
dtPeriod <- dtPeriod[time <= maxTime]</code></pre>
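<p>As a quick sanity check of the interval generation (a base <code>R</code> sketch, assuming I have the <code>simstudy</code> gamma parameterization right - mean <span class="math inline">\(\mu\)</span> and dispersion <span class="math inline">\(\nu\)</span> correspond to shape <span class="math inline">\(1/\nu\)</span> and scale <span class="math inline">\(\mu\nu\)</span>, so the variance is <span class="math inline">\(\nu\mu^2\)</span>):</p>
<pre class="r"><code># gamma intervals with mean 20 and dispersion 0.4
gaps <- rgamma(100000, shape = 1 / 0.4, scale = 20 * 0.4)
c(mean = mean(gaps), var = var(gaps)) # mean near 20, variance near 0.4 * 20^2</code></pre>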
<p>Here is a plot of the time intervals for a small sample of the data set:</p>
<p><img src="https://www.rdatagen.net/post/2019-01-22-correlated-longitudinal-data-with-varying-time-intervals_files/figure-html/unnamed-chunk-3-1.png" width="672" /></p>
</div>
<div id="step-2-generate-correlated-data" class="section level3">
<h3>Step 2: generate correlated data</h3>
<p>In this step, I am creating 181 records for each individual (from period = 0 to period = 180). In order to create correlated data, I need to specify the mean and variance for each observation; in this example, the mean is a quadratic function of <code>time</code> and the variance is fixed at 9. I generate the correlated data using the <code>addCorGen</code> function, and specify an <em>AR-1</em> correlation structure with <span class="math inline">\(\rho = 0.4\)</span>.</p>
<pre class="r"><code>def2 <- defDataAdd(varname = "mu", dist = "nonrandom",
formula = "2 + (1/500) * (time) * (180 - time)")
def2 <- defDataAdd(def2, varname = "var", dist = "nonrandom", formula = 9)
dtY <- genData(1000)
dtY <- addPeriods(dtY, nPeriod = (maxTime + 1) )
setnames(dtY, "period", "time")
dtY <- addColumns(def2, dtY)
dtY <- addCorGen(dtOld = dtY, idvar = "id", nvars = (maxTime + 1),
rho = .4, corstr = "ar1", dist = "normal",
param1 = "mu", param2 = "var", cnames = "Y")
dtY[, `:=`(timeID = NULL, var = NULL, mu = NULL)]</code></pre>
<p>Here is a plot of a sample of individuals that shows the values of <span class="math inline">\(Y\)</span> at every single time point (not just the time points generated in step 1). The <span class="math inline">\(Y\)</span>’s are correlated within individual.</p>
<p><img src="https://www.rdatagen.net/post/2019-01-22-correlated-longitudinal-data-with-varying-time-intervals_files/figure-html/unnamed-chunk-5-1.png" width="672" /></p>
</div>
<div id="step-3" class="section level3">
<h3>Step 3</h3>
<p>Now we just do an inner-join, or perhaps it is a left join - hard to tell, because one data set is a subset of the other. In any case, the new data set includes all the rows from step 1 and the ones that match from step 2.</p>
<pre class="r"><code>setkey(dtY, id, time)
setkey(dtPeriod, id, time)
finalDT <- mergeData(dtY, dtPeriod, idvars = c("id", "time"))</code></pre>
<p>Here is a plot of the observed data for a sample of individuals:</p>
<p><img src="https://www.rdatagen.net/post/2019-01-22-correlated-longitudinal-data-with-varying-time-intervals_files/figure-html/unnamed-chunk-7-1.png" width="672" /></p>
</div>
<div id="addendum" class="section level3">
<h3>Addendum</h3>
<p>To verify that the data are indeed correlated with an <em>AR-1</em> structure, I first convert the complete (latent) data from step 2 from its <em>long</em> format to a <em>wide</em> format. The correlation is calculated from this <span class="math inline">\(1000 \times 181\)</span> matrix, where each row is an individual and each column is a value of <span class="math inline">\(Y\)</span> at a different time point. And since the correlation matrix, which has dimensions <span class="math inline">\(181 \times 181\)</span>, is too big to show, what you see is only the upper left hand corner of the matrix:</p>
<pre class="r"><code>round(cor(as.matrix(dcast(dtY, id ~ time,
value.var = "Y")[, -1]))[1:13, 1:13], 1)</code></pre>
<pre><code>## 0 1 2 3 4 5 6 7 8 9 10 11 12
## 0 1.0 0.4 0.2 0.1 0.0 0.0 0.1 0.0 0.0 0.0 0.1 0.0 0.0
## 1 0.4 1.0 0.4 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
## 2 0.2 0.4 1.0 0.4 0.2 0.1 0.0 0.0 0.1 0.0 0.0 0.0 0.0
## 3 0.1 0.1 0.4 1.0 0.4 0.2 0.1 0.0 0.0 0.0 0.0 0.0 0.1
## 4 0.0 0.0 0.2 0.4 1.0 0.4 0.2 0.1 0.0 0.1 0.0 0.0 0.0
## 5 0.0 0.0 0.1 0.2 0.4 1.0 0.4 0.2 0.1 0.0 0.0 0.0 0.0
## 6 0.1 0.0 0.0 0.1 0.2 0.4 1.0 0.4 0.2 0.0 0.0 0.0 0.0
## 7 0.0 0.0 0.0 0.0 0.1 0.2 0.4 1.0 0.4 0.2 0.1 0.1 0.0
## 8 0.0 0.0 0.1 0.0 0.0 0.1 0.2 0.4 1.0 0.4 0.1 0.0 0.0
## 9 0.0 0.0 0.0 0.0 0.1 0.0 0.0 0.2 0.4 1.0 0.4 0.2 0.0
## 10 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.1 0.4 1.0 0.4 0.2
## 11 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.0 0.2 0.4 1.0 0.4
## 12 0.0 0.0 0.0 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.4 1.0</code></pre>
</div>
Considering sensitivity to unmeasured confounding: part 2
https://www.rdatagen.net/post/what-does-it-mean-if-findings-are-sensitive-to-unmeasured-confounding-ii/
Thu, 10 Jan 2019 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/what-does-it-mean-if-findings-are-sensitive-to-unmeasured-confounding-ii/<p>In <a href="https://www.rdatagen.net/post/what-does-it-mean-if-findings-are-sensitive-to-unmeasured-confounding/">part 1</a> of this 2-part series, I introduced the notion of <em>sensitivity to unmeasured confounding</em> in the context of an observational data analysis. I argued that an estimate of an association between an observed exposure <span class="math inline">\(D\)</span> and outcome <span class="math inline">\(Y\)</span> is sensitive to unmeasured confounding if we can conceive of a reasonable alternative data generating process (DGP) that includes some unmeasured confounder that will generate the same distribution as the observed data. I further argued that reasonableness can be quantified or parameterized by the two correlation coefficients <span class="math inline">\(\rho_{UD}\)</span> and <span class="math inline">\(\rho_{UY}\)</span>, which measure the strength of the relationship of the unmeasured confounder <span class="math inline">\(U\)</span> with each of the observed measures. Alternative DGPs that are characterized by high correlation coefficients can be viewed as less realistic, and the observed data could be considered less sensitive to unmeasured confounding. On the other hand, DGPs characterized by lower correlation coefficients would be considered more sensitive.</p>
<p>I need to pause here for a moment to point out that something similar has been described much more thoroughly by a group at NYU’s <a href="https://steinhardt.nyu.edu/priism/">PRIISM</a> (see <a href="https://www.tandfonline.com/doi/abs/10.1080/19345747.2015.1078862">Carnegie, Harada & Hill</a> and <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/sim.6973">Dorie et al</a>). In fact, this group of researchers has actually created an <code>R</code> package called <a href="https://cran.r-project.org/web/packages/treatSens/index.html">treatSens</a> to facilitate sensitivity analysis. I believe the discussion in these posts here is consistent with the PRIISM methodology, except <code>treatSens</code> is far more flexible (e.g. it can handle binary exposures) and provides more informative output than what I am describing. I am hoping that the examples and derivation of an equivalent DGP that I show here provide some additional insight into what sensitivity means.</p>
<p>I’ve been wrestling with these issues for a while, but the ideas for the derivation of an alternative DGP were actually motivated by this recent <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/sim.7904">note</a> by <em>Fei Wan</em> on an unrelated topic. (Wan shows how a valid instrumental variable may appear to violate a key assumption even though it does not.) The key element of Wan’s argument for my purposes is how the coefficient estimates of an observed model relate to the coefficients of an alternative (possibly true) data generation process/model.</p>
<p>OK - now we are ready to walk through the derivation of alternative DGPs for an observed data set.</p>
<div id="two-dgps-same-data" class="section level3">
<h3>Two DGPs, same data</h3>
<p>Recall from Part 1 that we have an observed data model</p>
<p><span class="math display">\[ Y = k_0 + k_1D + \epsilon_Y\]</span>
where <span class="math inline">\(\epsilon_Y \sim N\left(0, \sigma^2_Y\right)\)</span>. We are wondering if there is another DGP that could have generated the data that we have actually observed:</p>
<p><span class="math display">\[
\begin{aligned}
D &= \alpha_0 + \alpha_1 U + \epsilon_D \\
Y &= \beta_0 + \beta_1 D + \beta_2 U + \epsilon_{Y^*},
\end{aligned}
\]</span></p>
<p>where <span class="math inline">\(U\)</span> is some unmeasured confounder, and <span class="math inline">\(\epsilon_D \sim N\left(0, \sigma^2_D\right)\)</span> and <span class="math inline">\(\epsilon_{Y^*} \sim N\left(0, \sigma^2_{Y^*}\right)\)</span>. Can we go even further and find an alternative DGP where <span class="math inline">\(D\)</span> has no direct effect on <span class="math inline">\(Y\)</span> at all?</p>
<p><span class="math display">\[
\begin{aligned}
D &= \alpha_0 + \alpha_1 U + \epsilon_D \\
Y &= \beta_0 + \beta_2 U + \epsilon_{Y^*},
\end{aligned}
\]</span></p>
</div>
<div id="alpha_1-and-sigma_epsilon_d2-derived-from-rho_ud" class="section level3">
<h3><span class="math inline">\(\alpha_1\)</span> (and <span class="math inline">\(\sigma_{\epsilon_D}^2\)</span>) derived from <span class="math inline">\(\rho_{UD}\)</span></h3>
<p>In a simple linear regression model with a single predictor, the coefficient <span class="math inline">\(\alpha_1\)</span> can be specified directly in terms <span class="math inline">\(\rho_{UD}\)</span>, the correlation between <span class="math inline">\(U\)</span> and <span class="math inline">\(D\)</span>:</p>
<p><span class="math display">\[ \alpha_1 = \rho_{UD} \frac{\sigma_D}{\sigma_U}\]</span>
We can estimate <span class="math inline">\(\sigma_D\)</span> from the observed data set, and we can reasonably assume that <span class="math inline">\(\sigma_U = 1\)</span> (since we could always normalize the original measurement of <span class="math inline">\(U\)</span>). Finally, we can specify a range of <span class="math inline">\(\rho_{UD}\)</span> (I am keeping everything positive here for simplicity), such that <span class="math inline">\(0 < \rho_{UD} < 0.90\)</span> (where I assume a correlation of <span class="math inline">\(0.90\)</span> is at or beyond the realm of reasonableness). By plugging these three parameters into the formula, we can generate a range of <span class="math inline">\(\alpha_1\)</span>’s.</p>
<p>Furthermore, we can derive an estimate of the variance for <span class="math inline">\(\epsilon_D\)</span> ( <span class="math inline">\(\sigma_{\epsilon_D}^2\)</span>) at each level of <span class="math inline">\(\rho_{UD}\)</span>:</p>
<p><span class="math display">\[
\begin{aligned}
Var(D) &= Var(\alpha_0 + \alpha_1 U + \epsilon_D) \\
\\
\sigma_D^2 &= \alpha_1^2 \sigma_U^2 + \sigma_{\epsilon_D}^2 \\
\\
\sigma_{\epsilon_D}^2 &= \sigma_D^2 - \alpha_1^2 \; \text{(since } \sigma_U^2=1)
\end{aligned}
\]</span></p>
<p>So, for each value of <span class="math inline">\(\rho_{UD}\)</span> that we generated, there is a corresponding pair <span class="math inline">\((\alpha_1, \; \sigma_{\epsilon_D}^2)\)</span>.</p>
</div>
<div id="determine-beta_2" class="section level3">
<h3>Determine <span class="math inline">\(\beta_2\)</span></h3>
<p>In the <a href="#addendum">addendum</a> I go through a bit of an elaborate derivation of <span class="math inline">\(\beta_2\)</span>, the coefficient of <span class="math inline">\(U\)</span> in the alternative outcome model. Here is the bottom line:</p>
<p><span class="math display">\[
\beta_2 = \frac{\alpha_1}{1-\frac{\sigma_{\epsilon_D}^2}{\sigma_D^2}}\left( k_1 - \beta_1\right)
\]</span></p>
<p>In the equation, we have <span class="math inline">\(\sigma^2_D\)</span> and <span class="math inline">\(k_1\)</span>, which are both estimated from the observed data, and the pair of derived parameters <span class="math inline">\(\alpha_1\)</span> and <span class="math inline">\(\sigma_{\epsilon_D}^2\)</span> based on <span class="math inline">\(\rho_{UD}\)</span>. <span class="math inline">\(\beta_1\)</span>, the coefficient of <span class="math inline">\(D\)</span> in the outcome model, is a free parameter, set externally. That is, we can choose to evaluate all <span class="math inline">\(\beta_2\)</span>’s that are generated when <span class="math inline">\(\beta_1 = 0\)</span>. More generally, we can set <span class="math inline">\(\beta_1 = pk_1\)</span>, where <span class="math inline">\(0 \le p \le 1\)</span>. (We could go negative if we want, though I won’t do that here.) If <span class="math inline">\(p=1\)</span>, <span class="math inline">\(\beta_1 = k_1\)</span> and <span class="math inline">\(\beta_2 = 0\)</span>; we end up with the original model with no confounding.</p>
<p>So, once we specify <span class="math inline">\(\rho_{UD}\)</span> and <span class="math inline">\(p\)</span>, we get the corresponding triplet <span class="math inline">\((\alpha_1, \; \sigma_{\epsilon_D}^2, \; \beta_2)\)</span>.</p>
</div>
<div id="determine-rho_uy" class="section level3">
<h3>Determine <span class="math inline">\(\rho_{UY}\)</span></h3>
<p>In this last step, we can identify the correlation of <span class="math inline">\(U\)</span> and <span class="math inline">\(Y\)</span>, <span class="math inline">\(\rho_{UY}\)</span>, that is associated with all the observed, specified, and derived parameters up until this point. We start by writing the alternative outcome model, and then replace <span class="math inline">\(D\)</span> with the alternative exposure model, and do some algebraic manipulation to end up with a re-parameterized alternative outcome model that has a single predictor:</p>
<p><span class="math display">\[
\begin{aligned}
Y &= \beta_0 + \beta_1 D + \beta_2 U + \epsilon_Y^* \\
&= \beta_0 + \beta_1 \left( \alpha_0 + \alpha_1 U + \epsilon_D \right) + \beta_2 U + \epsilon_Y^* \\
&=\beta_0 + \beta_1 \alpha_0 + \beta_1 \alpha_1 U + \beta_1 \epsilon_D + \beta_2 U +
\epsilon_Y^* \\
&=\beta_0^* + \left( \beta_1 \alpha_1 + \beta_2 \right)U + \epsilon_Y^+ \\
&=\beta_0^* + \beta_1^*U + \epsilon_Y^+ , \\
\end{aligned}
\]</span></p>
<p>where <span class="math inline">\(\beta_0^* = \beta_0 + \beta_1 \alpha_0\)</span>, <span class="math inline">\(\beta_1^* = \left( \beta_1 \alpha_1 + \beta_2 \right)\)</span>, and <span class="math inline">\(\epsilon_Y^+ = \beta_1 \epsilon_D + \epsilon_{Y^*}\)</span>.</p>
<p>As before, the coefficient in a simple linear regression model with a single predictor is related to the correlation of the two variables as follows:</p>
<p><span class="math display">\[
\beta_1^* = \rho_{UY} \frac{\sigma_Y}{\sigma_U}
\]</span></p>
<p>Since <span class="math inline">\(\beta_1^* = \left( \beta_1 \alpha_1 + \beta_2 \right)\)</span>,</p>
<p><span class="math display">\[
\begin{aligned}
\beta_1 \alpha_1 + \beta_2 &= \rho_{UY} \frac{\sigma_Y}{\sigma_U} \\
\\
\rho_{UY} &= \frac{\sigma_U}{\sigma_Y} \left( \beta_1 \alpha_1 + \beta_2 \right) \\
\\
&= \frac{\left( \beta_1 \alpha_1 + \beta_2 \right)}{\sigma_Y}
\end{aligned}
\]</span></p>
</div>
<div id="determine-sigma2_y" class="section level3">
<h3>Determine <span class="math inline">\(\sigma^2_{Y*}\)</span></h3>
<p>In order to simulate data from the alternative DGPs, we need to derive the variation for the noise of the alternative model. That is, we need an estimate of <span class="math inline">\(\sigma^2_{Y*}\)</span>.</p>
<p><span class="math display">\[
\begin{aligned}
Var(Y) &= Var(\beta_0 + \beta_1 D + \beta_2 U + \epsilon_{Y^*}) \\
\\
&= \beta_1^2 Var(D) + \beta_2^2 Var(U) + 2\beta_1\beta_2Cov(D, U) + Var(\epsilon_{Y^*}) \\
\\
&= \beta_1^2 \sigma^2_D + \beta_2^2 + 2\beta_1\beta_2\rho_{UD}\sigma_D + \sigma^2_{Y*} \\
\end{aligned}
\]</span></p>
<p>So,</p>
<p><span class="math display">\[
\sigma^2_{Y*} = Var(Y) - (\beta_1^2 \sigma^2_D + \beta_2^2 + 2\beta_1\beta_2\rho_{UD}\sigma_D),
\]</span></p>
<p>where <span class="math inline">\(Var(Y)\)</span> is the variation of <span class="math inline">\(Y\)</span> from the observed data. Now we are ready to implement this in R.</p>
</div>
<div id="implementing-in-r" class="section level3">
<h3>Implementing in <code>R</code></h3>
<p>If we have an observed data set with observed <span class="math inline">\(D\)</span> and <span class="math inline">\(Y\)</span>, and some target <span class="math inline">\(\beta_1\)</span> determined by <span class="math inline">\(p\)</span>, we can calculate/generate all the quantities that we just derived.</p>
<p>Before getting to the function, I want to make a brief point about what we do if we have other <em>measured</em> confounders. We can essentially eliminate measured confounders by regressing the exposure <span class="math inline">\(D\)</span> on the confounders and conducting the entire sensitivity analysis with the residual exposure measurements derived from this initial regression model. I won’t be doing this here, but if anyone wants to see an example of this, let me know, and I can do a short post.</p>
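<p>As a rough sketch of that residualization step (the confounder names <code>X1</code> and <code>X2</code> are hypothetical, and this snippet is not run here):</p>
<pre class="r"><code># replace the exposure with the residuals from a regression of D on
# the measured confounders, then proceed with the sensitivity analysis
ddRes <- copy(dd)                  # dd holds D, Y, and confounders X1, X2
ddRes[, D := residuals(lm(D ~ X1 + X2, data = dd))]
# dpRes <- altDGP(ddRes, p = 0)</code></pre>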
<p>OK - here is the function, which essentially follows the path of the derivation above:</p>
<pre class="r"><code>altDGP <- function(dd, p) {
# Create values of rhoUD
dp <- data.table(p = p, rhoUD = seq(0.0, 0.9, length = 1000))
# Parameters estimated from data
dp[, `:=`(sdD = sd(dd$D), s2D = var(dd$D), sdY = sd(dd$Y))]
dp[, k1:= coef(lm(Y ~ D, data = dd))[2]]
# Generate b1 based on p
dp[, b1 := p * k1]
# Determine a1
dp[, a1 := rhoUD * sdD ]
# Determine s2ed
dp[, s2ed := s2D - (a1^2)]
# Determine b2
dp[, g:= s2ed/s2D]
dp <- dp[g != 1]
dp[, b2 := (a1 / (1 - g) ) * ( k1 - b1 )]
# Determine rhoUY
dp[, rhoUY := ( (b1 * a1) + b2 ) / sdY ]
# Eliminate impossible correlations
dp <- dp[rhoUY > 0 & rhoUY <= .9]
# Determine s2eyx
dp[, s2eyx := sdY^2 - (b1^2 * s2D + b2^2 + 2 * b1 * b2 * rhoUD * sdD)]
dp <- dp[s2eyx > 0]
# Determine standard deviations
dp[, sdeyx := sqrt(s2eyx)]
dp[, sdedx := sqrt(s2ed)]
# Finished
dp[]
}</code></pre>
</div>
<div id="assessing-sensitivity" class="section level3">
<h3>Assessing sensitivity</h3>
<p>If we generate the same data set we started out with in the last post, we can use the function to assess the sensitivity of this association.</p>
<pre class="r"><code>defO <- defData(varname = "D", formula = 0, variance = 1)
defO <- defData(defO, varname = "Y", formula = "1.5 * D", variance = 25)
set.seed(20181201)
dtO <- genData(1200, defO)</code></pre>
<p>In this first example, I am looking for the DGP with <span class="math inline">\(\beta_1 = 0\)</span>, which is implemented as <span class="math inline">\(p = 0\)</span> in the call to function <code>altDGP</code>. Each row of output represents an alternative set of parameters that will result in a DGP with <span class="math inline">\(\beta_1 = 0\)</span>.</p>
<pre class="r"><code>dp <- altDGP(dtO, p = 0)
dp[, .(rhoUD, rhoUY, k1, b1, a1, s2ed, b2, s2eyx)]</code></pre>
<pre><code>## rhoUD rhoUY k1 b1 a1 s2ed b2 s2eyx
## 1: 0.295 0.898 1.41 0 0.294 0.904 4.74 5.36
## 2: 0.296 0.896 1.41 0 0.295 0.903 4.72 5.50
## 3: 0.297 0.893 1.41 0 0.296 0.903 4.71 5.63
## 4: 0.298 0.890 1.41 0 0.297 0.902 4.69 5.76
## 5: 0.299 0.888 1.41 0 0.298 0.902 4.68 5.90
## ---
## 668: 0.896 0.296 1.41 0 0.892 0.195 1.56 25.35
## 669: 0.897 0.296 1.41 0 0.893 0.193 1.56 25.35
## 670: 0.898 0.296 1.41 0 0.894 0.191 1.56 25.36
## 671: 0.899 0.295 1.41 0 0.895 0.190 1.56 25.36
## 672: 0.900 0.295 1.41 0 0.896 0.188 1.55 25.37</code></pre>
<p>Now, I am creating a data set that will be based on four levels of <span class="math inline">\(\beta_1\)</span>. I do this by creating a vector <span class="math inline">\(p = \; <0.0, \; 0.2, \; 0.5, \; 0.8>\)</span>. The idea is to create a plot that shows the curve for each value of <span class="math inline">\(p\)</span>. The most extreme curve (in this case, the curve all the way to the right, since we are dealing with positive associations only) represents the scenario where <span class="math inline">\(p = 0\)</span> (i.e. <span class="math inline">\(\beta_1 = 0\)</span>). The curves moving to the left reflect increasing sensitivity as <span class="math inline">\(p\)</span> increases.</p>
<pre class="r"><code>dsenO <- rbindlist(lapply(c(0.0, 0.2, 0.5, 0.8),
function(x) altDGP(dtO, x)))</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-01-10-what-does-it-mean-if-findings-are-sensitive-to-unmeasured-confounding-ii_files/figure-html/unnamed-chunk-6-1.png" width="720" /></p>
<p>I would say that in this first case the observed association is moderately sensitive to unmeasured confounding, as correlations as low as 0.5 would be enough to erase the association.</p>
<p>In the next case, if the association remains unchanged but the variation of <span class="math inline">\(Y\)</span> is considerably reduced, the observed association is much less sensitive. However, it is still quite possible that the observed association is at least partially overstated, as relatively low levels of correlation could reduce the estimated association.</p>
<pre class="r"><code>defA1 <- updateDef(defO, changevar = "Y", newvariance = 4)</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-01-10-what-does-it-mean-if-findings-are-sensitive-to-unmeasured-confounding-ii_files/figure-html/unnamed-chunk-8-1.png" width="720" /></p>
<p>In this last scenario, variance is the same as the initial scenario, but the association is considerably weaker. Here, we see that the estimate of the association is extremely sensitive to unmeasured confounding, as low levels of correlation are required to entirely erase the association.</p>
<pre class="r"><code>defA2 <- updateDef(defO, changevar = "Y", newformula = "0.25 * D")</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-01-10-what-does-it-mean-if-findings-are-sensitive-to-unmeasured-confounding-ii_files/figure-html/unnamed-chunk-10-1.png" width="720" /></p>
</div>
<div id="treatsens-package" class="section level3">
<h3><code>treatSens</code> package</h3>
<p>I want to show output generated by the <code>treatSens</code> package I referenced earlier. <code>treatSens</code> requires a formula that includes an outcome vector <span class="math inline">\(Y\)</span>, an exposure vector <span class="math inline">\(Z\)</span>, and at least one vector of measured confounders <span class="math inline">\(X\)</span>. In my examples, I have included no measured confounders, so I generate a vector of independent noise that is not related to the outcome.</p>
<pre class="r"><code>library(treatSens)
X <- rnorm(1200)
Y <- dtO$Y
Z <- dtO$D
testsens <- treatSens(Y ~ Z + X, nsim = 5)
sensPlot(testsens)</code></pre>
<p>Once <code>treatSens</code> has been executed, it is possible to generate a sensitivity plot, which looks substantively similar to the ones I have created. The package uses sensitivity parameters <span class="math inline">\(\zeta^Z\)</span> and <span class="math inline">\(\zeta^Y\)</span>, which represent the coefficients of <span class="math inline">\(U\)</span>, the unmeasured confounder. Since <code>treatSens</code> normalizes the data (in the default setting), these coefficients are actually equivalent to the correlations <span class="math inline">\(\rho_{UD}\)</span> and <span class="math inline">\(\rho_{UY}\)</span> that are the basis of my sensitivity analysis. An important difference in the output is that <code>treatSens</code> provides uncertainty bands, and extends into regions of negative correlation. (And of course, a more significant difference is that <code>treatSens</code> is flexible enough to handle binary exposures, whereas I have not yet extended my analytic approach in that direction, and I suspect it is not possible for me to do so due to the non-collapsibility of logistic regression estimands - I hope to revisit this in the future.)</p>
<div id="observed-data-scenario-1-smally-sim-n1.50z-25" class="section level4">
<h4>Observed data scenario 1: <span class="math inline">\(\small{Y \sim N(1.50Z, \; 25)}\)</span></h4>
<p><img src="https://www.rdatagen.net/img/post-treatSens/Var25.png" width="550" /></p>
</div>
<div id="observed-data-scenario-2-smally-sim-n1.50z-4" class="section level4">
<h4>Observed data scenario 2: <span class="math inline">\(\small{Y \sim N(1.50Z, \; 4)}\)</span></h4>
<p><img src="https://www.rdatagen.net/img/post-treatSens/Var04.png" width="550" /></p>
</div>
<div id="observed-data-scenario-3-smally-sim-n0.25z-25" class="section level4">
<h4>Observed data scenario 3: <span class="math inline">\(\small{Y \sim N(0.25Z, \; 25)}\)</span></h4>
<p><img src="https://www.rdatagen.net/img/post-treatSens/V25025.png" width="550" /></p>
<p><a name="addendum"></a></p>
</div>
</div>
<div id="addendum-derivation-of-beta_2" class="section level2">
<h2>Addendum: Derivation of <span class="math inline">\(\beta_2\)</span></h2>
<p>In case you want more detail on how we derive <span class="math inline">\(\beta_2\)</span> from the observed data model and assumed correlation parameters, here it is. We start by specifying the simple observed outcome model:</p>
<p><span class="math display">\[ Y = k_0 + k_1D + \epsilon_Y\]</span></p>
<p>We can estimate the parameters <span class="math inline">\(k_0\)</span> and <span class="math inline">\(k_1\)</span> using this standard matrix solution:</p>
<p><span class="math display">\[ <k_0, \; k_1> \; = (W^TW)^{-1}W^TY,\]</span></p>
<p>where <span class="math inline">\(W\)</span> is the <span class="math inline">\(n \times 2\)</span> design matrix:</p>
<p><span class="math display">\[ W = [\mathbf{1}, D]_{n \times 2}.\]</span></p>
<p>We can replace <span class="math inline">\(Y\)</span> with the alternative outcome model:</p>
<p><span class="math display">\[
\begin{aligned}
<k_0, \; k_1> \; &= (W^TW)^{-1}W^T(\beta_0 + \beta_1 D + \beta_2 U + \epsilon_Y^*) \\
&= \;<\beta_0, 0> + <0, \beta_1> +\; (W^TW)^{-1}W^T(\beta_2U) + \mathbf{0} \\
&= \;<\beta_0, \beta_1> +\; (W^TW)^{-1}W^T(\beta_2U)
\end{aligned}
\]</span></p>
<p>Note that</p>
<p><span class="math display">\[
\begin{aligned}
(W^TW)^{-1}W^T(\beta_0) &= \; <\beta_0,\; 0> \; \; and\\
\\
(W^TW)^{-1}W^T(\beta_1D) &= \; <0,\; \beta_1>.
\end{aligned}
\]</span></p>
<p>Now, we need to figure out what <span class="math inline">\((W^TW)^{-1}W^T(\beta_2U)\)</span> is. First, we rearrange the alternate exposure model:
<span class="math display">\[
\begin{aligned}
D &= \alpha_0 + \alpha_1 U + \epsilon_D \\
\alpha_1 U &= D - \alpha_0 - \epsilon_D \\
U &= \frac{1}{\alpha_1} \left( D - \alpha_0 - \epsilon_D \right) \\
\beta_2 U &= \frac{\beta_2}{\alpha_1} \left( D - \alpha_0 - \epsilon_D \right)
\end{aligned}
\]</span></p>
<p>We can replace <span class="math inline">\(\beta_2 U\)</span>:</p>
<p><span class="math display">\[
\begin{aligned}
(W^TW)^{-1}W^T(\beta_2U) &= (W^TW)^{-1}W^T \left[ \frac{\beta_2}{\alpha_1} \left( D - \alpha_0 - \epsilon_D \right) \right] \\
&= <-\frac{\beta_2}{\alpha_1}\alpha_0, 0> + <0,\frac{\beta_2}{\alpha_1}>-\;\frac{\beta_2}{\alpha_1}(W^TW)^{-1}W^T \epsilon_D \\
&= <-\frac{\beta_2}{\alpha_1}\alpha_0, \frac{\beta_2}{\alpha_1}>-\;\frac{\beta_2}{\alpha_1}(W^TW)^{-1}W^T \epsilon_D \\
\end{aligned}
\]</span></p>
<p>And now we get back to <span class="math inline">\(<k_0,\; k_1>\)</span> :</p>
<p><span class="math display">\[
\begin{aligned}
<k_0,\; k_1> \; &= \;<\beta_0,\; \beta_1> +\; (W^TW)^{-1}W^T(\beta_2U) \\
&= \;<\beta_0-\frac{\beta_2}{\alpha_1}\alpha_0, \; \beta_1 + \frac{\beta_2}{\alpha_1}>-\;\frac{\beta_2}{\alpha_1}(W^TW)^{-1}W^T \epsilon_D \\
&= \;<\beta_0-\frac{\beta_2}{\alpha_1}\alpha_0, \; \beta_1 + \frac{\beta_2}{\alpha_1}>-\;\frac{\beta_2}{\alpha_1}<\gamma_0, \; \gamma_1>
\end{aligned}
\]</span></p>
<p>where <span class="math inline">\(\gamma_0\)</span> and <span class="math inline">\(\gamma_1\)</span> come from regressing <span class="math inline">\(\epsilon_D\)</span> on <span class="math inline">\(D\)</span>:</p>
<p><span class="math display">\[ \epsilon_D = \gamma_0 + \gamma_1 D\]</span>
so,</p>
<p><span class="math display">\[
\begin{aligned}
<k_0,\; k_1> \; &= \;<\beta_0-\frac{\beta_2}{\alpha_1}\alpha_0 - \frac{\beta_2}{\alpha_1}\gamma_0, \; \beta_1 + \frac{\beta_2}{\alpha_1} - \frac{\beta_2}{\alpha_1}\gamma_1 > \\
&= \;<\beta_0-\frac{\beta_2}{\alpha_1}\left(\alpha_0 + \gamma_0\right), \; \beta_1 + \frac{\beta_2}{\alpha_1}\left(1 - \gamma_1 \right) >
\end{aligned}
\]</span></p>
<p>Since we can center all the observed data, we can easily assume that <span class="math inline">\(k_0 = 0\)</span>. All we need to worry about is <span class="math inline">\(k_1\)</span>:</p>
<p><span class="math display">\[
\begin{aligned}
k_1 &= \beta_1 + \frac{\beta_2}{\alpha_1}\left(1 - \gamma_1 \right) \\
\frac{\beta_2}{\alpha_1}\left(1 - \gamma_1 \right) &= k_1 - \beta_1 \\
\beta_2 &= \frac{\alpha_1}{1-\gamma_1}\left( k_1 - \beta_1\right)
\end{aligned}
\]</span></p>
<p>We have generated <span class="math inline">\(\alpha_1\)</span> based on <span class="math inline">\(\rho_{UD}\)</span>, <span class="math inline">\(k_1\)</span> is estimated from the data, and <span class="math inline">\(\beta_1\)</span> is fixed based on some <span class="math inline">\(p, \; 0 \le p \le 1\)</span> such that <span class="math inline">\(\beta_1 = pk_1\)</span>. All that remains is <span class="math inline">\(\gamma_1\)</span>:</p>
<p><span class="math display">\[
\gamma_1 = \rho_{\epsilon_D D} \frac{\sigma_{\epsilon_D}}{\sigma_D}
\]</span></p>
<p>Since <span class="math inline">\(D = \alpha_0 + \alpha_1 U + \epsilon_D\)</span> (and <span class="math inline">\(\epsilon_D \perp \! \! \! \perp U\)</span>)</p>
<p><span class="math display">\[
\begin{aligned}
\rho_{\epsilon_D D} &= \frac{Cov(\epsilon_D, D)}{\sigma_{\epsilon_D} \sigma_D} \\
\\
&=\frac{Cov(\epsilon_D, \;\alpha_0 + \alpha_1 U + \epsilon_D )}{\sigma_{\epsilon_D} \sigma_D} \\
\\
&= \frac{\sigma_{\epsilon_D}^2}{\sigma_{\epsilon_D} \sigma_D} \\
\\
&= \frac{\sigma_{\epsilon_D}}{\sigma_D}
\end{aligned}
\]</span></p>
<p>It follows that</p>
<p><span class="math display">\[
\begin{aligned}
\gamma_1 &= \rho_{\epsilon_D D} \frac{\sigma_{\epsilon_D}}{\sigma_D} \\
\\
&=\frac{\sigma_{\epsilon_D}}{\sigma_D} \times \frac{\sigma_{\epsilon_D}}{\sigma_D} \\
\\
&=\frac{\sigma_{\epsilon_D}^2}{\sigma_D^2}
\end{aligned}
\]</span></p>
<p>So, now, we have all the elements to generate <span class="math inline">\(\beta_2\)</span> for a range of <span class="math inline">\(\alpha_1\)</span>’s and <span class="math inline">\(\sigma_{\epsilon_D}^2\)</span>’s:</p>
<p><span class="math display">\[
\beta_2 = \frac{\alpha_1}{1-\frac{\sigma_{\epsilon_D}^2}{\sigma_D^2}}\left( k_1 - \beta_1\right)
\]</span></p>
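<p>As a quick numerical check (my addition, not part of the original derivation), we can simulate the alternative DGP directly in base <code>R</code> with some made-up parameter values and confirm that the observed slope <span class="math inline">\(k_1\)</span> matches <span class="math inline">\(\beta_1 + \frac{\beta_2}{\alpha_1}\left(1 - \gamma_1\right)\)</span>:</p>
<pre class="r"><code>## sketch: simulate D = a1*U + eD and Y = b1*D + b2*U + eY, then compare
## the fitted observed slope k1 to the derived expression
set.seed(1)
n <- 1e6
a1 <- 0.5; b1 <- 0.75; b2 <- 2
U  <- rnorm(n)
eD <- rnorm(n, sd = sqrt(0.75))
D  <- a1 * U + eD
Y  <- b1 * D + b2 * U + rnorm(n, sd = 4)

k1 <- coef(lm(Y ~ D))["D"]           # observed association
g1 <- var(eD) / var(D)               # gamma_1 = sigma^2_eD / sigma^2_D
c(k1, b1 + (b2 / a1) * (1 - g1))     # the two quantities should nearly agree</code></pre>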
</div>
Considering sensitivity to unmeasured confounding: part 1
https://www.rdatagen.net/post/what-does-it-mean-if-findings-are-sensitive-to-unmeasured-confounding/
Wed, 02 Jan 2019 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/what-does-it-mean-if-findings-are-sensitive-to-unmeasured-confounding/<p>Principled causal inference methods can be used to compare the effects of different exposures or treatments we have observed in non-experimental settings. These methods, which include matching (with or without propensity scores), inverse probability weighting, and various g-methods, help us create comparable groups to simulate a randomized experiment. All of these approaches rely on a key assumption of <em>no unmeasured confounding</em>. The problem is, short of subject matter knowledge, there is no way to test this assumption empirically.</p>
<p>The general approach to this problem has been to posit a level of unmeasured confounding that would be necessary to alter the conclusions of a study. The classic example (which also is probably the first) comes from the debate on the effects of smoking on lung cancer. There were some folks who argued that there was a genetic factor that was leading people to smoke and was simultaneously the cause of cancer. The great statistician Jerome Cornfield (who, I just saw on <a href="https://en.wikipedia.org/wiki/Jerome_Cornfield">Wikipedia</a>, happens to have shared my birthday), showed that an unobserved confounder (like a particular genetic factor) would need to lead to a 9-fold increase in the odds of smoking to explain away the association between smoking and cancer. Since such a strong factor was not likely to exist, he argued, the observed association was most likely real. (For a detailed discussion on various approaches to these kinds of sensitivity analyses, look at this paper by <a href="https://link.springer.com/content/pdf/10.1007%2Fs11121-012-0339-5.pdf"><em>Liu, Kuramoto, and Stuart</em></a>.)</p>
<p>My goal here is to think a bit more about what it means for a measured association to be sensitive to unmeasured confounding. When I originally started thinking about this, I thought that an association would be sensitive to unmeasured confounding if the underlying data generation process (DGP) <em>actually includes</em> an unmeasured confounder. Sure, if this is the case - that there actually is unmeasured confounding - then it is more likely that a finding will be sensitive to unmeasured confounding. But, this isn’t really that interesting, because we can’t observe the underlying DGP. And it is not necessarily the case that data sensitive to unmeasured confounding were in fact generated through some process with an unmeasured confounder.</p>
<div id="is-there-room-for-an-alternative-data-generation-process" class="section level3">
<h3>Is there room for an alternative data generation process?</h3>
<p>When considering sensitivity, it may be more useful to talk about the plausibility of alternative models. In this context, sensitivity is inherently related to (1) the strength of the association of the observed exposure and outcome, and (2) the uncertainty (i.e., variability) around that association. Put succinctly, a relatively weak association with a lot of variability will be much more sensitive to unmeasured confounding than a strong association with little uncertainty. If you think in visual terms, when thinking about sensitivity, you might ask “do the data provide room for an alternative model?”</p>
</div>
<div id="an-alternative-model" class="section level3">
<h3>An alternative model</h3>
<p>Let’s say we observe some exposure <span class="math inline">\(D\)</span> and we are interested in its causal relationship with an outcome <span class="math inline">\(Y\)</span>, which we also observe. I am assuming <span class="math inline">\(D\)</span> and <span class="math inline">\(Y\)</span> are both continuous and normally distributed, which makes all of this work, but also limits how far I can take this. (To be more general, we will ultimately need more powerful tools, such as the <code>R</code> package <code>treatSens</code>, but more on that later.) Also, let’s assume for simplicity’s sake that there are no <em>measured</em> confounders - though that is not a requirement here.</p>
<p>With this observed data, we can go ahead and fit a simple linear regression model:</p>
<p><span class="math display">\[ Y = k_0 + k_1D,\]</span>
where <span class="math inline">\(k_1\)</span> is the parameter of interest, the measured association of exposure <span class="math inline">\(D\)</span> with the outcome <span class="math inline">\(Y\)</span>. (Again for simplicity, I am assuming <span class="math inline">\(k_1 > 0\)</span>.)</p>
<p>The question is, is there a possible underlying data generating process where <span class="math inline">\(D\)</span> plays a minor role or none at all? For example, is there a possible DGP that looks like this:</p>
<p><span class="math display">\[
\begin{aligned}
D &= \alpha_0 + \alpha_1 U + \epsilon_D \\
Y &= \beta_0 + \beta_1 D + \beta_2 U + \epsilon_Y,
\end{aligned}
\]</span></p>
<p>where <span class="math inline">\(\beta_1 << k_1\)</span>, or perhaps <span class="math inline">\(\beta_1 = 0\)</span>? That is, is there a process that generates the same observed distribution even though <span class="math inline">\(D\)</span> is not a cause of <span class="math inline">\(Y\)</span>? If so, how can we characterize that process, and is it plausible?</p>
</div>
<div id="simulation" class="section level3">
<h3>Simulation</h3>
<p>The observed DGP can be defined using <code>simstudy</code>. We can assume that the continuous exposure <span class="math inline">\(D\)</span> can always be normalized (by centering and dividing by the standard deviation). In this example, the coefficients <span class="math inline">\(k_0 = 0\)</span> and <span class="math inline">\(k_1 = 1.5\)</span>, so that a unit change in the normalized exposure leads, on average, to a positive change in <span class="math inline">\(Y\)</span> of 1.5 units:</p>
<pre class="r"><code>defO <- defData(varname = "D", formula = 0, variance = 1)
defO <- defData(defO, varname = "ey", formula = 0, variance = 25)
defO <- defData(defO, varname = "Y", formula = "1.5 * D + ey",
dist = "nonrandom")</code></pre>
<p>We can generate the data and take a look at it:</p>
<pre class="r"><code>set.seed(20181201)
dtO <- genData(1200, defO)</code></pre>
<p> </p>
<p><img src="https://www.rdatagen.net/post/2019-01-02-what-does-it-mean-if-findings-are-sensitive-to-unmeasured-confounding_files/figure-html/unnamed-chunk-4-1.png" width="336" /></p>
<p>Can we specify another DGP that removes <span class="math inline">\(D\)</span> from the process that defines <span class="math inline">\(Y\)</span>? The answer in this case is “yes.” Here is one such example, where both <span class="math inline">\(D\)</span> and <span class="math inline">\(Y\)</span> are functions of some unmeasured confounder <span class="math inline">\(U\)</span>, and <span class="math inline">\(Y\)</span> is a function of <span class="math inline">\(U\)</span> alone (<span class="math inline">\(D\)</span> no longer appears in the outcome equation). The variance and coefficient specifications for this DGP may seem a bit arbitrary (and maybe even lucky), but how I arrived at these quantities will be the focus of the second part of this post, coming soon. (My real goal here is to pique your interest.)</p>
<pre class="r"><code>defA1 <- defData(varname = "U", formula = 0, variance = 1)
defA1 <- defData(defA1, varname = "ed", formula = 0, variance = 0.727)
defA1 <- defData(defA1, varname = "D", formula = "0.513 * U + ed",
dist = "nonrandom")
defA1 <- defData(defA1, varname = "ey", formula = 0, variance = 20.412)
defA1 <- defData(defA1, varname = "Y", formula = "2.715 * U + ey",
dist = "nonrandom")</code></pre>
<p>After generating this second data set, we can see that they look pretty similar to each other:</p>
<pre class="r"><code>set.seed(20181201)
dtO <- genData(1200, defO)
dtA1 <- genData(1200, defA1)</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-01-02-what-does-it-mean-if-findings-are-sensitive-to-unmeasured-confounding_files/figure-html/unnamed-chunk-7-1.png" width="672" /></p>
<p>If the data are indeed similar, the covariance matrices generated by each of the data sets should also be similar, and they do appear to be:</p>
<pre class="r"><code>dtO[, round(var(cbind(Y, D)), 1)]</code></pre>
<pre><code>## Y D
## Y 27.8 1.4
## D 1.4 1.0</code></pre>
<pre class="r"><code>dtA1[, round(var(cbind(Y, D)), 1)]</code></pre>
<pre><code>## Y D
## Y 26.8 1.3
## D 1.3 1.0</code></pre>
</div>
<div id="non-unique-data-generating-process" class="section level3">
<h3>Non-unique data generating process</h3>
<p>The DGP defined by <code>defA1</code> is not a unique alternative. There are actually an infinite number of alternatives - here are two more, what I am calling “Alternative 2” and “Alternative 3” to go along with the first.</p>
<pre class="r"><code>defA2 <- defData(varname = "U", formula = 0, variance = 1)
defA2 <- defData(defA2, varname = "ed", formula = 0, variance = 0.794)
defA2 <- defData(defA2, varname = "D", formula = "0.444 * U + ed",
dist = "nonrandom")
defA2 <- defData(defA2, varname = "ey", formula = 0, variance = 17.939)
defA2 <- defData(defA2, varname = "Y", formula = "3.138 * U + ey",
dist = "nonrandom")</code></pre>
<pre class="r"><code>defA3 <- defData(varname = "U", formula = 0, variance = 1)
defA3 <- defData(defA3, varname = "ed", formula = 0, variance = 0.435)
defA3 <- defData(defA3, varname = "D", formula = "0.745 * U + ed",
dist = "nonrandom")
defA3 <- defData(defA3, varname = "ey", formula = 0, variance = 24.292)
defA3 <- defData(defA3, varname = "Y", formula = "1.869 * U + ey",
dist = "nonrandom")</code></pre>
<p>Rather than looking at plots of the four data sets generated by these equivalent processes, I fit four linear regression models based on the observed <span class="math inline">\(D\)</span> and <span class="math inline">\(Y\)</span>. The parameter estimates and residual standard error estimates are quite close for all four:</p>
<table style="text-align:center">
<caption>
<strong>Comparison of different data generating processes</strong>
</caption>
<tr>
<td colspan="5" style="border-bottom: 1px solid black">
</td>
</tr>
<tr>
<td style="text-align:left">
</td>
<td>
Observed
</td>
<td>
Alt 1
</td>
<td>
Alt 2
</td>
<td>
Alt 3
</td>
</tr>
<tr>
<td style="text-align:left">
D
</td>
<td>
1.41<sup></sup>
</td>
<td>
1.41<sup></sup>
</td>
<td>
1.41<sup></sup>
</td>
<td>
1.37<sup></sup>
</td>
</tr>
<tr>
<td style="text-align:left">
</td>
<td>
(0.15)
</td>
<td>
(0.15)
</td>
<td>
(0.15)
</td>
<td>
(0.15)
</td>
</tr>
<tr>
<td style="text-align:left">
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td style="text-align:left">
Constant
</td>
<td>
0.38<sup></sup>
</td>
<td>
-0.33<sup></sup>
</td>
<td>
-0.32<sup></sup>
</td>
<td>
-0.33<sup></sup>
</td>
</tr>
<tr>
<td style="text-align:left">
</td>
<td>
(0.15)
</td>
<td>
(0.14)
</td>
<td>
(0.14)
</td>
<td>
(0.15)
</td>
</tr>
<tr>
<td style="text-align:left">
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td colspan="5" style="border-bottom: 1px solid black">
</td>
</tr>
<tr>
<td style="text-align:left">
Residual Std. Error (df = 1198)
</td>
<td>
5.08
</td>
<td>
4.99
</td>
<td>
4.98
</td>
<td>
5.02
</td>
</tr>
<tr>
<td colspan="5" style="border-bottom: 1px solid black">
</td>
</tr>
</table>
<p> </p>
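<p>The fits summarized in the table can be reproduced along these lines (a sketch - the table itself was presumably formatted with a package like <code>stargazer</code>, and <code>dtA2</code> and <code>dtA3</code> first need to be generated from their definitions):</p>
<pre class="r"><code>dtA2 <- genData(1200, defA2)
dtA3 <- genData(1200, defA3)

## fit the same observed-data model to each of the four data sets
fits <- lapply(list(Observed = dtO, Alt1 = dtA1, Alt2 = dtA2, Alt3 = dtA3),
               function(d) lm(Y ~ D, data = d))

sapply(fits, coef)                          # intercepts and slopes
sapply(fits, function(f) summary(f)$sigma)  # residual standard errors</code></pre>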
</div>
<div id="characterizing-each-data-generation-process" class="section level3">
<h3>Characterizing each data generation process</h3>
<p>While each of the alternate DGPs lead to the same (or very similar) observed data distribution, the underlying relationships between <span class="math inline">\(U\)</span>, <span class="math inline">\(D\)</span>, and <span class="math inline">\(Y\)</span> are quite different. In particular, if we inspect the correlations, we can see that they are quite different for each of the three alternatives. In fact, as you will see next time, all we need to do is specify a range of correlations for <span class="math inline">\(U\)</span> and <span class="math inline">\(D\)</span> to derive a curve that defines all the alternatives for a particular value of <span class="math inline">\(\beta_1\)</span>.</p>
<pre class="r"><code>dtA1[, .(cor(U, D), cor(U, Y))]</code></pre>
<pre><code>## V1 V2
## 1: 0.511 0.496</code></pre>
<pre class="r"><code>dtA2[, .(cor(U, D), cor(U, Y))]</code></pre>
<pre><code>## V1 V2
## 1: 0.441 0.579</code></pre>
<pre class="r"><code>dtA3[, .(cor(U, D), cor(U, Y))]</code></pre>
<pre><code>## V1 V2
## 1: 0.748 0.331</code></pre>
</div>
<div id="less-sensitivity" class="section level3">
<h3>Less sensitivity</h3>
<p>So, what does it mean for an observed data set to be sensitive to unmeasured confounding? I would suggest that if an equivalent derived alternative DGP is based on “lower” correlations of <span class="math inline">\(U\)</span> and <span class="math inline">\(D\)</span> and/or <span class="math inline">\(U\)</span> and <span class="math inline">\(Y\)</span>, then the observed data are more sensitive. What “low” correlation is will probably depend on the subject matter. I would say that the data we have been looking at above is moderately sensitive to unmeasured confounding.</p>
<p>Here is an example of an observed data set that might be considerably less sensitive:</p>
<pre class="r"><code>defS <- updateDef(defO, changevar = "ey", newvariance = 4)
defAS <- defData(varname = "U", formula = 0, variance = 1)
defAS <- defData(defAS, varname = "ed", formula = 0, variance = 0.414)
defAS <- defData(defAS, varname = "D", formula = "0.759 * U + ed",
dist = "nonrandom")
defAS <- defData(defAS, varname = "ey", formula = 0, variance = 2.613)
defAS <- defData(defAS, varname = "Y", formula = "1.907 * U + ey",
dist = "nonrandom")
set.seed(20181201)
dtS <- genData(1200, defS)
dtAS <- genData(1200, defAS)</code></pre>
<p><img src="https://www.rdatagen.net/post/2019-01-02-what-does-it-mean-if-findings-are-sensitive-to-unmeasured-confounding_files/figure-html/unnamed-chunk-15-1.png" width="672" /></p>
<p>The plots look similar, as do the covariance matrices describing the observed data:</p>
<pre class="r"><code>dtS[, round(var(cbind(Y, D)), 1)]</code></pre>
<pre><code>## Y D
## Y 6.3 1.4
## D 1.4 1.0</code></pre>
<pre class="r"><code>dtAS[, round(var(cbind(Y, D)), 1)]</code></pre>
<pre><code>## Y D
## Y 6.0 1.4
## D 1.4 1.0</code></pre>
<p>In this case, both of the correlations in the alternative DGP are quite high, suggesting that a higher bar is needed to remove the association between <span class="math inline">\(D\)</span> and <span class="math inline">\(Y\)</span> entirely:</p>
<pre class="r"><code>dtAS[, .(cor(U, D), cor(U, Y))]</code></pre>
<pre><code>## V1 V2
## 1: 0.762 0.754</code></pre>
<p>In the second part of this post I will show how I derived the alternative DGPs, and then use that derivation to create an <code>R</code> function to generate sensitivity curves that allow us to visualize sensitivity in terms of the correlation parameters <span class="math inline">\(\rho_{UD}\)</span> and <span class="math inline">\(\rho_{UY}\)</span>.</p>
</div>
Parallel processing to add a little zip to power simulations (and other replication studies)
https://www.rdatagen.net/post/parallel-processing-to-add-a-little-zip-to-power-simulations/
Mon, 10 Dec 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/parallel-processing-to-add-a-little-zip-to-power-simulations/<p>It’s always nice to be able to speed things up a bit. My <a href="https://www.rdatagen.net/post/first-blog-entry/">first blog post ever</a> described an approach using <code>Rcpp</code> to make huge improvements in a particularly intensive computational process. Here, I want to show how simple it is to speed things up by using the R package <code>parallel</code> and its function <code>mclapply</code>. I’ve been using this function more and more, so I want to explicitly demonstrate it in case any one is wondering.</p>
<p>I’m using a very simple power calculation as the motivating example here, but parallel processing can be useful in any problem where multiple replications are required. Monte Carlo simulation for experimentation and bootstrapping for variance estimation are other cases where computation times can grow long particularly fast.</p>
<div id="a-simple-two-sample-experiment" class="section level3">
<h3>A simple, two-sample experiment</h3>
<p>In this example, we are interested in estimating the probability that an experiment will show some sort of treatment effect given that there <em>actually is an effect</em> - in other words, the power. Here, I am comparing two group means with an unknown but true difference of 2.7; the standard deviation within each group is 5.0. Furthermore, we know we will be limited to a sample size of 100.</p>
<p>Here is the straightforward data generation process: (1) create 100 individual records, (2) assign 50 to treatment (<em>rx</em>) and 50 to control, and (3) generate an outcome <span class="math inline">\(y\)</span> for each individual, with <span class="math inline">\(\bar{y}_{rx=0} = 10.0\)</span> and <span class="math inline">\(\bar{y}_{rx=1} = 12.7\)</span>, both with standard deviation <span class="math inline">\(5\)</span>.</p>
<pre class="r"><code>set.seed(2827129)
defA <- defDataAdd(varname = "y", formula ="10 + rx*2.7", variance = 25)
DT <- genData(100)
DT <- trtAssign(DT, grpName = "rx")
DX <- addColumns(defA, DT)
ggplot(data = DX, aes(factor(rx), y)) +
geom_boxplot(fill = "red", alpha = .5) +
xlab("rx") +
theme(panel.grid = element_blank())</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-12-10-parallel-processing-to-add-a-little-zip-to-power-simulations_files/figure-html/unnamed-chunk-2-1.png" width="360" /></p>
<p>A simple linear regression model can be used to compare the group means for this particular data set. In this case, since <span class="math inline">\(p < 0.05\)</span>, we would conclude that the treatment effect is indeed different from <span class="math inline">\(0\)</span>. However, in other samples, this will not necessarily be the case.</p>
<pre class="r"><code>rndTidy(lm(y ~ rx, data = DX))</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1: (Intercept) 9.8 0.72 13.7 0.00
## 2: rx 2.2 1.01 2.2 0.03</code></pre>
</div>
<div id="the-for-loop" class="section level3">
<h3>The <em>for</em> loop</h3>
<p>The single sample above yielded a <span class="math inline">\(p < 0.05\)</span>. The question is, would this be a rare occurrence in a collection of related experiments? That is, if we repeated the experiment over and over again, what proportion of the time would <span class="math inline">\(p < 0.05\)</span>? To find this out, we can repeatedly draw from the same distributions and estimate the p-value for each draw. (In this simple power analysis, we would normally use an analytic solution (i.e., an equation), because that is obviously much faster; but the analytic solution is not always so straightforward, or even available.)</p>
<p>To facilitate this replication process, it is often easier to create a function that both generates the data and provides the estimate that is needed (in this case, the <em>p-value</em>). That is the purpose of the function <code>genAndEst</code>:</p>
<pre class="r"><code>genAndEst <- function(def, dx) {
DX <- addColumns(def, dx)
coef(summary(lm(y ~ rx, data = DX)))["rx", "Pr(>|t|)"]
}</code></pre>
<p>Just to show that this function does indeed provide the same <em>p-value</em> as before, we can call it using the same seed.</p>
<pre class="r"><code>set.seed(2827129)
DT <- genData(100)
DT <- trtAssign(DT, grpName = "rx")
(pvalue <- genAndEst(defA, DT))</code></pre>
<pre><code>## [1] 0.029</code></pre>
<p>OK - now we are ready to estimate the power, using 2500 replications. Each time, we store the result in a vector called <code>pvals</code>. After the replications have been completed, we calculate the proportion of replications where the p-value was indeed below the <span class="math inline">\(5\%\)</span> threshold.</p>
<pre class="r"><code>forPower <- function(def, dx, reps) {
pvals <- vector("numeric", reps)
for (i in 1:reps) {
pvals[i] <- genAndEst(def, dx)
}
mean(pvals < 0.05)
}
forPower(defA, DT, reps = 2500)</code></pre>
<pre><code>## [1] 0.77</code></pre>
<p>The estimated power is 0.77. That is, given the underlying data generating process, we can expect to find a significant result <span class="math inline">\(77\%\)</span> of the times we conduct the experiment.</p>
<p>As an aside, here is the R function <code>power.t.test</code>, which uses the analytic (formulaic) approach:</p>
<pre class="r"><code>power.t.test(50, 2.7, 5)</code></pre>
<pre><code>##
## Two-sample t test power calculation
##
## n = 50
## delta = 2.7
## sd = 5
## sig.level = 0.05
## power = 0.76
## alternative = two.sided
##
## NOTE: n is number in *each* group</code></pre>
<p>Reading along here, you can’t tell how much time the <em>for</em> loop took on my MacBook Pro. It was not exactly zippy, maybe 5 seconds or so. (The result from <code>power.t.test</code> was instantaneous.)</p>
</div>
<div id="lapply" class="section level3">
<h3><em>lapply</em></h3>
<p>The R function <code>lapply</code> offers a second approach that might be simpler to code, but maybe less intuitive to understand. The whole replication process can be coded with a single call to <code>lapply</code>. This call also references the <code>genAndEst</code> function.</p>
<p>In this application of <code>lapply</code>, the argument <span class="math inline">\(X\)</span> is really a dummy argument, as the function call in argument <span class="math inline">\(FUN\)</span> essentially ignores the argument <span class="math inline">\(x\)</span>. <code>lapply</code> executes the function for each element of the vector <span class="math inline">\(X\)</span>; in this case, the function will be executed <span class="math inline">\(n=\text{length}(X)\)</span> times. That is, we get <span class="math inline">\(n\)</span> replications of the function <code>genAndEst</code>, just as we did with the <em>for</em> loop.</p>
<pre class="r"><code>lappPower <- function(def, dx, reps = 1000) {
plist <- lapply(X = 1:reps, FUN = function(x) genAndEst(def, dx))
mean(unlist(plist) < 0.05)
}
lappPower(defA, DT, 2500)</code></pre>
<pre><code>## [1] 0.75</code></pre>
<p>The power estimate is quite close to the initial <em>for</em> loop replication and the analytic solution. However, in this case, it did not appear to provide any time savings, taking about 5 seconds as well.</p>
</div>
<div id="mclapply" class="section level3">
<h3><em>mclapply</em></h3>
<p>The final approach here is the <code>mclapply</code> function - or multi-core lapply. The syntax is almost identical to <code>lapply</code>, but the speed is not. It seems like it took about 2 or 3 seconds to do 2500 replications.</p>
<pre class="r"><code>library(parallel)
mclPower <- function(def, dx, reps) {
plist <- mclapply(1:reps, function(x) genAndEst(def, dx), mc.cores = 4)
mean(unlist(plist) < 0.05)
}
mclPower(defA, DT, 2500)</code></pre>
<pre><code>## [1] 0.75</code></pre>
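<p>One caveat worth noting (my addition, not in the original post): with <code>mclapply</code>, each child process draws its own random numbers, so a call to <code>set.seed</code> alone does not make the results reproducible. Switching to the parallel-safe L’Ecuyer-CMRG generator before seeding does, provided <code>mc.cores</code> is held fixed:</p>
<pre class="r"><code>## reproducible parallel replications: use the L'Ecuyer-CMRG RNG streams,
## which mclapply uses to seed each child process deterministically
RNGkind("L'Ecuyer-CMRG")
set.seed(2827129)
plist <- mclapply(1:2500, function(x) genAndEst(defA, DT), mc.cores = 4)
mean(unlist(plist) < 0.05)</code></pre>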
</div>
<div id="benchmarking-the-processing-times" class="section level3">
<h3>Benchmarking the processing times</h3>
<p>You’ve had to take my word about the relative processing times. Here, I use package <code>microbenchmark</code> to compare the three approaches (leaving out the analytic solution, because it is far, far superior in this case). This benchmarking process actually does 100 replications of each approach. And each replication involves 1500 <em>p-value estimates</em>. So, the benchmark takes quite a while on my laptop:</p>
<pre class="r"><code>library(microbenchmark)
m1500 <- microbenchmark(for_loop = forPower(defA, DT, 1500),
lapply = lappPower(defA, DT, 1500),
mclapply = mclPower(defA, DT, 1500),
times = 100L
)</code></pre>
<p>The results of the benchmark are plotted here, with each of the 100 benchmark calls shown for each method, as well as the average in red. My guesstimates of the processing times were not so far off, and it looks like the parallel processing on my laptop reduces the processing times by about <span class="math inline">\(50\%\)</span>. In my work more generally, I have found this to be typical, and when the computation requirements are more burdensome, this reduction can really be a substantial time saver.</p>
<p><img src="https://www.rdatagen.net/post/2018-12-10-parallel-processing-to-add-a-little-zip-to-power-simulations_files/figure-html/unnamed-chunk-11-1.png" width="288" /></p>
</div>
Horses for courses, or to each model its own (causal effect)
https://www.rdatagen.net/post/different-models-estimate-different-causal-effects-part-ii/
Wed, 28 Nov 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/different-models-estimate-different-causal-effects-part-ii/<p>In my previous <a href="https://www.rdatagen.net/post/generating-data-to-explore-the-myriad-causal-effects/">post</a>, I described a (relatively) simple way to simulate observational data in order to compare different methods to estimate the causal effect of some exposure or treatment on an outcome. The underlying data generating process (DGP) included a possibly unmeasured confounder and an instrumental variable. (If you haven’t already, you should probably take a quick <a href="https://www.rdatagen.net/post/generating-data-to-explore-the-myriad-causal-effects/">look</a>.)</p>
<p>A key point in considering causal effect estimation is that the average causal effect depends on the individuals included in the average. If we are talking about the causal effect for the population - that is, comparing the average outcome if <em>everyone</em> in the population received treatment against the average outcome if <em>no one</em> in the population received treatment - then we are interested in the average causal effect (ACE).</p>
<p>However, if we have an instrument, and we are talking about <em>only the compliers</em> (those who don’t get the treatment when <em>not</em> encouraged but do get it when they <em>are</em> encouraged) - then we will be measuring the complier average causal effect (CACE). The CACE is a comparison of the average outcome when <em>all compliers</em> receive the treatment with the average outcome when <em>none of the compliers</em> receive the treatment.</p>
<p>And the third causal effect I will consider here is the average causal effect for the treated (ACT). This population is defined by those who actually received the treatment (regardless of instrument or complier status). Just like the other causal effects, the ACT is a comparison of the average outcome when all those who were actually treated did get treatment (this is actually what we observe) with the average outcome if all those who were actually treated didn’t get the treatment (the counterfactual of the treated).</p>
<p>As we will see in short order, three different estimation methods using (almost) the same data set provide estimates for each of these three different causal estimands.</p>
<div id="the-data-generating-process" class="section level3">
<h3>The data generating process</h3>
<p>For the purposes of this illustration, I am generating data with heterogeneous causal effects that depend on a measured or unmeasured underlying health status <span class="math inline">\(U\)</span>. (I’m skipping over the details of the DGP that I laid out in <a href="https://www.rdatagen.net/post/generating-data-to-explore-the-myriad-causal-effects/">part I</a>.) Higher values of <span class="math inline">\(U\)</span> indicate a sicker patient. Those patients are more likely to have stronger effects, and are more likely to seek treatment (independent of the instrument).</p>
<p>Here is a set of plots that show the causal effects by health status <span class="math inline">\(U\)</span> and various distributions of the causal effects:</p>
<p><img src="https://www.rdatagen.net/post/2018-11-28-different-models-estimate-different-causal-effects-part-ii_files/figure-html/unnamed-chunk-2-1.png" width="1056" /></p>
</div>
<div id="instrumental-variable" class="section level3">
<h3>Instrumental variable</h3>
<p>First up is IV estimation. The two-stage least squares regression method has been implemented in the R package <code>ivpack</code>. In case you didn’t check out the IV reference last time, here is an excellent <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4201653/">tutorial</a> that describes IV methods in great, accessible detail. The model specification requires the intervention or exposure variable (in this case <span class="math inline">\(T\)</span>) and the instrument (<span class="math inline">\(A\)</span>).</p>
<pre class="r"><code>library(ivpack)
ivmodel <- ivreg(formula = Y ~ T | A, data = DT)
broom::tidy(ivmodel)</code></pre>
<pre><code>## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.30 0.0902 14.4 6.11e-43
## 2 T 1.52 0.219 6.92 8.08e-12</code></pre>
<p>The causal effect that IV methods estimate is often called the local average treatment effect (LATE), which is just another way to talk about the CACE. Essentially, IV is estimating the causal effect for people whose behavior is modified (or would be modified) by the instrument. If we calculate the CACE using the (unobservable) potential outcomes data for the compliers, the estimate is quite close to the IV estimate of 1.52:</p>
<pre class="r"><code>DT[fS == "Complier", mean(Y1 - Y0)]</code></pre>
<pre><code>## [1] 1.53</code></pre>
</div>
<div id="propensity-score-matching" class="section level3">
<h3>Propensity score matching</h3>
<p>If we were somehow able to measure <span class="math inline">\(U\)</span>, the underlying health status, we would be in a position to estimate the average causal effect for the treated, what I have been calling ACT, using propensity score matching. The idea here is to create a comparison group from the untreated sample that looks similar to the treated in every way except for treatment. This control is designed to be the counterfactual for the treated.</p>
<p>One way to do this is by matching on the propensity score - the probability of treatment. (See this <a href="https://www.tandfonline.com/doi/abs/10.1080/00273171.2011.568786">article</a> on propensity score methods for a really nice overview on the topic.)</p>
<p>To estimate the probability of treatment, we fit a “treatment” model, in this case a logistic generalized linear model since the treatment is binary. From this model, we can generate a predicted value for each individual. We can use software, in this case the R package <code>Matching</code>, to find individuals in the untreated group who share the exact or very similar propensity for treatment. Actually in this case, I will “match with replacement” so that while each treated individual will be included once, some controls might be matched with more than one treated (and those that are included repeatedly will be counted multiple times in the data).</p>
<p>It turns out that when we do this, the two groups will be balanced on everything that matters. In this case, the “everything” that matters is only health status <span class="math inline">\(U\)</span>. (We actually could have matched directly on <span class="math inline">\(U\)</span> here, but I wanted to show propensity score matching, which is useful when there are many confounders that matter, and matching on them separately would be extremely difficult or impossible.)</p>
<p>Once we have the two groups, all we need to do is take the difference in means between the two groups, which gives us an estimate of the ACT. We could use bootstrap methods to estimate the standard error. Below, we will use Monte Carlo simulation, which will give us a sense of the variability.</p>
<pre class="r"><code>library(Matching)
# Treatment model and ps estimation
glm.fit <- glm(T ~ U, family=binomial, data=DT)
DT$ps = predict(glm.fit,type="response")
setkey(DT, T, id)
TR = DT$T
X = DT$ps
# Matching with replacement
matches <- Match(Y = NULL, Tr = TR, X = X, ties = FALSE, replace = TRUE)
# Select matches from original dataset
dt.match <- DT[c(matches$index.treated, matches$index.control)]
# ACT estimate
dt.match[T == 1, mean(Y)] - dt.match[T == 0, mean(Y)]</code></pre>
<pre><code>## [1] 1.79</code></pre>
<p>Once again, the matching estimate is quite close to the “true” value of the ACT calculated using the potential outcomes:</p>
<pre class="r"><code>DT[T == 1, mean(Y1 - Y0)]</code></pre>
<pre><code>## [1] 1.77</code></pre>
</div>
<div id="inverse-probability-weighting" class="section level3">
<h3>Inverse probability weighting</h3>
<p>This last method also uses the propensity score, but as a weight, rather than for the purposes of matching. Each individual weight is the inverse probability of receiving the treatment they actually received. (I wrote a series of posts on IPW; you can look <a href="https://www.rdatagen.net/post/inverse-probability-weighting-when-the-outcome-is-binary/">here</a> if you want to see a bit more.)</p>
<p>To implement IPW in this simple case, we just calculate the weight based on the propensity score, and use that weight in a simple linear regression model:</p>
<pre class="r"><code>DT[, ipw := 1 / ((ps * T) + ( (1 - ps) * (1 - T) ))]
lm.ipw <- lm(Y ~ T, weights = DT$ipw, data = DT)
broom::tidy(lm.ipw)</code></pre>
<pre><code>## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.21 0.0787 15.3 1.06e-47
## 2 T 1.02 0.110 9.28 1.04e-19</code></pre>
<p>The IPW estimate is quite close to the estimate of the average causal effect (ACE). That is, the IPW is the marginal average:</p>
<pre class="r"><code>DT[, mean(Y1 - Y0)]</code></pre>
<pre><code>## [1] 1.1</code></pre>
</div>
<div id="randomized-clinical-trial" class="section level3">
<h3>Randomized clinical trial</h3>
<p>If we can make the assumption that <span class="math inline">\(A\)</span> is not the instrument but is the actual randomization <em>and</em> that everyone is a complier (i.e. everyone follows the randomized protocol), then the estimate we get from comparing treated with controls will also be quite close to the ACE of 1.1. So, the randomized trial in its ideal execution provides an estimate of the average causal effect for the entire sample.</p>
<pre class="r"><code>randtrial <- lm(Y.r ~ A, data = DT)
broom::tidy(randtrial)</code></pre>
<pre><code>## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.22 0.0765 15.9 5.35e-51
## 2 A 1.09 0.108 10.1 8.26e-23</code></pre>
</div>
<div id="intention-to-treat-from-rct" class="section level3">
<h3>Intention-to-treat from RCT</h3>
<p>Typically, however, there isn’t perfect compliance in a randomized trial, so randomization acts more like strong encouragement. Such studies are usually analyzed using an intention-to-treat approach, comparing groups according to their randomized assignment <em>as if</em> the protocol had been followed perfectly. This method is considered conservative (in the sense that the estimated effect is closer to 0 than the true ACE is), because many of those assumed to have been treated were not actually treated, and <em>vice versa</em>. In this case, the estimated ITT quantity is quite a bit smaller than the estimate from a perfectly executed RCT (which is the ACE):</p>
<pre class="r"><code>itt.fit <- lm(Y ~ A, data = DT)
broom::tidy(itt.fit)</code></pre>
<pre><code>## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.50 0.0821 18.3 1.34e-64
## 2 A 0.659 0.116 5.68 1.76e- 8</code></pre>
</div>
<div id="per-protocol-analysis-from-rct" class="section level3">
<h3>Per protocol analysis from RCT</h3>
<p>Yet another approach to analyzing the data is to consider only those cases that followed protocol. So, for those randomized to treatment, we would look only at those who actually were treated. And for those randomized to control, we would only look at those who did not get treatment. It is unclear what this is actually measuring since the two groups are not comparable: the treated group includes both compliers and always-takers, whereas the control group includes both compliers and never-takers. If always-takers have larger causal effects on average and never-takers have smaller causal effects on average, the per protocol estimate will be larger than the average causal effect (ACE), and will not represent any other obvious quantity.</p>
<p>And with this data set, this is certainly the case:</p>
<pre class="r"><code>DT[A == 1 & T == 1, mean(Y)] - DT[A == 0 & T == 0, mean(Y)] </code></pre>
<pre><code>## [1] 2.22</code></pre>
</div>
<div id="monte-carlo-simulation" class="section level3">
<h3>Monte Carlo simulation</h3>
<p>I leave you with a figure that shows the point estimates and 95% confidence intervals for each of these methods. Based on 1000 replications of the data set, this series of plots underscores the relationship of the methods to the various causal estimands.</p>
<p><img src="https://www.rdatagen.net/post/2018-11-28-different-models-estimate-different-causal-effects-part-ii_files/figure-html/unnamed-chunk-13-1.png" width="672" /></p>
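<p>The replication machinery behind a figure like this is straightforward. Here is a rough, self-contained sketch of the Monte Carlo idea, using a much simpler base-R data generating process than the <code>simstudy</code>-based DGP from part I (the parameter values are made up for illustration): regenerate the data many times, re-estimate an effect each time, and summarize the distribution of estimates.</p>

```r
# Illustrative Monte Carlo sketch (base R only; not the original code).
# Each replication generates a fresh data set with an instrument A,
# potential treatments T0/T1 (no deniers), a confounder U, and a true
# treatment effect of 1; then it computes two of the estimators above.
set.seed(2018)

oneReplication <- function(n = 500) {
  U  <- runif(n, -0.5, 0.5)                    # unmeasured health status
  A  <- rbinom(n, 1, 0.5)                      # randomized encouragement
  T0 <- rbinom(n, 1, plogis(-2 + 4 * U))       # treatment if not encouraged
  T1 <- pmax(T0, rbinom(n, 1, plogis(4 * U)))  # pmax rules out deniers
  Tr <- ifelse(A == 1, T1, T0)                 # observed treatment
  Y  <- 1 + Tr + 2 * U + rnorm(n, sd = 0.5)    # true effect of Tr is 1

  c(itt = unname(coef(lm(Y ~ A))["A"]),        # intention-to-treat
    pp  = mean(Y[A == 1 & Tr == 1]) -
          mean(Y[A == 0 & Tr == 0]))           # per protocol
}

estimates <- t(replicate(1000, oneReplication()))
round(apply(estimates, 2, quantile, probs = c(0.025, 0.5, 0.975)), 2)
```

<p>Even in this toy version, the ITT estimates cluster below the true effect while the per protocol estimates cluster above it, mirroring the pattern discussed in the sections above.</p>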
</div>
Generating data to explore the myriad causal effects that can be estimated in observational data analysis
https://www.rdatagen.net/post/generating-data-to-explore-the-myriad-causal-effects/
Tue, 20 Nov 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/generating-data-to-explore-the-myriad-causal-effects/<p>I’ve been inspired by two recent talks describing the challenges of using instrumental variable (IV) methods. IV methods are used to estimate the causal effects of an exposure or intervention when there is unmeasured confounding. This estimated causal effect is very specific: the complier average causal effect (CACE). But, the CACE is just one of several possible causal estimands that we might be interested in. For example, there’s the average causal effect (ACE) that represents a population average (not just based on the subset of compliers). Or there’s the average causal effect for the exposed or treated (ACT) that allows for the fact that the exposed could be different from the unexposed.</p>
<p>I thought it would be illuminating to analyze a single data set using different causal inference methods, including IV as well as propensity score matching and inverse probability weighting. Each of these methods targets different causal estimands, which may or may not be equivalent depending on the subgroup-level causal effects and underlying population distribution of those subgroups.</p>
<p>This is the first of a two-part post. In this first part, I am focusing entirely on the data generation process (DGP). In the follow-up, I will get to the model estimation.</p>
<div id="underlying-assumptions-of-the-dgp" class="section level3">
<h3>Underlying assumptions of the DGP</h3>
<p>Since the motivation here is instrumental variable analysis, it seems natural that the data generation process include a possible instrument. (Once again, I am going to refer to elsewhere in case you want more details on the theory and estimation of IV models. Here is an excellent in-depth tutorial by <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4201653/"><em>Baiocchi et al</em></a> that provides great background. I’ve even touched on the topic of CACE in an earlier series of <a href="https://www.rdatagen.net/post/cace-explored/">posts</a>. Certainly, there is no lack of discussion on this topic, as a quick search around the internet will make readily obvious.)</p>
<p>The figure below is a variation on the directed acyclic graph (DAG) that is often very useful in laying out causal assumptions of a DGP. This particular figure is a type of SWIG: single world intervention graph. SWIGs, <a href="https://pdfs.semanticscholar.org/07bb/cb458109d2663acc0d098e8913892389a2a7.pdf">developed by Robins and Richardson</a>, fuse the worlds of potential outcomes and DAGs.</p>
<p><img src="https://www.rdatagen.net/img/post-ivdgp/IV_SWIT.png" /></p>
<p>Important things to note here:</p>
<ol style="list-style-type: decimal">
<li><p>There is an instrumental variable <span class="math inline">\(A\)</span> that has a direct causal relationship only to the exposure of interest, <span class="math inline">\(T\)</span>. If the exposure is a particular medical intervention, think of the instrument as some kind of encouragement to get that treatment. Some people get the encouragement, others don’t - though on average folks who are encouraged are no different from folks who are not (at least not in ways that relate to the outcome.)</p></li>
<li><p>There is a confounder <span class="math inline">\(U\)</span>, possibly unmeasured, that is related both to potential outcomes and the exposure, but not to the encouragement (the instrument)! In the example below, we conceive of <span class="math inline">\(U\)</span> as an underlying health status.</p></li>
<li><p>Exposure variable <span class="math inline">\(T\)</span> (that, in this case, is binary, just to keep things simpler) indicates whether a person gets the treatment or not.</p></li>
<li><p>Each individual will have two <em>potential treatments</em> <span class="math inline">\(T^0\)</span> and <span class="math inline">\(T^1\)</span>, where <span class="math inline">\(T^0\)</span> is the treatment when there is no encouragement (i.e. A = 0), and <span class="math inline">\(T^1\)</span> is the treatment when <span class="math inline">\(A = 1\)</span>. For any individual, we actually only observe one of these treatments (depending on the actual value of <span class="math inline">\(A\)</span>). The population of interest consists of <strong>always-takers</strong>, <strong>compliers</strong>, and <strong>never-takers</strong>. <em>Never-takers</em> always reject the treatment regardless of whether or not they get encouragement - that is, <span class="math inline">\(T^0 = T^1 = 0\)</span>. <em>Compliers</em> only seek out the treatment when they are encouraged, otherwise they don’t: <span class="math inline">\(T^0 = 0\)</span> and <span class="math inline">\(T^1 = 1\)</span>. And <em>always-takers</em> always (of course) seek out the treatment: <span class="math inline">\(T^0 = T^1 = 1\)</span>. (In order for the model to be identifiable, we need to make a not-so-crazy assumption that there are no so-called <em>deniers</em>, where <span class="math inline">\(T^0 = 1\)</span> and <span class="math inline">\(T^1 = 0\)</span>.) An individual may have a different complier status depending on the instrument and exposure (<em>i.e.</em>, one person might be a never-taker in one scenario but a complier in another). In this simulation, larger values of the confounder <span class="math inline">\(U\)</span> will increase <span class="math inline">\(P(T^a = 1)\)</span> for both <span class="math inline">\(a \in (0,1)\)</span>.</p></li>
<li><p>Each individual will have two <em>potential outcomes</em>, only one of which is observed. <span class="math inline">\(Y_i^0\)</span> is the outcome for person <span class="math inline">\(i\)</span> when they are unexposed or do not receive the treatment. <span class="math inline">\(Y_i^1\)</span> is the outcome for that same person when they are exposed or do receive the treatment. In this case, the confounder <span class="math inline">\(U\)</span> can affect the potential outcomes. (This diagram is technically a SWIT, which is a template, since I have generically referred to the potential treatment <span class="math inline">\(T^a\)</span> and potential outcome <span class="math inline">\(Y^t\)</span>.)</p></li>
<li><p>Not shown in this diagram are the observed <span class="math inline">\(T_i\)</span> and <span class="math inline">\(Y_i\)</span>; we assume that <span class="math inline">\(T_i = (T_i^a | A = a)\)</span> and <span class="math inline">\(Y_i = (Y_i^t | T = t)\)</span>.</p></li>
<li><p>Also not shown on the graph is the causal estimand of an exposure for individual <span class="math inline">\(i\)</span>, which can be defined as <span class="math inline">\(CE_i \equiv Y^1_i - Y^0_i\)</span>. We can calculate the average causal effect, <span class="math inline">\(E[CE]\)</span>, for the sample as a whole as well as for subgroups.</p></li>
</ol>
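<p>The compliance classes described in point 4 amount to a simple lookup on the pair of potential treatments. This small helper is purely illustrative (it is not part of the data generation code below):</p>

```r
# Illustrative only: map the potential treatments (T0, T1) to the
# compliance classes described above. The "Denier" class is assumed
# away for identifiability, but is included here for completeness.
complierStatus <- function(T0, T1) {
  ifelse(T0 == 0 & T1 == 0, "Never-taker",
  ifelse(T0 == 0 & T1 == 1, "Complier",
  ifelse(T0 == 1 & T1 == 1, "Always-taker",
         "Denier")))
}

complierStatus(T0 = c(0, 0, 1, 1), T1 = c(0, 1, 1, 0))
## [1] "Never-taker"  "Complier"     "Always-taker" "Denier"
```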
</div>
<div id="dgp-for-potential-outcomes" class="section level3">
<h3>DGP for potential outcomes</h3>
<p>The workhorse of this data generating process is a logistic sigmoid function that represents the mean potential outcome <span class="math inline">\(Y^t\)</span> at each value of <span class="math inline">\(u\)</span>. This allows us to easily generate homogeneous or heterogeneous causal effects. The function has four parameters, <span class="math inline">\(M\)</span>, <span class="math inline">\(\gamma\)</span>, <span class="math inline">\(\delta\)</span>, and <span class="math inline">\(\alpha\)</span>:</p>
<p><span class="math display">\[
Y^t = f(u) = M/[1 + exp(-\gamma(u - \delta))] + \alpha,
\]</span>
where <span class="math inline">\(M\)</span> is the maximum of the function (assuming the minimum is <span class="math inline">\(0\)</span>), <span class="math inline">\(\gamma\)</span> is the steepness of the curve, <span class="math inline">\(\delta\)</span> is the inflection point of the curve, and <span class="math inline">\(\alpha\)</span> is a vertical shift of the entire curve. This function is easily implemented in R:</p>
<pre class="r"><code>fYt <- function(x, max, grad, inflect = 0, offset = 0) {
( max / (1 + exp( -grad * (x - inflect) ) ) ) + offset
}</code></pre>
<p>Here is a single curve based on an arbitrary set of parameters:</p>
<pre class="r"><code>ggplot(data = data.frame(x = 0), mapping = aes(x = x)) +
stat_function(fun = fYt, size = 2,
args = list(max = 1.5, grad = 5, inflect = 0.2)) +
xlim(-1.5, 1.5)</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-11-20-generating-data-to-explore-the-myriad-causal-effects_files/figure-html/unnamed-chunk-3-1.png" width="672" /></p>
<p>The figures below show the mean of the potential outcomes <span class="math inline">\(Y^0\)</span> and <span class="math inline">\(Y^1\)</span> under two different scenarios. On the left, the causal effect at each level of <span class="math inline">\(u\)</span> is constant, and on the right, the causal effect changes over the different values of <span class="math inline">\(u\)</span>, increasing rapidly when <span class="math inline">\(0 < u < 0.2\)</span>.</p>
<p><img src="https://www.rdatagen.net/post/2018-11-20-generating-data-to-explore-the-myriad-causal-effects_files/figure-html/unnamed-chunk-5-1.png" width="1056" /></p>
</div>
<div id="homogeneous-causal-effect" class="section level3">
<h3>Homogeneous causal effect</h3>
<p>Here’s a closer look at the different causal effects under the first scenario of homogeneous causal effects across values of <span class="math inline">\(u\)</span>, based on generated data. The data definitions are provided in three steps. In the first step, the confounder <span class="math inline">\(U\)</span> is generated. Think of this as health status, which can take on values ranging from <span class="math inline">\(-0.5\)</span> to <span class="math inline">\(0.5\)</span>, where lower scores indicate worse health.</p>
<p>Next up are the definitions of the potential outcomes of treatment and outcome, both of which are dependent on the unmeasured confounder.</p>
<p>(Though technically not a definition step, the instrument assignment (variable <span class="math inline">\(A\)</span>) is generated later using <code>trtAssign</code>.)</p>
<p>In the final steps, we generate the observed treatment <span class="math inline">\(T\)</span> (a function of both <span class="math inline">\(A\)</span> and complier status <span class="math inline">\(S\)</span>), and observed outcome <span class="math inline">\(Y\)</span> (which is determined by <span class="math inline">\(T\)</span>). A complier status is determined based on the potential outcomes of treatment.</p>
<pre class="r"><code>library(simstudy)
### Potential treatments U and outcomes Y
def <- defData(varname = "U", formula = "-0.5;0.5",
dist = "uniform")
def <- defData(def, varname = "T0",
formula = "-2 + 4 * U",
dist = "binary", link = "logit")
def <- defData(def, varname = "T1x",
formula = "4 * U ",
dist = "binary", link = "logit")
# This prevents any deniers:
def <- defData(def, varname = "T1",
formula = "(T0 == 0) * T1x + (T0 == 1) * 1",
dist = "nonrandom")
def <- defData(def, varname = "Y0",
formula = "fYt(U, 5, 15, 0.02)",
variance = 0.25)
def <- defData(def, varname = "Y1",
formula = "fYt(U, 5.0, 15, 0.02, 1)",
variance = 0.25)
### Observed treatments
defA <- defDataAdd(varname = "T",
formula = "(A == 0) * T0 + (A == 1) * T1")
defA <- defDataAdd(defA, varname = "Y",
formula = "(T == 0) * Y0 + (T == 1) * Y1",
dist = "nonrandom")
defA <- defDataAdd(defA, varname = "Y.r",
formula = "(A == 0) * Y0 + (A == 1) * Y1",
dist = "nonrandom")
### Complier status
defC <- defCondition(condition = "T0 == 0 & T1 == 0", formula = 1,
dist = "nonrandom")
defC <- defCondition(defC, condition = "T0 == 0 & T1 == 1", formula = 2,
dist = "nonrandom")
defC <- defCondition(defC, condition = "T0 == 1 & T1 == 1", formula = 3,
dist = "nonrandom")</code></pre>
<p>Once all the definitions are set, it is quite simple to generate the data:</p>
<pre class="r"><code>set.seed(383726)
# Step 1 - generate U and potential outcomes for T and Y
dx <- genData(500, def)
# Step 2 - randomly assign instrument
dx <- trtAssign(dx, nTrt = 2, grpName = "A" )
# Step 3 - generate observed T and Y
dx <- addColumns(defA, dx)
# Step 4 - determine complier status
dx <- addCondition(defC, dx, "S")
dx <- genFactor(dx, "S", labels = c("Never", "Complier", "Always"))</code></pre>
</div>
<div id="looking-at-the-data" class="section level3">
<h3>Looking at the data</h3>
<p>Here are a few records from the generated dataset:</p>
<pre class="r"><code>dx</code></pre>
<pre><code>## id S A U T0 T1x T1 Y0 Y1 T Y Y.r fS
## 1: 1 1 1 0.282 0 0 0 5.5636 6.01 0 5.5636 6.01 Never
## 2: 2 3 0 0.405 1 1 1 4.5534 5.83 1 5.8301 4.55 Always
## 3: 3 3 1 0.487 1 1 1 6.0098 5.82 1 5.8196 5.82 Always
## 4: 4 2 1 0.498 0 1 1 5.2695 6.43 1 6.4276 6.43 Complier
## 5: 5 1 1 -0.486 0 0 0 0.0088 1.02 0 0.0088 1.02 Never
## ---
## 496: 496 3 0 -0.180 1 0 1 0.8384 1.10 1 1.0966 0.84 Always
## 497: 497 3 1 0.154 1 1 1 4.9118 5.46 1 5.4585 5.46 Always
## 498: 498 1 1 0.333 0 0 0 5.4800 5.46 0 5.4800 5.46 Never
## 499: 499 1 1 0.049 0 0 0 3.4075 5.15 0 3.4075 5.15 Never
## 500: 500 1 1 -0.159 0 0 0 0.4278 0.96 0 0.4278 0.96 Never</code></pre>
<p>The various average causal effects, starting with the (marginal) average causal effect and ending with the average causal effect for those treated, are all close to <span class="math inline">\(1\)</span>:</p>
<pre class="r"><code>ACE <- dx[, mean(Y1 - Y0)]
AACE <- dx[fS == "Always", mean(Y1 - Y0)]
CACE <- dx[fS == "Complier", mean(Y1 - Y0)]
NACE <- dx[fS == "Never", mean(Y1 - Y0)]
ACT <- dx[T == 1, mean(Y1 - Y0)]</code></pre>
<pre><code>## ceType ce
## 1: ACE 0.97
## 2: AACE 0.96
## 3: CACE 1.00
## 4: NACE 0.96
## 5: ACT 1.05</code></pre>
<p>Here is a visual summary of the generated data. The upper left shows the underlying data generating functions for the potential outcomes and the upper right plot shows the various average causal effects: average causal effect for the population (ACE), average causal effect for always-takers (AACE), complier average causal effect (CACE), average causal effect for never-takers (NACE), and the average causal effect for the treated (ACT).</p>
<p>The true individual-specific causal effects color-coded based on complier status (that we could never observe in the real world, but we can here in simulation world) are on the bottom left, and the true individual causal effects for those who received treatment are on the bottom right. These figures are only remarkable in that all average causal effects and individual causal effects are close to <span class="math inline">\(1\)</span>, reflecting the homogeneous causal effect data generating process.</p>
<p><img src="https://www.rdatagen.net/post/2018-11-20-generating-data-to-explore-the-myriad-causal-effects_files/figure-html/unnamed-chunk-11-1.png" width="1056" /></p>
</div>
<div id="heterogenous-causal-effect-1" class="section level3">
<h3>Heterogeneous causal effect #1</h3>
<p>Here is a set of figures for a heterogeneous data generating process (which can be seen on the upper left). Now, the average causal effects are quite different from each other; in particular, <span class="math inline">\(ACE < CACE < ACT\)</span>. None of these quantities is wrong; they are just estimating the average effect for different groups of people that are characterized by different levels of health status <span class="math inline">\(U\)</span>:</p>
<p><img src="https://www.rdatagen.net/post/2018-11-20-generating-data-to-explore-the-myriad-causal-effects_files/figure-html/unnamed-chunk-12-1.png" width="1056" /></p>
</div>
<div id="heterogenous-causal-effect-2" class="section level3">
<h3>Heterogeneous causal effect #2</h3>
<p>Finally, here is one more scenario, also with heterogeneous causal effects. In this case <span class="math inline">\(ACE \approx CACE\)</span>, but the other effects are quite different, even differing in sign.</p>
<p><img src="https://www.rdatagen.net/post/2018-11-20-generating-data-to-explore-the-myriad-causal-effects_files/figure-html/unnamed-chunk-13-1.png" width="1056" /></p>
</div>
<div id="next-up-estimating-the-causal-effects" class="section level3">
<h3>Next up: estimating the causal effects</h3>
<p>In the second part of this post, I will use this DGP and estimate these effects using various modeling techniques. It will hopefully become apparent that different modeling approaches provide estimates of different causal estimands.</p>
</div>
Causal mediation estimation measures the unobservable
https://www.rdatagen.net/post/causal-mediation/
Tue, 06 Nov 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/causal-mediation/<p>I put together a series of demos for a group of epidemiology students who are studying causal mediation analysis. Since mediation analysis is not always so clear or intuitive, I thought, of course, that going through some examples of simulating data for this process could clarify things a bit.</p>
<p>Quite often we are interested in understanding the relationship between an exposure or intervention on an outcome. Does exposure <span class="math inline">\(A\)</span> (could be randomized or not) have an effect on outcome <span class="math inline">\(Y\)</span>?</p>
<p><img src="https://www.rdatagen.net/img/post-mediation/Model_1.png" /></p>
<p>But sometimes we are interested in understanding <em>more</em> than whether or not <span class="math inline">\(A\)</span> causes or influences <span class="math inline">\(Y\)</span>; we might want to have some insight into the mechanisms <em>underlying</em> that influence. And this is where mediation analysis can be useful. (If you want to delve deeply into the topic, I recommend you check out this <a href="https://global.oup.com/academic/product/explanation-in-causal-inference-9780199325870?cc=us&lang=en&">book</a> by Tyler VanderWeele, or this nice <a href="https://www.mailman.columbia.edu/research/population-health-methods/causal-mediation">website</a> developed at Columbia University.)</p>
<p>In the example here, I am using the simplest possible scenario of an exposure <span class="math inline">\(A\)</span>, a mediator <span class="math inline">\(M\)</span>, and an outcome <span class="math inline">\(Y\)</span>, without any confounding:</p>
<p><img src="https://www.rdatagen.net/img/post-mediation/Model_2.png" /></p>
<p>A key challenge of understanding and conducting a mediation analysis is how we should <em>quantify</em> this concept of mediation. Sure, <span class="math inline">\(A\)</span> has an effect on <span class="math inline">\(M\)</span>, which in turn has an effect on <span class="math inline">\(Y\)</span>, and <span class="math inline">\(A\)</span> also may have an effect on <span class="math inline">\(Y\)</span> through other pathways. But how can we make sense of all of this? One approach, which is a relatively recent development, is to use the <em>potential outcome</em> framework of causal inference to define the various estimands (or quantities) that arise in a mediation analysis. (I draw on a <a href="https://www.jstor.org/stable/41058997?seq=1#metadata_info_tab_contents">paper</a> by Imai, Keele and Yamamoto for the terminology, as there is not complete agreement on what to call various quantities. The estimation methods and software used here are also described in the paper.)</p>
<div id="defining-the-potential-outcomes" class="section level3">
<h3>Defining the potential outcomes</h3>
<p>In an earlier <a href="https://www.rdatagen.net/post/be-careful/">post</a>, I described the concept of potential outcomes. I extend that a bit here to define the quantities we are interested in. In this case, we have two effects of the possible exposure: <span class="math inline">\(M\)</span> and <span class="math inline">\(Y\)</span>. Under this framework, each individual has a potential outcome for each level of <span class="math inline">\(A\)</span> (I am assuming <span class="math inline">\(A\)</span> is binary). So, for the mediator, <span class="math inline">\(M_{i0}\)</span> and <span class="math inline">\(M_{i1}\)</span> are the values of <span class="math inline">\(M\)</span> we would observe for individual <span class="math inline">\(i\)</span> without and with exposure, respectively. That is pretty straightforward. (From here on out, I will remove the subscript <span class="math inline">\(i\)</span>, because it gets a little unwieldy.)</p>
<p>The potential outcomes under <span class="math inline">\(Y\)</span> are less intuitive, as there are four of them. First, there is <span class="math inline">\(Y_{0M_0}\)</span>, which is the potential outcome of <span class="math inline">\(Y\)</span> <em>without</em> exposure for <span class="math inline">\(A\)</span> and whatever the potential outcome for <span class="math inline">\(M\)</span> is <em>without</em> exposure for <span class="math inline">\(A\)</span>. This is what we observe when <span class="math inline">\(A=0\)</span> for an individual. <span class="math inline">\(Y_{1M_1}\)</span> is the potential outcome of <span class="math inline">\(Y\)</span> <em>with</em> exposure for <span class="math inline">\(A\)</span> and whatever the potential outcome for <span class="math inline">\(M\)</span> is <em>with</em> exposure for <span class="math inline">\(A\)</span>. This is what we observe when <span class="math inline">\(A=1\)</span> for an individual. That’s all fine.</p>
<p>But we also have <span class="math inline">\(Y_{0M_1}\)</span>, which can never be observed unless we can intervene on the mediator <span class="math inline">\(M\)</span> somehow. This is the potential outcome of <span class="math inline">\(Y\)</span> <em>without</em> exposure for <span class="math inline">\(A\)</span> and whatever the mediator would have been had the individual been exposed. This potential outcome is controversial, because it is defined across two different universes of exposure to <span class="math inline">\(A\)</span>. Finally, there is <span class="math inline">\(Y_{1M_0}\)</span>. It is analogously defined across two universes, but in reverse.</p>
</div>
<div id="defining-the-causal-mediation-effects-and-direct-effects" class="section level3">
<h3>Defining the causal mediation effects and direct effects</h3>
<p>The estimands or quantities that we are interested in are defined in terms of the potential outcomes. The <strong><em>causal mediation effects</em></strong> for an individual are</p>
<p><span class="math display">\[
\begin{aligned}
CME_0 &= Y_{0M_1} - Y_{0M_0} \\
CME_1 &= Y_{1M_1} - Y_{1M_0},
\end{aligned}
\]</span></p>
<p>and the <strong><em>causal direct effects</em></strong> are</p>
<p><span class="math display">\[
\begin{aligned}
CDE_0 &= Y_{1M_0} - Y_{0M_0} \\
CDE_1 &= Y_{1M_1} - Y_{0M_1}.
\end{aligned}
\]</span></p>
<p>A few important points. (1) Since we are in the world of potential outcomes, we do not observe these quantities for everyone. In fact, we don’t observe these quantities for anyone, since some of the measures are across two universes. (2) The two causal mediation effects do not need to be the same. The same goes for the two causal direct effects. (3) Under a set of pretty strong assumptions related to unmeasured confounding, independence, and consistency (see <a href="https://www.jstor.org/stable/41058997?seq=1#metadata_info_tab_contents"><em>Imai et al</em></a> for the details), the average causal mediation effects and average causal direct effects can be estimated using <em>observed</em> data only. Before I simulate some data to demonstrate all of this, here is the definition for the <strong><em>total causal effect</em></strong> (and its decomposition into mediation and direct effects):</p>
<p><span class="math display">\[
\begin{aligned}
TCE &= Y_{1M_1} - Y_{0M_0} \\
&= CME_1 + CDE_0 \\
&= CME_0 + CDE_1
\end{aligned}
\]</span></p>
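<p>That the decomposition holds follows directly from the definitions, since the terms telescope:</p>
<p><span class="math display">\[
CME_1 + CDE_0 = (Y_{1M_1} - Y_{1M_0}) + (Y_{1M_0} - Y_{0M_0}) = Y_{1M_1} - Y_{0M_0} = TCE,
\]</span></p>
<p>and likewise for <span class="math inline">\(CME_0 + CDE_1\)</span>.</p>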
</div>
<div id="generating-the-data" class="section level3">
<h3>Generating the data</h3>
<p>I’m using the <code>simstudy</code> package to generate the data. I’ll start by generating the binary potential outcomes for the mediator, <span class="math inline">\(M_0\)</span> and <span class="math inline">\(M_1\)</span>, which are correlated in this example. <span class="math inline">\(P(M_1=1) > P(M_0=1)\)</span>, implying that exposure to <span class="math inline">\(A\)</span> does indeed have an effect on <span class="math inline">\(M\)</span>. Note that it is possible that for an individual <span class="math inline">\(M_0 = 1\)</span> and <span class="math inline">\(M_1 = 0\)</span>, so that exposure to <span class="math inline">\(A\)</span> has an effect contrary to what we see in the population generally. (We don’t need to make this assumption in the data generation process; we could force <span class="math inline">\(M_1\)</span> to be 1 if <span class="math inline">\(M_0\)</span> is 1.)</p>
<pre class="r"><code>set.seed(3872672)
dd <- genCorGen(n=5000, nvars = 2, params1 = c(.2, .6),
dist = "binary", rho = .3, corstr = "cs",
wide = TRUE, cnames = c("M0", "M1"))</code></pre>
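<p>As a quick sanity check (my addition, not in the original post), the observed proportions should line up with the marginal probabilities of 0.2 and 0.6 specified in <code>params1</code>:</p>
<pre class="r"><code># proportions with M0 = 1 and M1 = 1 should be close to 0.2 and 0.6
dd[, .(pM0 = mean(M0), pM1 = mean(M1))]</code></pre>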
<p>Observe treatment:</p>
<pre class="r"><code>dd <- trtObserve(dd, 0.6, grpName = "A")</code></pre>
<p>Initial data set:</p>
<pre class="r"><code>dd</code></pre>
<pre><code>## id A M0 M1
## 1: 1 0 0 1
## 2: 2 1 0 0
## 3: 3 1 0 1
## 4: 4 0 1 1
## 5: 5 1 0 0
## ---
## 4996: 4996 1 0 1
## 4997: 4997 0 0 0
## 4998: 4998 1 1 1
## 4999: 4999 1 1 0
## 5000: 5000 0 0 0</code></pre>
<p><span class="math inline">\(Y_{0M_0}\)</span> is a function of <span class="math inline">\(M_0\)</span> and some noise <span class="math inline">\(e_0\)</span>, and <span class="math inline">\(Y_{0M_1}\)</span> is a function of <span class="math inline">\(M_1\)</span> and the same noise (this is not a requirement). However, if <span class="math inline">\(M_0 = M_1\)</span> (i.e. the mediator is not affected by exposure status), then I am setting <span class="math inline">\(Y_{0M_1} = Y_{0M_0}\)</span>. In this case, <span class="math inline">\(CME_0\)</span> for an individual is <span class="math inline">\(2(M_1 - M_0)\)</span>, so <span class="math inline">\(CME_0 \in \{-2, 0, 2\}\)</span>, and the population average <span class="math inline">\(CME_0\)</span> will depend on the mixture of potential outcomes <span class="math inline">\(M_0\)</span> and <span class="math inline">\(M_1\)</span>.</p>
<pre class="r"><code>def <- defDataAdd(varname = "e0", formula = 0,
variance = 1, dist = "normal")
def <- defDataAdd(def, varname = "Y0M0", formula = "2 + M0*2 + e0",
dist = "nonrandom")
def <- defDataAdd(def, varname = "Y0M1", formula = "2 + M1*2 + e0",
variance = 1, dist = "nonrandom")</code></pre>
<p>The same logic holds for <span class="math inline">\(Y_{1M_0}\)</span> and <span class="math inline">\(Y_{1M_1}\)</span>, though at the individual level <span class="math inline">\(CME_1 \in \{-5, 0, 5\}\)</span>:</p>
<pre class="r"><code>def <- defDataAdd(def, varname = "e1", formula = 0,
variance = 1, dist = "normal")
def <- defDataAdd(def, varname = "Y1M0", formula = "8 + M0*5 + e1",
dist = "nonrandom")
def <- defDataAdd(def, varname = "Y1M1", formula = "8 + M1*5 + e1",
dist = "nonrandom")</code></pre>
<p>The <em>observed</em> mediator (<span class="math inline">\(M\)</span>) and outcome (<span class="math inline">\(Y\)</span>) are determined by the observed exposure (<span class="math inline">\(A\)</span>).</p>
<pre class="r"><code>def <- defDataAdd(def, varname = "M",
formula = "(A==0) * M0 + (A==1) * M1", dist = "nonrandom")
def <- defDataAdd(def, varname = "Y",
formula = "(A==0) * Y0M0 + (A==1) * Y1M1", dist = "nonrandom")</code></pre>
<p>Here is the entire data definitions table:</p>
<table class="table table-condensed">
<thead>
<tr>
<th style="text-align:right;">
varname
</th>
<th style="text-align:right;">
formula
</th>
<th style="text-align:right;">
variance
</th>
<th style="text-align:right;">
dist
</th>
<th style="text-align:right;">
link
</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">e0 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">1</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">normal </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">Y0M0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">2 + M0*2 + e0 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">nonrandom</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">Y0M1</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">2 + M1*2 + e0 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">1</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">nonrandom</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">e1 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">1</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">normal </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">Y1M0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">8 + M0*5 + e1 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">nonrandom</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">Y1M1</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">8 + M1*5 + e1 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">nonrandom</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">M </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">(A==0) * M0 + (A==1) * M1 </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">nonrandom</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
<tr>
<td style="text-align:right;">
<span style="font-size: 16px">Y </span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">(A==0) * Y0M0 + (A==1) * Y1M1</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">0</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">nonrandom</span>
</td>
<td style="text-align:right;">
<span style="font-size: 16px">identity</span>
</td>
</tr>
</tbody>
</table>
<p>Based on the parameters used to generate the data, we can calculate the expected causal mediation effects:</p>
<p><span class="math display">\[
\begin{aligned}
E[CME_0] &= E[2 + 2M_1 + e_0] - E[2+2M_0+e_0] \\
&= E[2(M_1 - M_0)] \\
&= 2(E[M_1] - E[M_0]) \\
&= 2(0.6 - 0.2) \\
&= 0.8
\end{aligned}
\]</span></p>
<p><span class="math display">\[
\begin{aligned}
E[CME_1] &= E[8 + 5M_1 + e_1] - E[8+5M_0+e_1] \\
&= E[5(M_1 - M_0)] \\
&= 5(E[M_1] - E[M_0]) \\
&= 5(0.6 - 0.2) \\
&= 2.0
\end{aligned}
\]</span></p>
<p>Likewise, the expected values of the causal direct effects can be calculated:</p>
<p><span class="math display">\[
\begin{aligned}
E[CDE_0] &= E[8 + 5M_0 + e_1] - E[2+2M_0+e_0] \\
&= E[6 + 5M_0 - 2M_0] \\
&= 6 + 3E[M_0] \\
&= 6 + 3 \times 0.2 \\
&= 6.6
\end{aligned}
\]</span></p>
<p><span class="math display">\[
\begin{aligned}
E[CDE_1] &= E[8 + 5M_1 + e_1] - E[2+2M_1+e_0] \\
&= E[6 + 5M_1 - 2M_1] \\
&= 6 + 3E[M_1] \\
&= 6 + 3 \times 0.6 \\
&= 7.8
\end{aligned}
\]</span></p>
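<p>These expected values can also be computed directly from the parameters (a quick sketch of the arithmetic, not in the original post):</p>
<pre class="r"><code># expected effects implied by P(M0 = 1) = 0.2 and P(M1 = 1) = 0.6
p0 <- 0.2
p1 <- 0.6

c(CME0 = 2 * (p1 - p0), CME1 = 5 * (p1 - p0),
  CDE0 = 6 + 3 * p0, CDE1 = 6 + 3 * p1)</code></pre>
<p>which returns 0.8, 2.0, 6.6, and 7.8, matching the derivations above.</p>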
<p>Finally, the expected total causal effect is:</p>
<p><span class="math display">\[
\begin{aligned}
ATCE &= E[CDE_0] + E[CME_1] = 6.6 + 2.0 \\
&= E[CDE_1] + E[CME_0] = 7.8 + 0.8 \\
&= 8.6
\end{aligned}
\]</span></p>
<p>And now, the complete data set can be generated.</p>
<pre class="r"><code>dd <- addColumns(def, dd)
dd <- delColumns(dd, c("e0", "e1")) # these are not needed
dd</code></pre>
<pre><code>## id A M0 M1 Y0M0 Y0M1 Y1M0 Y1M1 M Y
## 1: 1 0 0 1 0.933 2.93 7.58 12.58 0 0.933
## 2: 2 1 0 0 2.314 2.31 6.84 6.84 0 6.841
## 3: 3 1 0 1 3.876 5.88 9.05 14.05 1 14.053
## 4: 4 0 1 1 5.614 5.61 12.04 12.04 1 5.614
## 5: 5 1 0 0 1.469 1.47 8.81 8.81 0 8.809
## ---
## 4996: 4996 1 0 1 2.093 4.09 8.82 13.82 1 13.818
## 4997: 4997 0 0 0 1.734 1.73 7.28 7.28 0 1.734
## 4998: 4998 1 1 1 3.256 3.26 12.49 12.49 1 12.489
## 4999: 4999 1 1 0 5.149 3.15 12.57 7.57 0 7.572
## 5000: 5000 0 0 0 1.959 1.96 5.23 5.23 0 1.959</code></pre>
</div>
<div id="looking-at-the-observed-potential-outcomes" class="section level3">
<h3>Looking at the “observed” potential outcomes</h3>
<p>The advantage of simulating data is that we can see what the average causal effects are based on the potential outcomes. Here are the average potential outcomes in the generated data set:</p>
<pre class="r"><code>dd[,.( Y0M0 = mean(Y0M0), Y0M1 = mean(Y0M1),
Y1M0 = mean(Y1M0), Y1M1 = mean(Y1M1))]</code></pre>
<pre><code>## Y0M0 Y0M1 Y1M0 Y1M1
## 1: 2.39 3.2 8.99 11</code></pre>
<p>The four average causal effects based on the data are quite close to the expected values:</p>
<pre class="r"><code>dd[, .(ACME0 = mean(Y0M1 - Y0M0), ACME1= mean(Y1M1 - Y1M0),
ACDE0 = mean(Y1M0 - Y0M0), ACDE1= mean(Y1M1 - Y0M1))]</code></pre>
<pre><code>## ACME0 ACME1 ACDE0 ACDE1
## 1: 0.81 2.03 6.6 7.81</code></pre>
<p>And here is the average total causal effect from the data set:</p>
<pre class="r"><code>dd[, mean(Y1M1 - Y0M0)]</code></pre>
<pre><code>## [1] 8.62</code></pre>
<p>All of these quantities can be visualized in this figure. The lengths of the solid vertical lines are the mediated effects. The lengths of the dotted vertical lines are the direct effects. And the sums of these vertical lines (by color) each represent the total effect:</p>
<p><img src="https://www.rdatagen.net/post/2018-11-07-causal-mediation_files/figure-html/unnamed-chunk-13-1.png" width="672" /></p>
</div>
<div id="estimated-causal-mediation-effect-from-observed-data" class="section level3">
<h3>Estimated causal mediation effect from observed data</h3>
<p>Clearly, the real interest is in estimating the causal effects from data that we can actually observe. And that, of course, is where things start to get challenging. I will not go into the important details here (again, <a href="https://www.jstor.org/stable/41058997?seq=1#metadata_info_tab_contents"><em>Imai et al</em></a> provide these), but here are formulas that have been derived to estimate the effects (simplified since there are no confounders in this example) and the calculations using the observed data:</p>
<p><span class="math display">\[\small
\hat{CME_0} = \sum_{m \in \{0,1\}} E[Y|A=0, M=m][P(M=m|A=1)-P(M=m|A=0)]
\]</span></p>
<pre class="r"><code># Estimate CME0
dd[M == 0 & A == 0, mean(Y)] *
(dd[A == 1, mean(M == 0)] - dd[A == 0, mean(M == 0)]) +
dd[M == 1 & A == 0, mean(Y)] *
(dd[A == 1, mean(M == 1)] - dd[A == 0, mean(M == 1)])</code></pre>
<pre><code>## [1] 0.805</code></pre>
<p><span class="math display">\[\small
\hat{CME_1} = \sum_{m \in \{0,1\}} E[Y|A=1, M=m][P(M=m|A=1)-P(M=m|A=0)]
\]</span></p>
<pre class="r"><code># Estimate CME1
dd[M == 0 & A == 1, mean(Y)] *
(dd[A == 1, mean(M == 0)] - dd[A == 0, mean(M == 0)]) +
dd[M == 1 & A == 1, mean(Y)] *
(dd[A == 1, mean(M == 1)] - dd[A == 0, mean(M == 1)])</code></pre>
<pre><code>## [1] 2</code></pre>
<p><span class="math display">\[\small
\hat{CDE_0} = \sum_{m \in \{0,1\}} (E[Y|A=1, M=m] - E[Y|A=0, M=m])P(M=m|A=0)
\]</span></p>
<pre class="r"><code># Estimate CDE0
(dd[M == 0 & A == 1, mean(Y)] - dd[M == 0 & A == 0, mean(Y)]) *
dd[A == 0, mean(M == 0)] +
(dd[M == 1 & A == 1, mean(Y)] - dd[M == 1 & A == 0, mean(Y)]) *
dd[A == 0, mean(M == 1)]</code></pre>
<pre><code>## [1] 6.56</code></pre>
<p><span class="math display">\[\small
\hat{CDE_1} = \sum_{m \in \{0,1\}} (E[Y|A=1, M=m] - E[Y|A=0, M=m])P(M=m|A=1)
\]</span></p>
<pre class="r"><code># Estimate CDE1
(dd[M == 0 & A == 1, mean(Y)] - dd[M == 0 & A == 0, mean(Y)]) *
dd[A == 1, mean(M == 0)] +
(dd[M == 1 & A == 1, mean(Y)] - dd[M == 1 & A == 0, mean(Y)]) *
dd[A == 1, mean(M == 1)]</code></pre>
<pre><code>## [1] 7.76</code></pre>
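<p>Since <span class="math inline">\(A\)</span> is randomized here, the total effect can also be estimated with a simple difference in observed means (a quick check, not in the original post):</p>
<pre class="r"><code># difference in observed means estimates the total causal effect
dd[A == 1, mean(Y)] - dd[A == 0, mean(Y)]</code></pre>
<p>This should land close to the true total effect of 8.6, and equals the sum of the mediation and direct effect estimates just calculated.</p>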
</div>
<div id="estimation-with-mediation-package" class="section level3">
<h3>Estimation with mediation package</h3>
<p>Fortunately, there is software available to provide these estimates (and more importantly measures of uncertainty). In <code>R</code>, one such package is <code>mediation</code>, which is available on <a href="https://cran.r-project.org/web/packages/mediation/index.html">CRAN</a>. This package implements the formulas derived in the <em>Imai et al</em> paper.</p>
<p>Not surprisingly, the model estimates are in line with expected values, true underlying effects, and the previous estimates conducted by hand:</p>
<pre class="r"><code>library(mediation)
med.fit <- glm(M ~ A, data = dd, family = binomial("logit"))
out.fit <- lm(Y ~ M*A, data = dd)
med.out <- mediate(med.fit, out.fit, treat = "A", mediator = "M",
robustSE = TRUE, sims = 1000)
summary(med.out)</code></pre>
<pre><code>##
## Causal Mediation Analysis
##
## Quasi-Bayesian Confidence Intervals
##
## Estimate 95% CI Lower 95% CI Upper p-value
## ACME (control) 0.8039 0.7346 0.88 <2e-16 ***
## ACME (treated) 2.0033 1.8459 2.16 <2e-16 ***
## ADE (control) 6.5569 6.4669 6.65 <2e-16 ***
## ADE (treated) 7.7563 7.6555 7.86 <2e-16 ***
## Total Effect 8.5602 8.4317 8.69 <2e-16 ***
## Prop. Mediated (control) 0.0940 0.0862 0.10 <2e-16 ***
## Prop. Mediated (treated) 0.2341 0.2179 0.25 <2e-16 ***
## ACME (average) 1.4036 1.2917 1.52 <2e-16 ***
## ADE (average) 7.1566 7.0776 7.24 <2e-16 ***
## Prop. Mediated (average) 0.1640 0.1524 0.17 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Sample Size Used: 5000
##
##
## Simulations: 1000</code></pre>
</div>
Cross-over study design with a major constraint
https://www.rdatagen.net/post/when-the-research-question-doesn-t-fit-nicely-into-a-standard-study-design/
Tue, 23 Oct 2018 00:00:00 +0000
Keith Goldfeld (keith.goldfeld@nyumc.org)
<p>Every new study presents its own challenges. (I would have to say that one of the great things about being a biostatistician is the immense variety of research questions that I get to wrestle with.) Recently, I was approached by a group of researchers who wanted to evaluate an intervention. Actually, they had two, but the second one was a minor tweak added to the first. They were trying to figure out how to design the study to answer two questions: (1) is intervention <span class="math inline">\(A\)</span> better than doing nothing and (2) is <span class="math inline">\(A^+\)</span>, the slightly augmented version of <span class="math inline">\(A\)</span>, better than just <span class="math inline">\(A\)</span>?</p>
<p>It was clear in this context (and it is certainly not usually the case) that exposure to <span class="math inline">\(A\)</span> on one day would have <em>no</em> effect on the outcome under <span class="math inline">\(A^+\)</span> the next day (or <em>vice versa</em>). That is, spillover risks were minimal. Given this, the study was an ideal candidate for a cross-over design, where each study participant would receive both versions of the intervention and the control. This design can be much more efficient than a traditional RCT, because we can control for variability across patients.</p>
<p>While a cross-over study is interesting and challenging in its own right, the researchers had a pretty serious constraint: they did not feel they could assign intervention <span class="math inline">\(A^+\)</span> until <span class="math inline">\(A\)</span> had been applied, which would be necessary in a proper cross-over design. So, we had to come up with something a little different.</p>
<p>This post takes a look at how to generate data for and analyze data from a more standard cross-over trial, and then presents the solution we came up with for the problem at hand.</p>
<div id="cross-over-design-with-three-exposures" class="section level3">
<h3>Cross-over design with three exposures</h3>
<p>If we are free to assign any intervention on any day, one possible randomization scheme involving three interventions could look like this:
<img src="https://www.rdatagen.net/img/post-crossover/3way.png" /></p>
<p>Key features of this scheme are: (1) all individuals are exposed to each intervention over three days, (2) on any given day, each intervention is applied to one group of participants (just in case the specific day has an impact on the outcome), and (3) not every permutation is included (for example, <span class="math inline">\(A\)</span> does not immediately precede <span class="math inline">\(Control\)</span> in any sequence), because the relative ordering of interventions in this case is assumed not to matter. (We might need to expand to six groups to rectify this.)</p>
</div>
<div id="data-simulation" class="section level3">
<h3>Data simulation</h3>
<p>In this simulation, we will assume (1) that the outcome is slightly elevated on days two and three, (2) <span class="math inline">\(A\)</span> is an improvement over <span class="math inline">\(Control\)</span>, (3) <span class="math inline">\(A^+\)</span> is an improvement over <span class="math inline">\(A\)</span>, (4) there is strong correlation of outcomes within each individual, and (5) group membership has no bearing on the outcome.</p>
<p>First, I define the data, starting with the different sources of variation. I have specified a fairly high intra-class correlation coefficient (ICC), because it is reasonable to assume that there will be quite a bit of variation across individuals:</p>
<pre class="r"><code>vTotal = 1
vAcross <- iccRE(ICC = 0.5, varTotal = vTotal, "normal")
vWithin <- vTotal - vAcross
### Definitions
b <- defData(varname = "b", formula = 0, variance = vAcross,
dist = "normal")
d <- defCondition(condition = "rxlab == 'C'",
formula = "0 + b + (day == 2) * 0.5 + (day == 3) * 0.25",
variance = vWithin, dist = "normal")
d <- defCondition(d, "rxlab == 'A'",
formula = "0.4 + b + (day == 2) * 0.5 + (day == 3) * 0.25",
variance = vWithin, dist = "normal")
d <- defCondition(d, "rxlab == 'A+'",
formula = "1.0 + b + (day == 2) * 0.5 + (day == 3) * 0.25",
variance = vWithin, dist = "normal")</code></pre>
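<p>With a total variance of 1 and an ICC of 0.5, the between- and within-individual variances should each come out to 0.5 (a quick check, not in the original post):</p>
<pre class="r"><code># for a normal outcome, ICC = vAcross / (vAcross + vWithin)
c(vAcross = vAcross, vWithin = vWithin)</code></pre>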
<p>Next, I generate the data, assigning three groups, each of which is tied to one of the three treatment sequences.</p>
<pre class="r"><code>set.seed(39217)
db <- genData(240, b)
dd <- trtAssign(db, 3, grpName = "grp")
dd <- addPeriods(dd, 3)
dd[grp == 1, rxlab := c("C", "A", "A+")]
dd[grp == 2, rxlab := c("A+", "C", "A")]
dd[grp == 3, rxlab := c("A", "A+", "C")]
dd[, rxlab := factor(rxlab, levels = c("C", "A", "A+"))]
dd[, day := factor(period + 1)]
dd <- addCondition(d, dd, newvar = "Y")
dd</code></pre>
<pre><code>## timeID Y id period grp b rxlab day
## 1: 1 0.9015848 1 0 2 0.2664571 A+ 1
## 2: 2 1.2125919 1 1 2 0.2664571 C 2
## 3: 3 0.7578572 1 2 2 0.2664571 A 3
## 4: 4 2.0157066 2 0 3 1.1638244 A 1
## 5: 5 2.4948799 2 1 3 1.1638244 A+ 2
## ---
## 716: 716 1.9617832 239 1 1 0.3340201 A 2
## 717: 717 1.9231570 239 2 1 0.3340201 A+ 3
## 718: 718 1.0280355 240 0 3 1.4084395 A 1
## 719: 719 2.5021319 240 1 3 1.4084395 A+ 2
## 720: 720 0.4610550 240 2 3 1.4084395 C 3</code></pre>
<p>Here is a plot of the treatment averages each day for each of the three groups:</p>
<pre class="r"><code>dm <- dd[, .(Y = mean(Y)), keyby = .(grp, period, rxlab)]
ngrps <- nrow(dm[, .N, keyby = grp])
nperiods <- nrow(dm[, .N, keyby = period])
ggplot(data = dm, aes(y=Y, x = period + 1)) +
geom_jitter(data = dd, aes(y=Y, x = period + 1),
width = .05, height = 0, color="grey70", size = 1 ) +
geom_line(color = "grey50") +
geom_point(aes(color = rxlab), size = 2.5) +
scale_color_manual(values = c("#4477AA", "#DDCC77", "#CC6677")) +
scale_x_continuous(name = "day", limits = c(0.9, nperiods + .1),
breaks=c(1:nperiods)) +
facet_grid(~ factor(grp, labels = paste("Group", 1:ngrps))) +
theme(panel.grid = element_blank(),
legend.title = element_blank())</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-10-23-when-the-research-question-doesn-t-fit-nicely-into-a-standard-study-design_files/figure-html/unnamed-chunk-3-1.png" width="672" /></p>
</div>
<div id="estimating-the-effects" class="section level3">
<h3>Estimating the effects</h3>
<p>To estimate the treatment effects, I will use this mixed effects linear regression model:</p>
<p><span class="math display">\[Y_{it} = \alpha_0 + \gamma_{t} D_{it} + \beta_1 A_{it} + \beta_2 P_{it} + b_i + e_i\]</span></p>
<p>where <span class="math inline">\(Y_{it}\)</span> is the outcome for individual <span class="math inline">\(i\)</span> on day <span class="math inline">\(t\)</span>, <span class="math inline">\(t \in (1,2,3)\)</span>. <span class="math inline">\(A_{it}\)</span> is an indicator for treatment <span class="math inline">\(A\)</span> in time <span class="math inline">\(t\)</span>; likewise <span class="math inline">\(P_{it}\)</span> is an indicator for <span class="math inline">\(A^+\)</span>. <span class="math inline">\(D_{it}\)</span> is an indicator that the outcome was recorded on day <span class="math inline">\(t\)</span>. <span class="math inline">\(b_i\)</span> is an individual (latent) random effect, <span class="math inline">\(b_i \sim N(0, \sigma_b^2)\)</span>. <span class="math inline">\(e_i\)</span> is the (also latent) noise term, <span class="math inline">\(e_i \sim N(0, \sigma_e^2)\)</span>.</p>
<p>The parameter <span class="math inline">\(\alpha_0\)</span> is the mean outcome on day 1 under <span class="math inline">\(Control\)</span>. The <span class="math inline">\(\gamma\)</span>’s are the day-specific effects for days 2 and 3, with <span class="math inline">\(\gamma_1\)</span> fixed at 0. <span class="math inline">\(\beta_1\)</span> is the effect of <span class="math inline">\(A\)</span> (relative to <span class="math inline">\(Control\)</span>) and <span class="math inline">\(\beta_2\)</span> is the effect of <span class="math inline">\(A^+\)</span>. In this case, the researchers were primarily interested in <span class="math inline">\(\beta_1\)</span> and <span class="math inline">\(\beta_2 - \beta_1\)</span>, which is the incremental change from <span class="math inline">\(A\)</span> to <span class="math inline">\(A^+\)</span>.</p>
<pre class="r"><code>library(lme4)
lmerfit <- lmer(Y ~ day + rxlab + (1|id), data = dd)
rndTidy(lmerfit)</code></pre>
<pre><code>## term estimate std.error statistic group
## 1: (Intercept) -0.14 0.08 -1.81 fixed
## 2: day2 0.63 0.06 9.82 fixed
## 3: day3 0.38 0.06 5.97 fixed
## 4: rxlabA 0.57 0.06 8.92 fixed
## 5: rxlabA+ 0.98 0.06 15.35 fixed
## 6: sd_(Intercept).id 0.74 NA NA id
## 7: sd_Observation.Residual 0.70 NA NA Residual</code></pre>
<p>As to why we would want to bother with this complex design if we could just randomize individuals to one of three treatment groups, this little example using a more standard parallel design might provide a hint:</p>
<pre class="r"><code>def2 <- defDataAdd(varname = "Y",
formula = "0 + (frx == 'A') * 0.4 + (frx == 'A+') * 1",
variance = 1, dist = "normal")
dd <- genData(240)
dd <- trtAssign(dd, nTrt = 3, grpName = "rx")
dd <- genFactor(dd, "rx", labels = c("C","A","A+"), replace = TRUE)
dd <- addColumns(def2, dd)
lmfit <- lm(Y~frx, data = dd)
rndTidy(lmfit)</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1: (Intercept) -0.12 0.10 -1.15 0.25
## 2: frxA 0.64 0.15 4.38 0.00
## 3: frxA+ 1.01 0.15 6.86 0.00</code></pre>
<p>If we compare the standard error for the effect of <span class="math inline">\(A^+\)</span> in the two studies, the cross-over design is much more efficient (i.e. standard error is considerably smaller: 0.06 vs. 0.15). This really isn’t so surprising since we have collected a lot more data and modeled variation across individuals in the cross-over study.</p>
</div>
<div id="constrained-cross-over-design" class="section level3">
<h3>Constrained cross-over design</h3>
<p>Unfortunately, the project was not at liberty to implement the three-way/three-day design just simulated. We came up with this approach that would provide some cross-over, but with an added day of treatment and measurement:</p>
<p><img src="https://www.rdatagen.net/img/post-crossover/4constrained.png" /></p>
<p>The data generation is slightly modified, though the original definitions can still be used:</p>
<pre class="r"><code>db <- genData(240, b)
dd <- trtAssign(db, 2, grpName = "grp")
dd <- addPeriods(dd, 4)
dd[grp == 0, rxlab := c("C", "C", "A", "A+")]
dd[grp == 1, rxlab := c("C", "A", "A+", "A")]
dd[, rxlab := factor(rxlab, levels = c("C", "A", "A+"))]
dd[, day := factor(period + 1)]
dd <- addCondition(d, dd, "Y")</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-10-23-when-the-research-question-doesn-t-fit-nicely-into-a-standard-study-design_files/figure-html/unnamed-chunk-7-1.png" width="672" /></p>
<p>The model estimates indicate slightly higher standard errors than in the pure cross-over design:</p>
<pre class="r"><code>lmerfit <- lmer(Y ~ day + rxlab + (1|id), data = dd)
rndTidy(lmerfit)</code></pre>
<pre><code>## term estimate std.error statistic group
## 1: (Intercept) 0.15 0.06 2.36 fixed
## 2: day2 0.48 0.08 6.02 fixed
## 3: day3 0.16 0.12 1.32 fixed
## 4: day4 -0.12 0.12 -1.02 fixed
## 5: rxlabA 0.46 0.10 4.70 fixed
## 6: rxlabA+ 1.14 0.12 9.76 fixed
## 7: sd_(Intercept).id 0.69 NA NA id
## 8: sd_Observation.Residual 0.68 NA NA Residual</code></pre>
<p>Here are the key parameters of interest (refit using package <code>lmerTest</code> to get the contrasts). The confidence intervals include the true values (<span class="math inline">\(\beta_1 = 0.4\)</span> and <span class="math inline">\(\beta_2 - \beta_1 = 0.6\)</span>):</p>
<pre class="r"><code>library(lmerTest)
lmerfit <- lmer(Y ~ day + rxlab + (1|id), data = dd)
L <- matrix(c(0, 0, 0, 0, 1, 0, 0, 0, 0, 0, -1, 1),
nrow = 2, ncol = 6, byrow = TRUE)
con <- data.table(contest(lmerfit, L, confint = TRUE, joint = FALSE))
round(con[, .(Estimate, `Std. Error`, lower, upper)], 3)</code></pre>
<pre><code>## Estimate Std. Error lower upper
## 1: 0.462 0.098 0.269 0.655
## 2: 0.673 0.062 0.551 0.795</code></pre>
</div>
<div id="exploring-bias" class="section level3">
<h3>Exploring bias</h3>
<p>A single data set does not tell us if the proposed approach is indeed unbiased. Here, I generate 1000 data sets and fit the mixed effects model. In addition, I fit a model that ignores the day factor to see if it will induce bias (of course it will).</p>
<pre class="r"><code>iter <- 1000
ests <- vector("list", iter)
xests <- vector("list", iter)
for (i in 1:iter) {
db <- genData(240, b)
dd <- trtAssign(db, 2, grpName = "grp")
dd <- addPeriods(dd, 4)
dd[grp == 0, rxlab := c("C", "C", "A", "A+")]
dd[grp == 1, rxlab := c("C", "A", "A+", "A")]
dd[, rxlab := factor(rxlab, levels = c("C", "A", "A+"))]
dd[, day := factor(period + 1)]
dd <- addCondition(d, dd, "Y")
lmerfit <- lmer(Y ~ day + rxlab + (1|id), data = dd)
xlmerfit <- lmer(Y ~ rxlab + (1|id), data = dd)
ests[[i]] <- data.table(estA = fixef(lmerfit)[5],
estAP = fixef(lmerfit)[6] - fixef(lmerfit)[5])
xests[[i]] <- data.table(estA = fixef(xlmerfit)[2],
estAP = fixef(xlmerfit)[3] - fixef(xlmerfit)[2])
}
ests <- rbindlist(ests)
xests <- rbindlist(xests)</code></pre>
<p>The results for the correct model estimation indicate that there is no bias (and that the standard error estimates from the model fit above were correct):</p>
<pre class="r"><code>ests[, .(A.est = round(mean(estA), 3),
A.se = round(sd(estA), 3),
AP.est = round(mean(estAP), 3),
AP.se = round(sd(estAP), 3))]</code></pre>
<pre><code>## A.est A.se AP.est AP.se
## 1: 0.407 0.106 0.602 0.06</code></pre>
<p>In contrast, the estimates that ignore the day or period effect are in fact biased (as predicted):</p>
<pre class="r"><code>xests[, .(A.est = round(mean(estA), 3),
A.se = round(sd(estA), 3),
AP.est = round(mean(estAP), 3),
AP.se = round(sd(estAP), 3))]</code></pre>
<pre><code>## A.est A.se AP.est AP.se
## 1: 0.489 0.053 0.474 0.057</code></pre>
</div>
In regression, we assume noise is independent of all measured predictors. What happens if it isn't?
https://www.rdatagen.net/post/linear-regression-models-assume-noise-is-independent/
Tue, 09 Oct 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/linear-regression-models-assume-noise-is-independent/<p>A number of key assumptions underlie the linear regression model - among them linearity and normally distributed noise (error) terms with constant variance. In this post, I consider an additional assumption: the unobserved noise is uncorrelated with any covariates or predictors in the model.</p>
<p>In this simple model:</p>
<p><span class="math display">\[Y_i = \beta_0 + \beta_1X_i + e_i,\]</span></p>
<p><span class="math inline">\(Y_i\)</span> has both a structural and stochastic (random) component. The structural component is the linear relationship of <span class="math inline">\(Y\)</span> with <span class="math inline">\(X\)</span>. The random element is often called the <span class="math inline">\(error\)</span> term, but I prefer to think of it as <span class="math inline">\(noise\)</span>. <span class="math inline">\(e_i\)</span> is not measuring something that has gone awry, but rather it is variation emanating from some unknown, unmeasurable source or sources for each individual <span class="math inline">\(i\)</span>. It represents everything we haven’t been able to measure.</p>
<p>Our goal is to estimate <span class="math inline">\(\beta_1\)</span>, which characterizes the structural linear relationship of <span class="math inline">\(X\)</span> and <span class="math inline">\(Y\)</span>. When we estimate the model, we get a quantity <span class="math inline">\(\hat{\beta_1}\)</span>, and we hope that on average we do pretty well (i.e. if we were to estimate <span class="math inline">\(\beta_1\)</span> repeatedly, <span class="math inline">\(E[\hat{\beta_1}] = \beta_1\)</span>). In order for us to make sure that is the case, we need to assume that <span class="math inline">\(e_i\)</span> and <span class="math inline">\(X_i\)</span> are independent. In other words, the sources that comprise <span class="math inline">\(e_i\)</span> must not be related in any way to whatever it is that <span class="math inline">\(X_i\)</span> is measuring.</p>
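<p>A quick simulation makes the point concrete (a minimal sketch in base R with arbitrary parameter values, not drawn from the post itself): when the noise is generated independently of <span class="math inline">\(X\)</span>, the average of repeated estimates of <span class="math inline">\(\beta_1\)</span> sits right at the true value:</p>
<pre class="r"><code>set.seed(123)
est <- replicate(1000, {
  X <- rnorm(100)
  e <- rnorm(100)      # noise generated independently of X
  Y <- 2 + 1 * X + e   # true beta_1 = 1
  coef(lm(Y ~ X))["X"]
})
round(mean(est), 2)    # close to 1</code></pre>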
<div id="correlation-without-causation" class="section level3">
<h3>Correlation without causation</h3>
<p>First, I’ll generate <span class="math inline">\(X's\)</span> and <span class="math inline">\(e's\)</span> that are correlated, using a data generation process that makes no assumptions about the underlying causal process. This provides a picture of how <span class="math inline">\(\hat{\beta_1}\)</span> might diverge from the true <span class="math inline">\(\beta_1\)</span>.</p>
<pre class="r"><code>library(simstudy)
set.seed(3222)
dT <- genCorData(500, mu = c(0, 0), sigma = c(sqrt(1.25), 1),
rho = 0.446, corstr = "cs", cnames = c("X","eCor"))</code></pre>
<p>Outcome <span class="math inline">\(Y\)</span> is based on <span class="math inline">\(X\)</span> and <span class="math inline">\(e_{cor}\)</span>. For comparison’s sake, I generate a parallel outcome that is also based on <span class="math inline">\(X\)</span> but the noise variable <span class="math inline">\(e_{ind}\)</span> is independent of <span class="math inline">\(X\)</span>:</p>
<pre class="r"><code>def <- defDataAdd(varname = "Ycor", formula = "X + eCor",
dist = "nonrandom")
def <- defDataAdd(def, varname = "eInd", formula = 0, variance = 1,
dist = "normal" )
def <- defDataAdd(def, varname = "Yind", formula = "X + eInd",
dist = "nonrandom")
dT <- addColumns(def, dT)
dT</code></pre>
<pre><code>## id X eCor Ycor eInd Yind
## 1: 1 -1.1955846 -0.1102777 -1.3058624 1.1369435 -0.05864113
## 2: 2 -0.4056655 -0.6709221 -1.0765875 -0.8441431 -1.24980856
## 3: 3 -0.5893938 1.2146488 0.6252550 -0.2666314 -0.85602516
## 4: 4 0.9090881 0.3108645 1.2199526 0.3397857 1.24887377
## 5: 5 -2.6139989 -1.7382986 -4.3522975 -0.1793858 -2.79338470
## ---
## 496: 496 3.1615624 0.6160661 3.7776285 0.4658992 3.62746167
## 497: 497 0.6416140 0.1031316 0.7447456 -0.1440062 0.49760784
## 498: 498 0.1340967 -0.4029388 -0.2688421 0.6165793 0.75067604
## 499: 499 -1.2381040 0.8197002 -0.4184038 0.6717294 -0.56637463
## 500: 500 -0.7159373 -0.0905287 -0.8064660 0.9148175 0.19888019</code></pre>
<p>The observed <span class="math inline">\(X\)</span> and <span class="math inline">\(e_{cor}\)</span> are correlated, but <span class="math inline">\(X\)</span> and <span class="math inline">\(e_{ind}\)</span> are not:</p>
<pre class="r"><code>dT[, cor(cbind(X, eCor))]</code></pre>
<pre><code>## X eCor
## X 1.0000000 0.4785528
## eCor 0.4785528 1.0000000</code></pre>
<pre class="r"><code>dT[, cor(cbind(X, eInd))]</code></pre>
<pre><code>## X eInd
## X 1.00000000 -0.02346812
## eInd -0.02346812 1.00000000</code></pre>
<p>On the left below is a plot of outcome <span class="math inline">\(Y_{ind}\)</span> as a function of <span class="math inline">\(X\)</span>. The red line is the true structural component defining the relationship between these two variables. The points are scattered around that line without any clear pattern, which is indicative of independent noise.</p>
<p>The plot on the right shows <span class="math inline">\(Y_{cor}\)</span> as a function of <span class="math inline">\(X\)</span>. Since the stochastic component of <span class="math inline">\(Y_{cor}\)</span> is the correlated noise, the lower <span class="math inline">\(X\)</span> values are more likely to fall below the true line, and the larger <span class="math inline">\(X\)</span> values above. The red line does not appear to be a very good fit in this case; this is the bias induced by correlated noise.</p>
<p><img src="https://www.rdatagen.net/post/2018-10-10-linear-regression-models-assume-noise-is-independent_files/figure-html/unnamed-chunk-4-1.png" width="921.6" /></p>
<p>The model fits corroborate the visual inspection. <span class="math inline">\(\hat{\beta_1}\)</span> based on uncorrelated noise is close to 1, the true value:</p>
<pre class="r"><code>fit2 <- lm(Yind ~ X, data = dT)
rndTidy(fit2)</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1: (Intercept) 0.06 0.05 1.37 0.17
## 2: X 0.98 0.04 25.75 0.00</code></pre>
<p><span class="math inline">\(\hat{\beta_1}\)</span> based on correlated noise is 1.42, larger than the true value:</p>
<pre class="r"><code>fit1 <- lm(Ycor ~ X, data = dT)
rndTidy(fit1)</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1: (Intercept) -0.01 0.04 -0.25 0.8
## 2: X 1.42 0.03 41.28 0.0</code></pre>
<p>A plot of the fitted (blue) line based on the biased estimate clearly shows the problem of regression estimation in this context:</p>
<p><img src="https://www.rdatagen.net/post/2018-10-10-linear-regression-models-assume-noise-is-independent_files/figure-html/unnamed-chunk-7-1.png" width="460.8" /></p>
</div>
<div id="thinking-about-underlying-causality-and-noise" class="section level3">
<h3>Thinking about underlying causality and noise</h3>
<p>Here is a pure thought exercise to consider this bias induced by the correlation. Fundamentally, the implications depend on the purpose of the model. If we are using the model for description or prediction, we may not care about the bias. For example, if we are <em>describing</em> how <span class="math inline">\(Y\)</span> changes as <span class="math inline">\(X\)</span> changes in some population, the underlying data generation process may not be of interest. Likewise, if our goal is predicting <span class="math inline">\(Y\)</span> based on an observed <span class="math inline">\(X\)</span>, the biased estimate of <span class="math inline">\(\beta_1\)</span> may be adequate.</p>
<p>However, if we are interested in understanding how <em>intervening</em> or <em>changing</em> the level of <span class="math inline">\(X\)</span> at the individual level affects the outcome <span class="math inline">\(Y\)</span> for that individual, then an unbiased estimate of <span class="math inline">\(\beta_1\)</span> becomes more important, and noise that is correlated with the predictor of interest could be problematic.</p>
<p>In a causal context, however, not all noise is created equal. Consider these two different causal models:</p>
<p><img src="https://www.rdatagen.net/img/post-correrrors/confounding_mediation.png" /></p>
<p>We can generate identically distributed data based on these two mechanisms:</p>
<pre class="r"><code># Confounding
defc <- defData(varname = "U", formula=0, variance=1, dist="normal")
defc <- defData(defc, "X", "0.5*U", 1, "normal")
defc <- defData(defc, "Y", "X + U", dist = "nonrandom")
dcon <- genData(1000, defc)</code></pre>
<pre class="r"><code># Mediation
defm <- defData(varname="X", formula=0, variance =1.25, dist="normal")
defm <- defData(defm, "U", ".4*X", .8, "normal")
defm <- defData(defm, "Y", "X + U", dist = "nonrandom")
dmed <- genData(1000, defm)</code></pre>
<p>The observed covariance between <span class="math inline">\(X\)</span> and <span class="math inline">\(U\)</span> (the noise) is similar for the two processes …</p>
<pre class="r"><code>dcon[, var(cbind(X,U))]</code></pre>
<pre><code>## X U
## X 1.2516199 0.5807696
## U 0.5807696 1.0805321</code></pre>
<pre class="r"><code>dmed[, var(cbind(X,U))]</code></pre>
<pre><code>## X U
## X 1.2365285 0.5401577
## U 0.5401577 1.0695366</code></pre>
<p>… as is the model fit for each:</p>
<p><img src="https://www.rdatagen.net/post/2018-10-10-linear-regression-models-assume-noise-is-independent_files/figure-html/unnamed-chunk-11-1.png" width="921.6" /></p>
<p>And here is a pair of histograms of estimated values of <span class="math inline">\(\beta_1\)</span> for each data generating process, based on 1000 replications of samples of 100 individuals. Again, pretty similar:</p>
<p><img src="https://www.rdatagen.net/post/2018-10-10-linear-regression-models-assume-noise-is-independent_files/figure-html/unnamed-chunk-12-1.png" width="864" /></p>
<p>Despite the apparently identical nature of the two data generating processes, I would argue that biased estimation is only a problem in the context of confounding noise. If we intervene on <span class="math inline">\(X\)</span> without changing <span class="math inline">\(U\)</span>, which could occur in the context of unmeasured confounding, the causal effect of <span class="math inline">\(X\)</span> on <span class="math inline">\(Y\)</span> would be overestimated by the regression model. However, if we intervene on <span class="math inline">\(X\)</span> in the context of a process that involves mediation, it would be appropriate to consider all the post-intervention effects of changing <span class="math inline">\(X\)</span>, so the “biased” estimate may in fact be the appropriate one.</p>
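<p>The thought experiment can be carried out in code (a sketch using the confounding parameters from above; the intervention step is my own illustration, not part of the original simulation). Under confounding, replacing <span class="math inline">\(X\)</span> with an exogenous draw while leaving <span class="math inline">\(U\)</span> untouched recovers the true causal effect of 1, while the naive regression returns roughly 1.4:</p>
<pre class="r"><code>set.seed(2018)
n <- 1e5
# Confounding: U is a common cause of X and Y
U <- rnorm(n)
X <- 0.5 * U + rnorm(n)
naive <- coef(lm(I(X + U) ~ X))["X"]          # biased: about 1.4
# Intervene on X: an exogenous draw with the same variance, U unchanged
Xdo <- rnorm(n, sd = sqrt(1.25))
causal <- coef(lm(I(Xdo + U) ~ Xdo))["Xdo"]   # true causal effect: about 1.0
round(c(naive, causal), 2)</code></pre>
<p>Under the mediation process, the same intervention would also shift <span class="math inline">\(U\)</span>, so there the larger estimate is the relevant total effect.</p>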
<p>The key here, of course, is that we cannot verify this unobserved process. By definition, the noise is unobservable and stochastic. But, if we are developing models that involve causal relations of unmeasured quantities, we have to be explicit about the causal nature underlying these unmeasured quantities. That way, we know if we should be concerned about hidden correlation or not.</p>
</div>
simstudy update: improved correlated binary outcomes
https://www.rdatagen.net/post/simstudy-update-to-version-0-1-10/
Tue, 25 Sep 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/simstudy-update-to-version-0-1-10/<p>An updated version of the <code>simstudy</code> package (0.1.10) is now available on <a href="https://cran.r-project.org/web/packages/simstudy/index.html">CRAN</a>. The impetus for this release was a series of requests about generating correlated binary outcomes. In the last <a href="https://www.rdatagen.net/post/binary-beta-beta-binomial/">post</a>, I described a beta-binomial data generating process that uses the recently added beta distribution. In addition to that update, I’ve added functionality to <code>genCorGen</code> and <code>addCorGen</code>, functions which generate correlated data from non-Gaussian distributions such as the Poisson, gamma, and binary distributions. Most significantly, there is a newly implemented algorithm based on the work of <a href="https://www.tandfonline.com/doi/abs/10.1080/00031305.1991.10475828">Emrich & Piedmonte</a>, which I mentioned the last time around.</p>
<div id="limitation-of-copula-algorithm" class="section level3">
<h3>Limitation of copula algorithm</h3>
<p>The existing copula algorithm is limited when generating correlated binary data. (I did acknowledge this when I first <a href="https://www.rdatagen.net/post/simstudy-update-two-functions-for-correlation/">introduced</a> the new functions.) The generated marginal means are what we would expect, though the observed correlation on the binary scale is biased downward toward zero. Using the copula algorithm, the specified correlation really pertains to the underlying normal data that is used in the data generation process. Information is lost when moving between the continuous and dichotomous distributions:</p>
<pre class="r"><code>library(simstudy)
set.seed(736258)
d1 <- genCorGen(n = 1000, nvars = 4, params1 = c(0.2, 0.5, 0.6, 0.7),
dist = "binary", rho = 0.3, corstr = "cs", wide = TRUE,
method = "copula")
d1</code></pre>
<pre><code>## id V1 V2 V3 V4
## 1: 1 0 0 0 0
## 2: 2 0 1 1 1
## 3: 3 0 1 0 1
## 4: 4 0 0 1 0
## 5: 5 0 1 0 1
## ---
## 996: 996 0 0 0 0
## 997: 997 0 1 0 0
## 998: 998 0 1 1 1
## 999: 999 0 0 0 0
## 1000: 1000 0 0 0 0</code></pre>
<pre class="r"><code>d1[, .(V1 = mean(V1), V2 = mean(V2),
V3 = mean(V3), V4 = mean(V4))]</code></pre>
<pre><code>## V1 V2 V3 V4
## 1: 0.184 0.486 0.595 0.704</code></pre>
<pre class="r"><code>d1[, round(cor(cbind(V1, V2, V3, V4)), 2)]</code></pre>
<pre><code>## V1 V2 V3 V4
## V1 1.00 0.18 0.17 0.17
## V2 0.18 1.00 0.19 0.23
## V3 0.17 0.19 1.00 0.15
## V4 0.17 0.23 0.15 1.00</code></pre>
</div>
<div id="the-ep-option-offers-an-improvement" class="section level3">
<h3>The <em>ep</em> option offers an improvement</h3>
<p>Data generated using the Emrich & Piedmonte algorithm, invoked by specifying the “<em>ep</em>” method, does much better; the observed correlation is much closer to what we specified. (Note that the E&P algorithm may restrict the range of possible correlations; if you specify a correlation outside of the range, an error message is issued.)</p>
<pre class="r"><code>set.seed(736258)
d2 <- genCorGen(n = 1000, nvars = 4, params1 = c(0.2, 0.5, 0.6, 0.7),
dist = "binary", rho = 0.3, corstr = "cs", wide = TRUE,
method = "ep")
d2[, .(V1 = mean(V1), V2 = mean(V2),
V3 = mean(V3), V4 = mean(V4))]</code></pre>
<pre><code>## V1 V2 V3 V4
## 1: 0.199 0.504 0.611 0.706</code></pre>
<pre class="r"><code>d2[, round(cor(cbind(V1, V2, V3, V4)), 2)]</code></pre>
<pre><code>## V1 V2 V3 V4
## V1 1.00 0.33 0.33 0.29
## V2 0.33 1.00 0.32 0.31
## V3 0.33 0.32 1.00 0.28
## V4 0.29 0.31 0.28 1.00</code></pre>
<p>If we generate the data using the “long” form, we can fit a <em>GEE</em> marginal model to recover the parameters used in the data generation process:</p>
<pre class="r"><code>library(geepack)
set.seed(736258)
d3 <- genCorGen(n = 1000, nvars = 4, params1 = c(0.2, 0.5, 0.6, 0.7),
dist = "binary", rho = 0.3, corstr = "cs", wide = FALSE,
method = "ep")
geefit3 <- geeglm(X ~ factor(period), id = id, data = d3,
family = binomial, corstr = "exchangeable")
summary(geefit3)</code></pre>
<pre><code>##
## Call:
## geeglm(formula = X ~ factor(period), family = binomial, data = d3,
## id = id, corstr = "exchangeable")
##
## Coefficients:
## Estimate Std.err Wald Pr(>|W|)
## (Intercept) -1.39256 0.07921 309.1 <2e-16 ***
## factor(period)1 1.40856 0.08352 284.4 <2e-16 ***
## factor(period)2 1.84407 0.08415 480.3 <2e-16 ***
## factor(period)3 2.26859 0.08864 655.0 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Estimated Scale Parameters:
## Estimate Std.err
## (Intercept) 1 0.01708
##
## Correlation: Structure = exchangeable Link = identity
##
## Estimated Correlation Parameters:
## Estimate Std.err
## alpha 0.3114 0.01855
## Number of clusters: 1000 Maximum cluster size: 4</code></pre>
<p>And the point estimates for each variable on the probability scale:</p>
<pre class="r"><code>round(1/(1+exp(1.3926 - c(0, 1.4086, 1.8441, 2.2686))), 2)</code></pre>
<pre><code>## [1] 0.20 0.50 0.61 0.71</code></pre>
</div>
<div id="longitudinal-repeated-measures" class="section level3">
<h3>Longitudinal (repeated) measures</h3>
<p>One researcher wanted to generate individual-level longitudinal data that might be analyzed using a GEE model. This is not so different from what I just did, but incorporates a specific time trend to define the probabilities. In this case, the steps are to (1) generate longitudinal data using the <code>addPeriods</code> function, (2) define the longitudinal probabilities, and (3) generate correlated binary outcomes with an AR-1 correlation structure.</p>
<pre class="r"><code>set.seed(393821)
probform <- "-2 + 0.3 * period"
def1 <- defDataAdd(varname = "p", formula = probform,
dist = "nonrandom", link = "logit")
dx <- genData(1000)
dx <- addPeriods(dx, nPeriods = 4)
dx <- addColumns(def1, dx)
dg <- addCorGen(dx, nvars = 4,
                corMatrix = NULL, rho = .4, corstr = "ar1",
                dist = "binary", param1 = "p",
                method = "ep", formSpec = probform,
                periodvar = "period")</code></pre>
<p>The correlation matrix from the observed data is reasonably close to having an AR-1 structure, where <span class="math inline">\(\rho = 0.4\)</span>, <span class="math inline">\(\rho^2 = 0.16\)</span>, <span class="math inline">\(\rho^3 = 0.064\)</span>.</p>
<pre class="r"><code>cor(dcast(dg, id ~ period, value.var = "X")[,-1])</code></pre>
<pre><code>## 0 1 2 3
## 0 1.00000 0.4309 0.1762 0.04057
## 1 0.43091 1.0000 0.3953 0.14089
## 2 0.17618 0.3953 1.0000 0.36900
## 3 0.04057 0.1409 0.3690 1.00000</code></pre>
<p>And again, the model recovers the time trend parameter defined in variable <code>probform</code> as well as the correlation parameter:</p>
<pre class="r"><code>geefit <- geeglm(X ~ period, id = id, data = dg, corstr = "ar1",
family = binomial)
summary(geefit)</code></pre>
<pre><code>##
## Call:
## geeglm(formula = X ~ period, family = binomial, data = dg, id = id,
## corstr = "ar1")
##
## Coefficients:
## Estimate Std.err Wald Pr(>|W|)
## (Intercept) -1.9598 0.0891 484.0 <2e-16 ***
## period 0.3218 0.0383 70.6 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Estimated Scale Parameters:
## Estimate Std.err
## (Intercept) 1 0.0621
##
## Correlation: Structure = ar1 Link = identity
##
## Estimated Correlation Parameters:
## Estimate Std.err
## alpha 0.397 0.0354
## Number of clusters: 1000 Maximum cluster size: 4</code></pre>
</div>
<div id="model-mis-specification" class="section level3">
<h3>Model mis-specification</h3>
<p>And just for fun, here is an example of how simulation might be used to investigate the performance of a model. Let’s say we are interested in the implications of mis-specifying the correlation structure. In this case, we can fit two GEE models (one correctly specified and one mis-specified) and assess the sampling properties of the estimates from each:</p>
<pre class="r"><code>library(broom)
dx <- genData(100)
dx <- addPeriods(dx, nPeriods = 4)
dx <- addColumns(def1, dx)
iter <- 1000
rescorrect <- vector("list", iter)
resmisspec <- vector("list", iter)
for (i in 1:iter) {
  dw <- addCorGen(dx, nvars = 4,
                  corMatrix = NULL, rho = .5, corstr = "ar1",
                  dist = "binary", param1 = "p",
                  method = "ep", formSpec = probform,
                  periodvar = "period")
  correctfit <- geeglm(X ~ period, id = id, data = dw,
                       corstr = "ar1", family = binomial)
  misfit <- geeglm(X ~ period, id = id, data = dw,
                   corstr = "independence", family = binomial)
  rescorrect[[i]] <- data.table(i, tidy(correctfit))
  resmisspec[[i]] <- data.table(i, tidy(misfit))
}
rescorrect <-
rbindlist(rescorrect)[term == "period"][, model := "correct"]
resmisspec <-
rbindlist(resmisspec)[term == "period"][, model := "misspec"]</code></pre>
<p>Here are the averages, standard deviation, and average standard error of the point estimates under the correct specification:</p>
<pre class="r"><code>rescorrect[, c(mean(estimate), sd(estimate), mean(std.error))]</code></pre>
<pre><code>## [1] 0.304 0.125 0.119</code></pre>
<p>And for the incorrect specification:</p>
<pre class="r"><code>resmisspec[, c(mean(estimate), sd(estimate), mean(std.error))]</code></pre>
<pre><code>## [1] 0.303 0.126 0.121</code></pre>
<p>The estimates of the time trend from both models are unbiased, and the observed standard error of the estimates are the same for each model, which in turn are not too far off from the estimated standard errors. This becomes quite clear when we look at the virtually identical densities of the estimates:</p>
<p><img src="https://www.rdatagen.net/post/2018-09-25-simstudy-update-to-version-0-1-10_files/figure-html/unnamed-chunk-11-1.png" width="672" /></p>
</div>
<div id="addendum" class="section level3">
<h3>Addendum</h3>
<p>As an added bonus, here is a conditional generalized mixed effects model of the larger data set generated earlier. The conditional estimates are quite different from the marginal GEE estimates, but this is <a href="https://www.rdatagen.net/post/mixed-effect-models-vs-gee/">not surprising</a> given the binary outcomes. (For comparison, the period coefficient was estimated using the marginal model to be 0.32.)</p>
<pre class="r"><code>library(lme4)
glmerfit <- glmer(X ~ period + (1 | id), data = dg, family = binomial)
summary(glmerfit)</code></pre>
<pre><code>## Generalized linear mixed model fit by maximum likelihood (Laplace
## Approximation) [glmerMod]
## Family: binomial ( logit )
## Formula: X ~ period + (1 | id)
## Data: dg
##
## AIC BIC logLik deviance df.resid
## 3595 3614 -1795 3589 3997
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -1.437 -0.351 -0.284 -0.185 2.945
##
## Random effects:
## Groups Name Variance Std.Dev.
## id (Intercept) 2.38 1.54
## Number of obs: 4000, groups: id, 1000
##
## Fixed effects:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -2.7338 0.1259 -21.7 <2e-16 ***
## period 0.4257 0.0439 9.7 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr)
## period -0.700</code></pre>
</div>
Binary, beta, beta-binomial
https://www.rdatagen.net/post/binary-beta-beta-binomial/
Tue, 11 Sep 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/binary-beta-beta-binomial/<p>I’ve been working on updates for the <a href="http://www.rdatagen.net/page/simstudy/"><code>simstudy</code></a> package. In the past few weeks, a couple of folks independently reached out to me about generating correlated binary data. One user was not impressed by the copula algorithm that is already implemented. I’ve added an option to use an algorithm developed by <a href="https://www.tandfonline.com/doi/abs/10.1080/00031305.1991.10475828">Emrich and Piedmonte</a> in 1991, and will be incorporating that option soon in the functions <code>genCorGen</code> and <code>addCorGen</code>. I’ll write about that change some point soon.</p>
<p>A second researcher was trying to generate data using parameters that could be recovered using GEE model estimation. I’ve always done this by using an underlying mixed effects model, but of course, the marginal model parameter estimates might be quite different from the conditional parameters. (I’ve written about this a number of times, most recently <a href="https://www.rdatagen.net/post/mixed-effect-models-vs-gee/">here</a>.) As a result, the model and the data generation process don’t match, which may not be such a big deal, but is not so helpful when trying to illuminate the models.</p>
<p>One simple solution is using a <em>beta-binomial</em> mixture data generating process. The <a href="https://en.wikipedia.org/wiki/Beta_distribution"><em>beta</em> distribution</a> is a continuous probability distribution that is defined on the interval from 0 to 1, so it is not too unreasonable as a model for probabilities. If we assume that cluster-level probabilities have a beta distribution, and that within each cluster the individual outcomes have a <em>binomial</em> distribution defined by the cluster-specific probability, we will get the data generation process we are looking for.</p>
<div id="generating-the-clustered-data" class="section level3">
<h3>Generating the clustered data</h3>
<p>In these examples, I am using 500 clusters, each with a cluster size of 40 individuals. There is a cluster-level covariate <code>x</code> that takes on integer values between 1 and 3. The beta distribution is typically defined using two shape parameters, usually referenced as <span class="math inline">\(\alpha\)</span> and <span class="math inline">\(\beta\)</span>, where <span class="math inline">\(E(Y) = \alpha / (\alpha + \beta)\)</span>, and <span class="math inline">\(Var(Y) = (\alpha\beta)/[(\alpha + \beta)^2(\alpha + \beta + 1)]\)</span>. In <code>simstudy</code>, the distribution is specified using the mean probability (<span class="math inline">\(p_m\)</span>) and a <em>precision</em> parameter (<span class="math inline">\(\phi_\beta > 0\)</span>), which is passed through the variance argument. Under this specification, <span class="math inline">\(Var(Y) = p_m(1 - p_m)/(1 + \phi_\beta)\)</span>. Precision is inversely related to variability: lower precision means higher variability.</p>
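<p>For reference, one mapping between the two parameterizations that is consistent with the variance formula above sets <span class="math inline">\(shape_1 = p_m\phi_\beta\)</span> and <span class="math inline">\(shape_2 = (1 - p_m)\phi_\beta\)</span> (an assumption for illustration; it may not match the package internals exactly). A quick check with <code>rbeta</code>:</p>
<pre class="r"><code>pm  <- 0.35   # mean probability
phi <- 3      # precision
set.seed(1)
p <- rbeta(1e5, shape1 = pm * phi, shape2 = (1 - pm) * phi)
round(c(mean = mean(p), obs.var = var(p),
        theory.var = pm * (1 - pm) / (1 + phi)), 3)</code></pre>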
<p>In this simple simulation, the cluster probabilities are a function of the cluster-level covariate and precision parameter <span class="math inline">\(\phi_\beta\)</span>. Specifically</p>
<p><span class="math display">\[logodds(p_{clust}) = -2.0 + 0.65x.\]</span>
The binomial variable of interest <span class="math inline">\(b\)</span> is a function of <span class="math inline">\(p_{clust}\)</span> only, and represents a count of individuals in the cluster with a “success”:</p>
<pre class="r"><code>library(simstudy)
set.seed(87387)
phi.beta <- 3 # precision
n <- 40 # cluster size
def <- defData(varname = "n", formula = n,
dist = 'nonrandom', id = "cID")
def <- defData(def, varname = "x", formula = "1;3",
dist = 'uniformInt')
def <- defData(def, varname = "p", formula = "-2.0 + 0.65 * x",
variance = phi.beta, dist = "beta", link = "logit")
def <- defData(def, varname = "b", formula = "p", variance = n,
dist = "binomial")
dc <- genData(500, def)
dc</code></pre>
<pre><code>## cID n x p b
## 1: 1 40 2 0.101696930 4
## 2: 2 40 2 0.713156596 32
## 3: 3 40 1 0.020676443 2
## 4: 4 40 2 0.091444678 4
## 5: 5 40 2 0.139946091 6
## ---
## 496: 496 40 1 0.062513419 4
## 497: 497 40 1 0.223149651 5
## 498: 498 40 3 0.452904009 14
## 499: 499 40 2 0.005143594 1
## 500: 500 40 2 0.481283809 16</code></pre>
<p>The generated data with <span class="math inline">\(\phi_\beta = 3\)</span> is shown on the left below. Data sets with increasing precision (less variability) are shown to the right:</p>
<p><img src="https://www.rdatagen.net/post/2018-09-11-binary-beta-beta-binomial_files/figure-html/unnamed-chunk-2-1.png" width="1056" /></p>
<p>The relationship of <span class="math inline">\(\phi_\beta\)</span> and variance is made clear by evaluating the variance of the cluster probabilities at each level of <span class="math inline">\(x\)</span> and comparing these variance estimates with the theoretical values suggested by parameters specified in the data generation process:</p>
<pre class="r"><code>p.clust = 1/(1 + exp(2 - 0.65*(1:3)))
cbind(dc[, .(obs = round(var(p), 3)), keyby = x],
theory = round( (p.clust*(1 - p.clust))/(1 + phi.beta), 3))</code></pre>
<pre><code>## x obs theory
## 1: 1 0.041 0.041
## 2: 2 0.054 0.055
## 3: 3 0.061 0.062</code></pre>
</div>
<div id="beta-and-beta-binomial-regression" class="section level3">
<h3>Beta and beta-binomial regression</h3>
<p>Before getting to the GEE estimation, here are two less frequently used regression models: beta and beta-binomial regression. Beta regression may not be super-useful, because we would need to observe (and measure) the probabilities directly. In this case, we randomly generated the probabilities, so it is fair to estimate a regression model to recover the same parameters we used to generate the data! But, back in the real world, we might only observe <span class="math inline">\(\hat{p}\)</span>, which results from generating data based on the underlying true <span class="math inline">\(p\)</span>. This is where we will need the beta-binomial regression (and later, the GEE model).</p>
<p>First, here is the beta regression using package <code>betareg</code>, which provides quite good estimates of the two coefficients and the precision parameter <span class="math inline">\(\phi_\beta\)</span>, which is not so surprising given the large number of clusters in our sample:</p>
<pre class="r"><code>library(betareg)
model.beta <- betareg(p ~ x, data = dc, link = "logit")
summary(model.beta)</code></pre>
<pre><code>##
## Call:
## betareg(formula = p ~ x, data = dc, link = "logit")
##
## Standardized weighted residuals 2:
## Min 1Q Median 3Q Max
## -3.7420 -0.6070 0.0306 0.6699 3.4952
##
## Coefficients (mean model with logit link):
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -2.09663 0.12643 -16.58 <2e-16 ***
## x 0.70080 0.05646 12.41 <2e-16 ***
##
## Phi coefficients (precision model with identity link):
## Estimate Std. Error z value Pr(>|z|)
## (phi) 3.0805 0.1795 17.16 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Type of estimator: ML (maximum likelihood)
## Log-likelihood: 155.2 on 3 Df
## Pseudo R-squared: 0.2388
## Number of iterations: 13 (BFGS) + 1 (Fisher scoring)</code></pre>
<p>The beta-binomial regression model, which is estimated using package <code>aod</code>, is a reasonable model to fit in this case where we have observed binomial outcomes and unobserved underlying probabilities:</p>
<pre class="r"><code>library(aod)
model.betabinom <- betabin(cbind(b, n - b) ~ x, ~ 1, data = dc)
model.betabinom</code></pre>
<pre><code>## Beta-binomial model
## -------------------
## betabin(formula = cbind(b, n - b) ~ x, random = ~1, data = dc)
##
## Convergence was obtained after 100 iterations.
##
## Fixed-effect coefficients:
## Estimate Std. Error z value Pr(> |z|)
## (Intercept) -2.103e+00 1.361e-01 -1.546e+01 0e+00
## x 6.897e-01 6.024e-02 1.145e+01 0e+00
##
## Overdispersion coefficients:
## Estimate Std. Error z value Pr(> z)
## phi.(Intercept) 2.412e-01 1.236e-02 1.951e+01 0e+00
##
## Log-likelihood statistics
## Log-lik nbpar df res. Deviance AIC AICc
## -1.711e+03 3 497 1.752e+03 3.428e+03 3.428e+03</code></pre>
<p>A couple of interesting things to note here. First is that the coefficient estimates are pretty similar to the beta regression model. However, the standard errors are slightly higher, as they should be, since we are using only observed probabilities and not the true (albeit randomly selected or generated) probabilities. So, there is another level of uncertainty beyond sampling error.</p>
<p>Second, there is a new parameter: <span class="math inline">\(\phi_{overdisp}\)</span>. What is that, and how does that relate to <span class="math inline">\(\phi_\beta\)</span>? The variance of a binomial random variable <span class="math inline">\(Y\)</span> with a single underlying probability is <span class="math inline">\(Var(Y) = np(1-p)\)</span>. However, when the underlying probability varies across different subgroups (or clusters), the variance is augmented by <span class="math inline">\(\phi_{overdisp}\)</span>: <span class="math inline">\(Var(Y) = np(1-p)[1 + (n-1)\phi_{overdisp}]\)</span>. It turns out to be the case that <span class="math inline">\(\phi_{overdisp} = 1/(1+\phi_\beta)\)</span>:</p>
<pre class="r"><code>round(model.betabinom@random.param, 3) # from the beta - binomial model</code></pre>
<pre><code>## phi.(Intercept)
## 0.241</code></pre>
<pre class="r"><code>round(1/(1 + coef(model.beta)["(phi)"]), 3) # from the beta model</code></pre>
<pre><code>## (phi)
## 0.245</code></pre>
<p>The observed variances of the binomial outcome <span class="math inline">\(b\)</span> at each level of <span class="math inline">\(x\)</span> come quite close to the theoretical variances based on <span class="math inline">\(\phi_\beta\)</span>:</p>
<pre class="r"><code>phi.overdisp <- 1/(1 + phi.beta)

cbind(dc[, .(obs = round(var(b), 1)), keyby = x],
      theory = round(n * p.clust * (1 - p.clust) * (1 + (n - 1) * phi.overdisp), 1))</code></pre>
<pre><code>## x obs theory
## 1: 1 69.6 70.3
## 2: 2 90.4 95.3
## 3: 3 105.2 107.4</code></pre>
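<p>As a self-contained check of the variance formula above, we can draw cluster-level probabilities from a beta distribution and binomial counts from those probabilities, and compare the empirical variance with the formula. (The values of <code>n</code>, <code>p</code>, and <code>phi.beta</code> below are illustrative, not the estimates from the model.)</p>

```r
# illustrative check of Var(Y) = np(1-p)[1 + (n-1)*phi.overdisp]
# n, p, and phi.beta are assumed values, not taken from the fitted model
set.seed(123)

n <- 40
p <- 0.3
phi.beta <- 3
phi.overdisp <- 1/(1 + phi.beta)

# beta distribution parameterized by mean p and "precision" phi.beta
ps <- rbeta(1e5, shape1 = p * phi.beta, shape2 = (1 - p) * phi.beta)
y <- rbinom(1e5, size = n, prob = ps)

var(y)                                          # empirical variance
n * p * (1 - p) * (1 + (n - 1) * phi.overdisp)  # theoretical: 90.3
```

<p>With 100,000 draws, the empirical variance should land quite close to the theoretical value of 90.3.</p>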
</div>
<div id="gee-and-individual-level-data" class="section level3">
<h3>GEE and individual level data</h3>
<p>With individual-level binary outcomes (as opposed to the count data we were working with before), GEE models are appropriate. The code below generates individual-level data for each cluster:</p>
<pre class="r"><code>defI <- defDataAdd(varname = "y", formula = "p", dist = "binary")
di <- genCluster(dc, "cID", numIndsVar = "n", level1ID = "id")
di <- addColumns(defI, di)
di</code></pre>
<pre><code>## cID n x p b id y
## 1: 1 40 2 0.1016969 4 1 0
## 2: 1 40 2 0.1016969 4 2 0
## 3: 1 40 2 0.1016969 4 3 0
## 4: 1 40 2 0.1016969 4 4 0
## 5: 1 40 2 0.1016969 4 5 1
## ---
## 19996: 500 40 2 0.4812838 16 19996 0
## 19997: 500 40 2 0.4812838 16 19997 0
## 19998: 500 40 2 0.4812838 16 19998 1
## 19999: 500 40 2 0.4812838 16 19999 1
## 20000: 500 40 2 0.4812838 16 20000 0</code></pre>
<p>The GEE model provides estimates of the coefficients as well as the working correlation. If we assume an “exchangeable” correlation matrix, in which each individual is correlated with all other individuals in the cluster but is not correlated with individuals in other clusters, we will get a single correlation estimate, which is labeled as <em>alpha</em> in the GEE output:</p>
<pre class="r"><code>library(geepack)

geefit <- geeglm(y ~ x, family = "binomial", data = di,
                 id = cID, corstr = "exchangeable")
summary(geefit)</code></pre>
<pre><code>##
## Call:
## geeglm(formula = y ~ x, family = "binomial", data = di, id = cID,
## corstr = "exchangeable")
##
## Coefficients:
## Estimate Std.err Wald Pr(>|W|)
## (Intercept) -2.07376 0.14980 191.6 <2e-16 ***
## x 0.68734 0.06566 109.6 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Estimated Scale Parameters:
## Estimate Std.err
## (Intercept) 1 0.03235
##
## Correlation: Structure = exchangeable Link = identity
##
## Estimated Correlation Parameters:
## Estimate Std.err
## alpha 0.256 0.01746
## Number of clusters: 500 Maximum cluster size: 40</code></pre>
<p>In this case, <em>alpha</em> (<span class="math inline">\(\alpha\)</span>) is estimated at 0.26, which is quite close to the previous estimate of <span class="math inline">\(\phi_{overdisp}\)</span>, 0.24. So, it appears that if we have a target correlation <span class="math inline">\(\alpha\)</span>, we know the corresponding <span class="math inline">\(\phi_\beta\)</span> to use in the beta-binomial data generation process. That is, <span class="math inline">\(\phi_\beta = (1 - \alpha)/\alpha\)</span>.</p>
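<p>Since this mapping comes up whenever we want to target a particular within-cluster correlation, it may be convenient to wrap the two conversions in a pair of helper functions. (These are hypothetical helpers for illustration, not part of any package.)</p>

```r
# hypothetical helpers based on the relationship described above:
# alpha = 1/(1 + phi) and, equivalently, phi = (1 - alpha)/alpha
alphaToPhi <- function(alpha) (1 - alpha) / alpha
phiToAlpha <- function(phi) 1 / (1 + phi)

alphaToPhi(0.44)              # phi.beta needed to target alpha = 0.44
phiToAlpha(alphaToPhi(0.25))  # round trip recovers 0.25
```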
<p>While this is certainly not a proof of anything, let’s give it a go with a target <span class="math inline">\(\alpha = 0.44\)</span>:</p>
<pre class="r"><code>phi.beta.new <- (1 - 0.44)/0.44
def <- updateDef(def, "p", newvariance = phi.beta.new)

dc2 <- genData(500, def)
di2 <- genCluster(dc2, "cID", numIndsVar = "n", level1ID = "id")
di2 <- addColumns(defI, di2)

geefit <- geeglm(y ~ x, family = "binomial", data = di2,
                 id = cID, corstr = "exchangeable")
summary(geefit)</code></pre>
<pre><code>##
## Call:
## geeglm(formula = y ~ x, family = "binomial", data = di2, id = cID,
## corstr = "exchangeable")
##
## Coefficients:
## Estimate Std.err Wald Pr(>|W|)
## (Intercept) -1.7101 0.1800 90.3 < 2e-16 ***
## x 0.5685 0.0806 49.8 1.7e-12 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Estimated Scale Parameters:
## Estimate Std.err
## (Intercept) 1 0.0307
##
## Correlation: Structure = exchangeable Link = identity
##
## Estimated Correlation Parameters:
## Estimate Std.err
## alpha 0.444 0.0242
## Number of clusters: 500 Maximum cluster size: 40</code></pre>
</div>
<div id="addendum" class="section level3">
<h3>Addendum</h3>
<p>Above, I suggested that the estimator of the effect of <code>x</code> based on the beta model will have less variation than the estimator based on the beta-binomial model. I drew 5000 samples from the data generating process and estimated the models each time. Below is the density of the estimates from each model across all 5000 iterations. As expected, the beta-binomial process has more variability, as do the related estimates; we can see this in the relative “peakedness” of the beta density:</p>
<p><img src="https://www.rdatagen.net/img/post-betabin/betabetabin.png" /></p>
<p>Also based on these 5000 iterations, the GEE model estimation appears to be less efficient than the beta-binomial model. This is not surprising since the beta-binomial model was the actual process that generated the data (so it is truly the correct model). The GEE model is robust to mis-specification of the correlation structure, but the price we pay for that robustness is a slightly less precise estimate (even if we happen to get the correlation structure right):</p>
<p><img src="https://www.rdatagen.net/img/post-betabin/betabingee.png" /></p>
</div>
The power of stepped-wedge designs
https://www.rdatagen.net/post/alternatives-to-stepped-wedge-designs/
Tue, 28 Aug 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/alternatives-to-stepped-wedge-designs/<p>Just before heading out on vacation last month, I put up a <a href="https://www.rdatagen.net/post/by-vs-within/">post</a> that purported to compare stepped-wedge study designs with more traditional cluster randomized trials. Either because I rushed or was just lazy, I didn’t exactly do what I set out to do. I <em>did</em> confirm that a multi-site randomized clinical trial can be more efficient than a cluster randomized trial when there is variability across clusters. (I compared randomizing within a cluster with randomization by cluster.) But, this really had nothing to with stepped-wedge designs.</p>
<p>Here, I will try to rectify the shortcomings of that post by actually simulating data from a traditional stepped-wedge design and two variations on that theme with the aim of seeing which approach might be preferable. These variations were inspired by this extremely useful <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5718336/">paper</a> by Thompson et al. (If you stop reading here and go to the paper, I will consider mission accomplished.)</p>
<p>The key differences in the various designs are how many sites are exposed to the intervention and what the phase-in schedule looks like. In the examples that follow, I am assuming a study that lasts 24 weeks and with 50 total sites. Each site will include six patients per week. That means if we are collecting data for all sites over the entire study period, we will have <span class="math inline">\(24 \times 6 \times 50 = 7200\)</span> outcome measurements.</p>
<p>The most important assumption I am making, however, is that the investigators can introduce the intervention at a small number of sites during each time period (for example, because the intervention involves extensive training and there is a limited number of trainers). In this case, I am assuming that at most 10 sites can start the intervention at any point in time, and we must wait at least 4 weeks until the next wave can be started. (We can proceed slower than 4 weeks, of course, which surprisingly may be the best option.)</p>
<p>I am going to walk through the data generation process for each of the variations and then present the results of a series of power analyses to compare and contrast each design.</p>
<div id="stepped-wedge-design" class="section level3">
<h3>Stepped-wedge design</h3>
<p><img src="https://www.rdatagen.net/img/post-stepwedge/TradSW.png" /></p>
<p>In the stepped-wedge design, all clusters in a trial will receive the intervention at some point, but the start of the intervention will be staggered. The amount of time in each state (control or intervention) will differ for each site (or group of sites if there are waves of more than one site starting up at the same time).</p>
<p>In this design (and in the others as well) time is divided into discrete data collection/phase-in periods. In the schematic figure, the light blue sections are periods during which the sites are in a control state, and the darker blue are periods during which the sites are in the intervention state. Each period in this case is 4 weeks long.</p>
<p>Following the Thompson et al. <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5718336/">paper</a>, the periods can be characterized as pre-rollout (where no intervention occurs), rollout (where the intervention is introduced over time), and post-rollout (where all clusters are under intervention). Here, the rollout period includes periods two through five.</p>
<p>First, we define the data, which will largely be the same across the designs: six patients per week, an intervention effect of 0.33, a weekly time effect (which unfortunately is parameterized as “period”) of 0.02, and a within-cluster standard deviation of 3.</p>
<pre class="r"><code>library(simstudy)

defS <- defData(varname = "n", formula = 6,
                dist = "nonrandom", id = "site")
defS <- defData(defS, varname = "siteInt", formula = 0,
                variance = 1, dist = "normal")

defP <- defDataAdd(varname = "rx",
                   formula = "(start <= period) * everTrt",
                   dist = "nonrandom")

defI <- defDataAdd(varname = "Y",
                   formula = "10 + rx * 0.33 + period * 0.02 + siteInt",
                   variance = 9, dist = "normal")</code></pre>
<p>Now, we actually generate the data, starting with the site level data, then the period data, and then the individual patient level data. Note that the intervention is phased in every 4 weeks so that by the end of the 24 weeks all 5 waves are operating under the intervention:</p>
<pre class="r"><code>set.seed(111)
dS <- genData(50, defS)
dS[, start := rep((1:5)*4, each = 10)]
dS[, everTrt := 1]
dS[site %in% c(1, 2, 11, 12, 49, 50)] # review a subset</code></pre>
<pre><code>## site n siteInt start everTrt
## 1: 1 6 0.2352207 4 1
## 2: 2 6 -0.3307359 4 1
## 3: 11 6 -0.1736741 8 1
## 4: 12 6 -0.4065988 8 1
## 5: 49 6 2.4856616 20 1
## 6: 50 6 1.9599817 20 1</code></pre>
<pre class="r"><code># weekly data
dP <- addPeriods(dtName = dS, nPeriods = 24, idvars = "site")
dP <- addColumns(defP, dP)
dP[site %in% c(3, 17) & period < 5] # review a subset</code></pre>
<pre><code>## site period n siteInt start everTrt timeID rx
## 1: 3 0 6 -0.31162382 4 1 49 0
## 2: 3 1 6 -0.31162382 4 1 50 0
## 3: 3 2 6 -0.31162382 4 1 51 0
## 4: 3 3 6 -0.31162382 4 1 52 0
## 5: 3 4 6 -0.31162382 4 1 53 1
## 6: 17 0 6 -0.08585101 8 1 385 0
## 7: 17 1 6 -0.08585101 8 1 386 0
## 8: 17 2 6 -0.08585101 8 1 387 0
## 9: 17 3 6 -0.08585101 8 1 388 0
## 10: 17 4 6 -0.08585101 8 1 389 0</code></pre>
<pre class="r"><code># patient data
dI <- genCluster(dtClust = dP, cLevelVar = "timeID", numIndsVar = "n",
level1ID = "id")
dI <- addColumns(defI, dI)
dI</code></pre>
<pre><code>## site period n siteInt start everTrt timeID rx id Y
## 1: 1 0 6 0.2352207 4 1 1 0 1 10.810211
## 2: 1 0 6 0.2352207 4 1 1 0 2 14.892854
## 3: 1 0 6 0.2352207 4 1 1 0 3 12.977948
## 4: 1 0 6 0.2352207 4 1 1 0 4 11.311097
## 5: 1 0 6 0.2352207 4 1 1 0 5 10.760508
## ---
## 7196: 50 23 6 1.9599817 20 1 1200 1 7196 11.317432
## 7197: 50 23 6 1.9599817 20 1 1200 1 7197 7.909369
## 7198: 50 23 6 1.9599817 20 1 1200 1 7198 13.048293
## 7199: 50 23 6 1.9599817 20 1 1200 1 7199 17.625904
## 7200: 50 23 6 1.9599817 20 1 1200 1 7200 7.147883</code></pre>
<p>Here is a plot of the site level averages at each time point:</p>
<pre class="r"><code>library(ggplot2)

dSum <- dI[, .(Y = mean(Y)), keyby = .(site, period, rx, everTrt, start)]

ggplot(data = dSum, aes(x = period, y = Y, group = interaction(site, rx))) +
  geom_line(aes(color = factor(rx))) +
  facet_grid(factor(start, labels = c(1:5)) ~ .) +
  scale_x_continuous(breaks = seq(0, 23, by = 4), name = "week") +
  scale_color_manual(values = c("#b8cce4", "#4e81ba")) +
  theme(panel.grid = element_blank(),
        legend.position = "none")</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-08-28-alternatives-to-stepped-wedge-designs_files/figure-html/unnamed-chunk-3-1.png" width="672" /></p>
<p>Finally, we can fit a linear mixed effects model to estimate the treatment effect:</p>
<pre class="r"><code>library(lme4)
library(broom)
tidy(lmer(Y ~ rx + period + (1|site), data = dI))</code></pre>
<pre><code>## term estimate std.error statistic group
## 1 (Intercept) 9.78836231 0.184842722 52.955086 fixed
## 2 rx 0.35246094 0.122453829 2.878317 fixed
## 3 period 0.02110481 0.007845705 2.689983 fixed
## 4 sd_(Intercept).site 1.21303055 NA NA site
## 5 sd_Observation.Residual 2.99488532 NA NA Residual</code></pre>
</div>
<div id="stepped-wedge-using-rollout-stage-only" class="section level3">
<h3>Stepped-wedge using “rollout” stage only</h3>
<p><img src="https://www.rdatagen.net/img/post-stepwedge/SWro.png" /></p>
<p>The Thompson et al. paper argued that if we limit the study to the rollout period only (periods 2 through 5 in the example above) but increase the length of the periods (here, from 4 to 6 weeks), we can actually increase power. In this case, there will be one wave of 10 sites that never receives the intervention.</p>
<p>The data generation process is exactly the same as above, except the statement defining the length of periods (6 weeks instead of 4 weeks) and starting point (week 0 vs. week 4) is slightly changed:</p>
<pre class="r"><code>dS[, start := rep((0:4)*6, each = 10)]</code></pre>
<p>So the site level data set with starting points at 0, 6, 12, and 18 weeks for each of the four waves that ever receive treatment looks like this:</p>
<pre><code>## site n siteInt start everTrt
## 1: 1 6 0.2352207 0 1
## 2: 2 6 -0.3307359 0 1
## 3: 11 6 -0.1736741 6 1
## 4: 12 6 -0.4065988 6 1
## 5: 49 6 2.4856616 24 1
## 6: 50 6 1.9599817 24 1</code></pre>
<p>And the data generated under this scenario looks like:</p>
<p><img src="https://www.rdatagen.net/post/2018-08-28-alternatives-to-stepped-wedge-designs_files/figure-html/unnamed-chunk-7-1.png" width="672" /></p>
<p>Here is the model estimation:</p>
<pre class="r"><code>tidy(lmer(Y ~ rx + period + (1|site), data = dI))</code></pre>
<pre><code>## term estimate std.error statistic group
## 1 (Intercept) 9.79022407 0.185294936 52.835897 fixed
## 2 rx 0.30707559 0.122414620 2.508488 fixed
## 3 period 0.02291619 0.006378367 3.592800 fixed
## 4 sd_(Intercept).site 1.21153700 NA NA site
## 5 sd_Observation.Residual 2.99490926 NA NA Residual</code></pre>
</div>
<div id="staggered-cluster-randomized-trial" class="section level3">
<h3>Staggered cluster randomized trial</h3>
<p><img src="https://www.rdatagen.net/img/post-stepwedge/StagCRG.png" /></p>
<p>If we wanted to conduct a cluster randomized trial but were able to phase in the intervention over time as we have been assuming, this design is the closest we could get. In this example with 50 sites and five phase-in periods, the intervention waves (here waves 1, 3, 5, 7, and 9) would each include five clusters. The respective control waves (2, 4, 6, 8, and 10) would also have five clusters each. And since we are assuming five phase-in periods, each wave will be in the study for eight weeks: the first four weeks comprise the “pre” measurement period, and the second four weeks the “post” measurement period.</p>
<p>The problem with this design relative to all the others discussed here is that the amount of data collected for each site is considerably reduced. As a result, this design is going to be much less efficient (hence less powerful) than the others. So much so, that I do not even generate data for this design (though I did confirm this using simulations not shown here).</p>
</div>
<div id="staggered-cluster-randomized-trial-with-continued-measurement" class="section level3">
<h3>Staggered cluster randomized trial with continued measurement</h3>
<p><img src="https://www.rdatagen.net/img/post-stepwedge/StagCRT.png" /></p>
<p>This is the staggered CRT just described, but we collect data for all 24 weeks for all of the sites. In this case, we are not at a disadvantage with respect to the number of measurements, so it might be a competitive design. This version of the staggered CRT could also be viewed as a traditional stepped-wedge design with controls.</p>
<p>The data generation is identical to the traditional stepped-wedge design we started with, except that only half of the sites are ever treated:</p>
<pre class="r"><code>dS[, everTrt := rep(0:1)]</code></pre>
<p>Here is the plot, with the control arm on the left, and the intervention arm on the right. The control arm is never introduced to the intervention.</p>
<p><img src="https://www.rdatagen.net/post/2018-08-28-alternatives-to-stepped-wedge-designs_files/figure-html/unnamed-chunk-10-1.png" width="672" /></p>
</div>
<div id="conducting-a-power-analysis-using-simulation" class="section level3">
<h3>Conducting a power analysis using simulation</h3>
<p>We are ultimately interested in assessing how much information each study design can provide. Power analyses under different conditions are one way to measure this.</p>
<p>Since one of my missions here is to illustrate as much <code>R</code> code as possible, here is how I conduct the power analysis for the traditional stepped-wedge design:</p>
<pre class="r"><code>powerStepWedge1 <- function(x) {

  # generate data
  dS <- genData(50, defS)
  dS[, start := rep((1:5)*4, each = 10)]
  dS[, everTrt := 1]

  dP <- addPeriods(dtName = dS, nPeriods = 24, idvars = "site")
  dP <- addColumns(defP, dP)

  dI <- genCluster(dtClust = dP, cLevelVar = "timeID",
                   numIndsVar = "n", level1ID = "id")
  dI <- addColumns(defI, dI)

  # fit model
  data.frame(summary(lmer(Y ~ rx + period + (1|site), data = dI))$coef)
}

res <- vector("list", length = 5)
i <- 0

for (icc in seq(0, 0.04, .01)) {

  i <- i + 1

  # update data definition based on new ICC
  between.var <- iccRE(ICC = icc, dist = "normal", varWithin = 9)
  defS <- updateDef(defS, changevar = "siteInt", newvariance = between.var)

  # generate 200 data sets and fit models
  resSW1 <- lapply(1:200, FUN = powerStepWedge1)

  # estimate and store power
  pSW1 <- mean( unlist(lapply(resSW1, `[`, 2, 3)) >= 1.96)
  res[[i]] <- data.table(icc, pSW1)
}

rbindlist(res)</code></pre>
<pre><code>## icc pSW1
## 1: 0.00 0.940
## 2: 0.01 0.855
## 3: 0.02 0.850
## 4: 0.03 0.830
## 5: 0.04 0.780</code></pre>
</div>
<div id="comparing-power-of-three-different-designs" class="section level3">
<h3>Comparing power of three different designs</h3>
<p>The next figure shows the estimated power for all three designs based on the same effect size and a range of ICCs. The SW rollout-only design consistently equals or outperforms the others. When the ICC is moderate to large (in this case > 0.06), the traditional SW design performs equally well. The design that comes closest to a staggered cluster randomized trial, the SW + controls, performs well on the lower range of ICCs but is less compelling with more between-site variation.</p>
<p><img src="https://www.rdatagen.net/img/post-stepwedge/power3.png" /></p>
<p><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5718336/">Thompson et al.</a> provide more nuance that can improve power under different conditions - mostly involving changing period lengths or adding control-only sites, or both - but these simulations suggest that some sort of stepped-wedge design (either limited to the rollout phase or not) will generally be advantageous, at least under the strict requirements that I established to frame the design.</p>
<p>All of this has been done in the context of a normally distributed outcome. At some point, I will certainly re-do this comparison with a binary outcome.</p>
</div>
<div id="addendum-cluster-randomized-trial" class="section level3">
<h3>Addendum: cluster randomized trial</h3>
<p><img src="https://www.rdatagen.net/img/post-stepwedge/CRT.png" /></p>
<p>A traditional cluster randomized trial was not really under consideration, because we declared that we could only deliver the intervention to 10 sites at any one time. However, it is illustrative to include this design to make it clear that a CRT is best used when variability across sites is at its lowest (i.e., when the ICC is at or very close to zero). In this example, 25 sites are randomized to receive the intervention starting in the first week and 25 sites never receive the intervention. Data are collected for all 24 weeks for each of the 50 clusters.</p>
<p><img src="https://www.rdatagen.net/post/2018-08-28-alternatives-to-stepped-wedge-designs_files/figure-html/unnamed-chunk-12-1.png" width="672" /></p>
<p>The simulations confirm findings that the CRT is more efficient than stepped-wedge designs when the ICC is close to zero, but pales in comparison even with ICCs as low as 0.01:</p>
<p><img src="https://www.rdatagen.net/img/post-stepwedge/power2.png" /></p>
</div>
Multivariate ordinal categorical data generation
https://www.rdatagen.net/post/multivariate-ordinal-categorical-data-generation/
Wed, 15 Aug 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/multivariate-ordinal-categorical-data-generation/<p>An economist contacted me about the ability of <code>simstudy</code> to generate correlated ordinal categorical outcomes. He is trying to generate data as an aide to teaching cost-effectiveness analysis, and is hoping to simulate responses to a quality-of-life survey instrument, the EQ-5D. The particular instrument has five questions related to mobility, self-care, activities, pain, and anxiety. Each item has three possible responses: (1) no problems, (2) some problems, and (3) a lot of problems. Although the instrument has been designed so that each item is orthogonal (independent) from the others, it is impossible to avoid correlation. So, in generating (and analyzing) these kinds of data, it is important to take this into consideration.</p>
<p>I had recently added functions to generate correlated data from non-normal distributions, and I had also created a function that generates ordinal categorical outcomes, but there was nothing to address the data generation problem he had in mind. After a little back and forth, I came up with some code that will hopefully address his needs. And I hope the new function <code>genCorOrdCat</code> is general enough to support other data generation needs as well. (For the moment, this version is only available for download from the <a href="https://github.com/kgoldfeld/simstudy">github</a> site, but will be on CRAN sometime soon.)</p>
<div id="general-approach" class="section level2">
<h2>General approach</h2>
<p>The data generation algorithm assumes an underlying latent logistic process that I’ve described <a href="https://www.rdatagen.net/post/a-hidden-process-part-2-of-2/">earlier</a>. In the context of a set of multivariate responses, there is a latent process for each of the responses. For a single response, we can randomly select a value from the logistic distribution and determine the response region in which this value falls to assign the randomly generated response. To generate correlated responses, we generate correlated values from the logistic distribution using a standard normal copula-like approach, just as I <a href="https://www.rdatagen.net/post/correlated-data-copula/">did</a> to generate multivariate data from non-normal distributions.</p>
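<p>For a single response, the latent-process step described above amounts to just a few lines; this is an illustrative sketch of the idea, not the actual package code:</p>

```r
# sketch: map one logistic draw to an ordinal response, given baseline
# probabilities for a four-category item (probabilities are illustrative)
baseprobs <- c(0.1, 0.2, 0.3, 0.4)

# thresholds (response regions) on the latent logistic scale
cuts <- qlogis(cumsum(baseprobs)[-length(baseprobs)])

z <- rlogis(1)                        # draw from the latent process
response <- findInterval(z, cuts) + 1 # assigned category, 1 through 4
```

<p>Correlated responses come from replacing the single <code>rlogis</code> draw with correlated logistic values generated via the copula approach.</p>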
<p>The new function <code>genCorOrdCat</code> requires specification of the baseline probabilities for each of the items in matrix form. The function also provides an argument to incorporate covariates, much like its univariate counterpart <code>genOrdCat</code> <a href="https://www.rdatagen.net/post/generating-and-displaying-likert-type-data/">does</a>. The correlation is specified either with a single correlation coefficient <span class="math inline">\(\rho\)</span> and a correlation structure (“independence”, “compound symmetry”, or “AR-1”) or by specifying the correlation matrix directly.</p>
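<p>As an alternative to the <code>rho</code>/<code>corstr</code> specification, the correlation matrix can be passed in directly. Here is a sketch of what that might look like (assuming the direct-specification argument is named <code>corMatrix</code>, as in the other correlated-data functions):</p>

```r
library(simstudy)

# three items, each with four equally likely responses (illustrative)
baseprobs <- matrix(0.25, nrow = 3, ncol = 4)

# target correlation matrix, specified directly
C <- matrix(c(1.0, 0.5, 0.2,
              0.5, 1.0, 0.5,
              0.2, 0.5, 1.0), nrow = 3)

dT <- genData(1000)
dX <- genCorOrdCat(dT, adjVar = NULL, baseprobs = baseprobs,
                   prefix = "q", corMatrix = C)
```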
</div>
<div id="examples" class="section level2">
<h2>Examples</h2>
<p>In the following examples, I assume four items each with four possible responses - which is different from the EQ-5D.</p>
<div id="high-correlation" class="section level4">
<h4>High correlation</h4>
<p>In the first simulation items two and three share the same uniform distribution, and items one and four each have their own distribution:</p>
<pre class="r"><code>baseprobs <- matrix(c(0.10, 0.20, 0.30, 0.40,
                      0.25, 0.25, 0.25, 0.25,
                      0.25, 0.25, 0.25, 0.25,
                      0.40, 0.30, 0.20, 0.10),
                    nrow = 4, byrow = TRUE)

# generate the data

set.seed(3333)
dT <- genData(100000)

dX <- genCorOrdCat(dT, adjVar = NULL, baseprobs = baseprobs,
                   prefix = "q", rho = 0.8, corstr = "cs")
dX</code></pre>
<pre><code>## id q1 q2 q3 q4
## 1: 1 2 1 1 1
## 2: 2 1 1 1 1
## 3: 3 2 2 1 1
## 4: 4 3 3 3 2
## 5: 5 4 2 3 1
## ---
## 99996: 99996 3 4 4 3
## 99997: 99997 2 1 1 2
## 99998: 99998 2 2 2 2
## 99999: 99999 3 1 1 1
## 100000: 100000 4 4 4 4</code></pre>
<p>Here is a correlation plot that tries to help us visualize what high correlation looks like under this context. (The plots are generated using function <code>ggpairs</code> from the package <code>GGally</code>. Details of the plot are provided in the addendum.) In the plot, the size of the circles represents the frequency of observations with a particular combination; the larger the circle, the more times we observe a combination. The correlation that is reported is the estimated <em>Spearman’s Rho</em>, which is appropriate for ordered or ranked data.</p>
<p>If you look at the plot in the third row and second column of this first example, the observations are mostly located near the diagonal - strong evidence of high correlation.</p>
<p><img src="https://www.rdatagen.net/post/2018-08-15-multivariate-ordinal-categorical-data-generation_files/figure-html/unnamed-chunk-1-1.png" width="768" /></p>
</div>
<div id="low-correlation" class="section level4">
<h4>Low correlation</h4>
<pre class="r"><code>dX <- genCorOrdCat(dT, adjVar = NULL, baseprobs = baseprobs,
                   prefix = "q", rho = 0.05, corstr = "cs")</code></pre>
<p>In this second example with very little correlation, the clustering around the diagonal in the third row/second column is less pronounced.</p>
<p><img src="https://www.rdatagen.net/post/2018-08-15-multivariate-ordinal-categorical-data-generation_files/figure-html/unnamed-chunk-2-1.png" width="768" /></p>
</div>
<div id="same-distribution" class="section level3">
<h3>Same distribution</h3>
<p>I leave you with two plots that are based on responses that share the same distributions:</p>
<pre class="r"><code>baseprobs <- matrix(c(0.1, 0.2, 0.3, 0.4,
                      0.1, 0.2, 0.3, 0.4,
                      0.1, 0.2, 0.3, 0.4,
                      0.1, 0.2, 0.3, 0.4),
                    nrow = 4, byrow = TRUE)</code></pre>
<p> </p>
<div id="high-correlation-1" class="section level4">
<h4>High correlation</h4>
<p><img src="https://www.rdatagen.net/post/2018-08-15-multivariate-ordinal-categorical-data-generation_files/figure-html/unnamed-chunk-3-1.png" width="768" /></p>
</div>
<div id="low-correlation-1" class="section level4">
<h4>Low correlation</h4>
<p><img src="https://www.rdatagen.net/post/2018-08-15-multivariate-ordinal-categorical-data-generation_files/figure-html/unnamed-chunk-4-1.png" width="768" /></p>
</div>
</div>
</div>
<div id="addendum" class="section level2">
<h2>Addendum</h2>
<p>In case you are interested in seeing how I generated the correlation plots, here is the code:</p>
<pre class="r"><code>library(GGally)

mycor <- function(data, mapping, sgnf = 3, size = 8, ...) {

  xCol <- as.character(mapping[[1]][[2]])
  yCol <- as.character(mapping[[2]][[2]])

  xVal <- data[[xCol]]
  yVal <- data[[yCol]]

  rho <- Hmisc::rcorr(xVal, yVal, type = "spearman")$r[2, 1]
  loc <- data.table(x = .5, y = .5)

  p <- ggplot(data = loc, aes(x = x, y = y)) +
    xlim(0:1) +
    ylim(0:1) +
    theme(panel.background = element_rect(fill = "grey95"),
          panel.grid = element_blank()) +
    labs(x = NULL, y = NULL) +
    geom_text(size = size, color = "#8c8cc2",
              label =
                paste("rank corr:\n", round(rho, sgnf), sep = "", collapse = ""))
  p
}

my_lower <- function(data, mapping, ...) {

  xCol <- as.character(mapping[[1]][[2]])
  yCol <- as.character(mapping[[2]][[2]])

  dx <- data.table(data)[, c(xCol, yCol), with = FALSE]
  ds <- dx[, .N,
           keyby = .(eval(parse(text = xCol)), eval(parse(text = yCol)))]
  setnames(ds, c("parse", "parse.1"), c(xCol, yCol))

  p <- ggplot(data = ds, mapping = mapping) +
    geom_point(aes(size = N), color = "#adadd4") +
    scale_x_continuous(expand = c(.2, 0)) +
    scale_y_continuous(expand = c(.2, 0)) +
    theme(panel.grid = element_blank())
  p
}

my_diag <- function(data, mapping, ...) {
  p <- ggplot(data = data, mapping = mapping) +
    geom_bar(aes(y = (..count..)/sum(..count..)), fill = "#8c8cc2") +
    theme(panel.grid = element_blank())
  p
}

ggpairs(dX[, -"id"], lower = list(continuous = my_lower),
        diag = list(continuous = my_diag),
        upper = list(continuous = wrap(mycor, sgnf = 2, size = 3.5)))</code></pre>
</div>
Randomize by, or within, cluster?
https://www.rdatagen.net/post/by-vs-within/
Thu, 19 Jul 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/by-vs-within/<p>I am involved with a <em>stepped-wedge</em> designed study that is exploring whether we can improve care for patients with end-stage disease who show up in the emergency room. The plan is to train nurses and physicians in palliative care. (A while ago, I <a href="https://www.rdatagen.net/post/using-simulation-for-power-analysis-an-example/">described</a> what the stepped wedge design is.)</p>
<p>Under this design, 33 sites around the country will receive the training at some point, which is no small task. (Fortunately, as the statistician, this is a part of the study in which I have little involvement.) After hearing about this ambitious plan, a colleague asked why we didn’t just randomize half the sites to the intervention and conduct a more standard cluster randomized trial, in which a site would either get the training or not. I quickly simulated some data to see what we would give up (or gain) if we had decided to go that route. (It is actually a moot point, since there would be no way to simultaneously train 16 or so sites, which is why we opted for the stepped-wedge design in the first place.)</p>
<p>I simplified things a bit by comparing randomization <em>within</em> site with randomization <em>by</em> site. The stepped wedge design is essentially a within-site randomization, except that the two treatment arms are defined at different time points, and things are complicated a bit because there might be time by intervention confounding. But, I won’t deal with that here.</p>
<div id="simulate-data" class="section level3">
<h3>Simulate data</h3>
<pre class="r"><code>library(simstudy)

# define data
cvar <- iccRE(0.20, dist = "binary")

d <- defData(varname = "a", formula = 0, variance = cvar,
             dist = "normal", id = "cid")
d <- defData(d, varname = "nper", formula = 100, dist = "nonrandom")

da <- defDataAdd(varname = "y", formula = "-1 + .4*rx + a",
                 dist = "binary", link = "logit")</code></pre>
</div>
<div id="randomize-within-cluster" class="section level3">
<h3>Randomize <em>within</em> cluster</h3>
<pre class="r"><code>set.seed(11265)
dc <- genData(100, d)
di <- genCluster(dc, "cid", "nper", "id")
di <- trtAssign(di, strata = "cid", grpName = "rx")
di <- addColumns(da, di)
di</code></pre>
<pre><code>## id rx cid a nper y
## 1: 1 1 1 -0.4389391 100 1
## 2: 2 0 1 -0.4389391 100 0
## 3: 3 1 1 -0.4389391 100 0
## 4: 4 0 1 -0.4389391 100 0
## 5: 5 0 1 -0.4389391 100 1
## ---
## 9996: 9996 0 100 -1.5749783 100 0
## 9997: 9997 1 100 -1.5749783 100 0
## 9998: 9998 0 100 -1.5749783 100 0
## 9999: 9999 1 100 -1.5749783 100 0
## 10000: 10000 1 100 -1.5749783 100 0</code></pre>
<p>I fit a <strong>conditional</strong> mixed effects model, and then manually calculate the conditional log-odds ratio from the data, just to give a better sense of what the conditional effect is (see <a href="https://www.rdatagen.net/post/mixed-effect-models-vs-gee/">earlier post</a> for more on conditional vs. marginal effects).</p>
<pre class="r"><code>library(lme4)
rndTidy(glmer(y ~ rx + (1 | cid), data = di, family = binomial))</code></pre>
<pre><code>## term estimate std.error statistic p.value group
## 1 (Intercept) -0.86 0.10 -8.51 0 fixed
## 2 rx 0.39 0.05 8.45 0 fixed
## 3 sd_(Intercept).cid 0.95 NA NA NA cid</code></pre>
<pre class="r"><code>calc <- di[, .(estp = mean(y)), keyby = .(cid, rx)]
calc[, lo := log(odds(estp))]
calc[rx == 1, mean(lo)] - calc[rx == 0, mean(lo)] </code></pre>
<pre><code>## [1] 0.3985482</code></pre>
<p>Next, I fit a <strong>marginal</strong> model and calculate the effect manually as well.</p>
<pre class="r"><code>library(geepack)
rndTidy(geeglm(y ~ rx, data = di, id = cid, corstr = "exchangeable",
family = binomial))</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) -0.74 0.09 67.09 0
## 2 rx 0.32 0.04 74.80 0</code></pre>
<pre class="r"><code>log(odds(di[rx==1, mean(y)])/odds(di[rx==0, mean(y)]))</code></pre>
<pre><code>## [1] 0.323471</code></pre>
<p>As <a href="https://www.rdatagen.net/post/log-odds/">expected</a>, the marginal estimate of the effect is less than the conditional effect.</p>
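<p>The size of that attenuation can be roughly anticipated analytically. A commonly cited approximation (Zeger et al. 1988) scales the conditional log-odds ratio by a function of the random effect variance. The sketch below is my own, and the variance of 0.82 is the value implied by the ICC of 0.20 used above:</p>

```r
# Approximate marginal log-odds ratio implied by a conditional log-odds
# ratio and a normal random effect variance s2 (Zeger et al. 1988):
#   beta_marginal ~ beta_conditional / sqrt(1 + c^2 * s2),
# with c = 16 * sqrt(3) / (15 * pi)
attenuate <- function(beta_cond, s2) {
  cc <- 16 * sqrt(3) / (15 * pi)
  beta_cond / sqrt(1 + cc^2 * s2)
}

attenuate(0.4, 0.82)  # about 0.35, in the neighborhood of the GEE estimate
```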
</div>
<div id="randomize-by-cluster" class="section level3">
<h3>Randomize <em>by</em> cluster</h3>
<p>Next we repeat all of this, though randomization is at the cluster level.</p>
<pre class="r"><code>dc <- genData(100, d)
dc <- trtAssign(dc, grpName = "rx")
di <- genCluster(dc, "cid", "nper", "id")
di <- addColumns(da, di)
di</code></pre>
<pre><code>## cid rx a nper id y
## 1: 1 0 0.8196365 100 1 0
## 2: 1 0 0.8196365 100 2 1
## 3: 1 0 0.8196365 100 3 0
## 4: 1 0 0.8196365 100 4 0
## 5: 1 0 0.8196365 100 5 0
## ---
## 9996: 100 1 -0.1812079 100 9996 1
## 9997: 100 1 -0.1812079 100 9997 0
## 9998: 100 1 -0.1812079 100 9998 0
## 9999: 100 1 -0.1812079 100 9999 1
## 10000: 100 1 -0.1812079 100 10000 0</code></pre>
<p>Here is the conditional estimate of the effect:</p>
<pre class="r"><code>rndTidy(glmer(y~rx + (1|cid), data = di, family = binomial))</code></pre>
<pre><code>## term estimate std.error statistic p.value group
## 1 (Intercept) -0.71 0.15 -4.69 0.00 fixed
## 2 rx 0.27 0.21 1.26 0.21 fixed
## 3 sd_(Intercept).cid 1.04 NA NA NA cid</code></pre>
<p>And here is the marginal estimate:</p>
<pre class="r"><code>rndTidy(geeglm(y ~ rx, data = di, id = cid, corstr = "exchangeable",
               family = binomial))</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) -0.56 0.13 18.99 0.00
## 2 rx 0.21 0.17 1.46 0.23</code></pre>
<p>While the within- and by-site randomization estimates are quite different, we haven’t really learned anything, since those differences could have been due to chance. So, I created 500 data sets under different assumptions to see what the expected estimate would be as well as the variability of the estimate.</p>
</div>
<div id="fixed-icc-varied-randomization" class="section level3">
<h3>Fixed ICC, varied randomization</h3>
<p>From this first set of simulations, the big take away is that randomizing <em>within</em> clusters provides an unbiased estimate of the conditional effect, but so does randomizing <em>by</em> site. The big disadvantage of randomizing <em>by</em> site is the added variability of the conditional estimate. The attenuation of the marginal effect estimates under both scenarios has nothing to do with randomization, but results from intrinsic variability across sites.</p>
<p><img src="https://www.rdatagen.net/img/post-condmarg/pRT.png" /></p>
</div>
<div id="fixed-randomization-varied-icc" class="section level3">
<h3>Fixed randomization, varied ICC</h3>
<p>This next figure isolates the effect of across-site variability on the estimates. In this case, randomization is only <em>by</em> site (i.e. no within site randomization), but the ICC is set at 0.05 and 0.20. For the conditional model, the ICC has no impact on the expected value of the log-odds ratio, but when variability is higher (ICC = 0.20), the standard error of the estimate increases. For the marginal model, the ICC has an impact on <em>both</em> the expected value and standard error of the estimate. In the case with a low ICC (top row in plot), the marginal and conditional estimates are quite similar.</p>
<p><img src="https://www.rdatagen.net/img/post-condmarg/pIT.png" /></p>
</div>
How the odds ratio confounds: a brief study in a few colorful figures
https://www.rdatagen.net/post/log-odds/
Tue, 10 Jul 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/log-odds/<p>The odds ratio always confounds: while it may be constant across different groups or clusters, the risk ratios or risk differences across those groups may vary quite substantially. This makes it really hard to interpret an effect. And then there is inconsistency between marginal and conditional odds ratios, a topic I seem to be visiting frequently, most recently last <a href="https://www.rdatagen.net/post/mixed-effect-models-vs-gee/">month</a>.</p>
<p>My aim here is to generate a few figures that might highlight some of these issues.</p>
<p>Assume that there is some exposure (indicated by the use of a <span class="math inline">\(1\)</span> or <span class="math inline">\(0\)</span> subscript) applied across a number of different groups or clusters of people (think different regions, hospitals, schools, etc.) - indicated by some number or letter <span class="math inline">\(i\)</span>. Furthermore, assume that the total number exposed at each location is the same as the number unexposed: <span class="math inline">\(N_{i0} = N_{i1} = N = 100\)</span>.</p>
<p>The number of folks with exposure at a particular location <span class="math inline">\(i\)</span> who have a poor outcome is <span class="math inline">\(n_{i1}\)</span> and the number with a good outcome is <span class="math inline">\(N-n_{i1}\)</span>. Likewise, the corresponding measures for folks not exposed are <span class="math inline">\(n_{i0}\)</span> and <span class="math inline">\(N-n_{i0}\)</span>. The probabilities of a poor outcome for exposed and non-exposed are <span class="math inline">\(n_{i1}/N\)</span> and <span class="math inline">\(n_{i0}/N\)</span>. The relative risk of a poor outcome for those exposed compared to those non exposed is</p>
<p><span class="math display">\[\text{RR}_i = \frac{n_{i1}/N}{n_{i0}/N} = \frac{n_{i1}}{n_{i0}},\]</span>
the risk difference between exposed and unexposed groups is</p>
<p><span class="math display">\[ \text{RD}_i = \frac{n_{i1}}{N}-\frac{n_{i0}}{N} = \frac{n_{i1} - n_{i0}}{N},\]</span>
and the odds ratio is</p>
<p><span class="math display">\[ \text{OR}_i = \frac{[n_{i1}/N] / [(N - n_{i1})/N]}{[n_{i0}/N] / [(N - n_{i0})/N]} \]</span>
<span class="math display">\[= \frac{n_{i1}(N-n_{i0})}{n_{i0}(N-n_{i1})}.\]</span></p>
<p>The simple conditional logistic regression model that includes a group-level random effect <span class="math inline">\(b_i\)</span> assumes a constant odds ratio between exposed and unexposed individuals across the different clusters:</p>
<p><span class="math display">\[\text{logit} (Y_{ij}) = \beta_0 + \beta_1 E_{ij} + b_i,\]</span>
where <span class="math inline">\(E_{ij}\)</span> is an exposure indicator for person <span class="math inline">\(j\)</span> in group <span class="math inline">\(i\)</span>. The parameter <span class="math inline">\(\text{exp}(\beta_1)\)</span> is an estimate of the odds ratio defined above.</p>
<p>The point of all of this is to illustrate that although the odds-ratio is the same across all groups/clusters (i.e., there is no <span class="math inline">\(i\)</span> subscript in <span class="math inline">\(\beta_1\)</span> and <span class="math inline">\(\text{OR}_i = \text{OR}\)</span>), the risk ratios and risk differences <em>can</em> vary greatly across groups, particularly if the <span class="math inline">\(b\)</span>’s vary considerably.</p>
<div id="constant-odds-ratio-different-risk-ratios-and-differences" class="section level3">
<h3>Constant odds ratio, different risk ratios and differences</h3>
<p>If the odds ratio is constant and we know <span class="math inline">\(n_{i1}\)</span>, we can perform a little algebraic maneuvering on the <span class="math inline">\(\text{OR}\)</span> formula above to find <span class="math inline">\(n_{i0}\)</span>:</p>
<p><span class="math display">\[ n_{i0} = \frac{N \times n_{i1}}{\text{OR} \times (N - n_{i1}) + n_{i1}}\]</span></p>
<p>If we assume that the <span class="math inline">\(n_{i1}\)</span>’s can range from 2 to 98 (out of 100), we can see how the risk ratios and risk differences vary considerably even though the odds ratio is fixed at 3 (don’t pay too close attention to the fact that <span class="math inline">\(n_0\)</span> is not an integer - this is just an illustration that takes a few liberties; if I had used <span class="math inline">\(N=1000\)</span>, we could have called it rounding error):</p>
<pre class="r"><code>N <- 100
trueOddsRatio <- 3

n1 <- 2:98
n0 <- (N * n1)/(trueOddsRatio * (N - n1) + n1)

oddsRatio <- ((n1 / (N - n1) ) / (n0 / (N - n0) ))
riskRatio <- n1 / n0
riskDiff <- (n1 - n0) / N

dn <- data.table(n1 = as.double(n1), n0, oddsRatio,
                 riskRatio, riskDiff = round(riskDiff, 3))
dn[1:6]</code></pre>
<pre><code>##    n1        n0 oddsRatio riskRatio riskDiff
## 1:  2 0.6756757         3      2.96    0.013
## 2:  3 1.0204082         3      2.94    0.020
## 3:  4 1.3698630         3      2.92    0.026
## 4:  5 1.7241379         3      2.90    0.033
## 5:  6 2.0833333         3      2.88    0.039
## 6:  7 2.4475524         3      2.86    0.046</code></pre>
<p>With a constant odds ratio of 3, the risk ratios range from just above 1 to just below 3, and the risk differences range from almost 0 to just below 0.3. The odds ratio is not exactly informative with respect to these other two measures. The plots - two takes on the same data - tell a better story:</p>
<p><img src="https://www.rdatagen.net/post/2018-07-10-odds-ratio_files/figure-html/unnamed-chunk-2-1.png" width="1152" /></p>
</div>
<div id="another-look-at-contrasting-marginal-vs-conditional-odds-ratios" class="section level3">
<h3>Another look at contrasting marginal vs conditional odds ratios</h3>
<p>Using this same simple framework, I thought I’d see if I can illustrate the relationship between marginal and conditional odds ratios.</p>
<p>In this case, we have two groups/clusters where the conditional odds ratios are equivalent, yet when we combine the groups into a single entity, the combined (marginal) odds ratio is less than each of the conditional odds ratios.</p>
<p>In this scenario each cluster has 100 people who are exposed and 100 who are not, as before. <span class="math inline">\(a_1\)</span> and <span class="math inline">\(a_0\)</span> represent the number of folks with a poor outcome for the exposed and unexposed in the first cluster, respectively; <span class="math inline">\(b_1\)</span> and <span class="math inline">\(b_0\)</span> represent the analogous quantities in the second cluster. As before <span class="math inline">\(a_0\)</span> and <span class="math inline">\(b_0\)</span> are derived as a function of <span class="math inline">\(a_1\)</span> and <span class="math inline">\(b_1\)</span>, respectively, and the constant odds ratio.</p>
<pre class="r"><code>constantOR <- function(n1, N, OR) {
  return(N*n1 / (OR*(N-n1) + n1))
}
# Cluster A
a1 <- 55
a0 <- constantOR(a1, N = 100, OR = 3)
(a1*(100 - a0)) / (a0 * (100 - a1))</code></pre>
<pre><code>## [1] 3</code></pre>
<pre class="r"><code># Cluster B
b1 <- 35
b0 <- constantOR(b1, N = 100, OR = 3)
(b1*(100 - b0)) / (b0 * (100 - b1))</code></pre>
<pre><code>## [1] 3</code></pre>
<pre class="r"><code># Marginal OR
tot0 <- a0 + b0
tot1 <- a1 + b1
(tot1*(200 - tot0)) / (tot0 * (200 - tot1))</code></pre>
<pre><code>## [1] 2.886952</code></pre>
<p>For this example, the marginal odds ratio is less than the conditional odds ratio. How does this contrast between the marginal and conditional odds ratio play out with a range of possible outcomes - all meeting the requirement of a constant conditional odds ratio? (Note we are talking about odds ratios larger than 1; everything is flipped if the OR is < 1.) The plot below shows possible combinations of sums <span class="math inline">\(a_1 + b_1\)</span> and <span class="math inline">\(a_0 + b_0\)</span>, where the constant conditional odds ratio condition holds within each group. The red line shows all points where the marginal odds ratio equals the conditional odds ratio (which happens to be 3 in this case):</p>
<p><img src="https://www.rdatagen.net/post/2018-07-10-odds-ratio_files/figure-html/unnamed-chunk-4-1.png" width="672" /></p>
<p>Here is the same plot, but a yellow line is drawn in all cases where <span class="math inline">\(a_1 = b_1\)</span> (hence <span class="math inline">\(a_0 = b_0\)</span>). This line lies directly over the earlier line where the marginal odds ratio equals 3. So, a sort of proof by plotting. The marginal odds ratio appears to equal the conditional odds ratio when the outcome proportions in the two clusters are equal.</p>
<p><img src="https://www.rdatagen.net/post/2018-07-10-odds-ratio_files/figure-html/unnamed-chunk-5-1.png" width="672" /></p>
<p>But what about the marginal odds ratios that do not fall on the colored lines - are they higher or lower than 3? To check this, look at the next figure. In this plot, the marginal odds ratio is plotted as a function of <span class="math inline">\(a_1 + b_1\)</span>, which represents the total number of poor outcomes in the combined exposed groups. Each line represents the marginal odds ratio for a specific value of <span class="math inline">\(a_1\)</span>.</p>
<p><img src="https://www.rdatagen.net/post/2018-07-10-odds-ratio_files/figure-html/unnamed-chunk-6-1.png" width="672" /></p>
<p>Notice that the marginal odds ratio reaches the constant conditional odds ratio (which is 3) only when <span class="math inline">\(a_1 + b_1 = 2a_1\)</span>, that is, when <span class="math inline">\(a_1 = b_1\)</span>. It appears, then, that when <span class="math inline">\(a_1 \ne b_1\)</span>, the marginal odds ratio lies below the conditional odds ratio. Another “proof” by figure. OK, not a proof, but colorful nonetheless.</p>
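<p>The pattern in these figures is also easy to check numerically. The sketch below (reusing the <code>constantOR</code> helper from above) sweeps over all combinations of <span class="math inline">\(a_1\)</span> and <span class="math inline">\(b_1\)</span> and confirms that the marginal odds ratio never exceeds 3, and equals 3 only when <span class="math inline">\(a_1 = b_1\)</span>:</p>

```r
N <- 100
OR <- 3
constantOR <- function(n1, N, OR) N * n1 / (OR * (N - n1) + n1)

# every combination of poor-outcome counts in the two exposed groups
grid <- expand.grid(a1 = 2:98, b1 = 2:98)
a0 <- constantOR(grid$a1, N, OR)
b0 <- constantOR(grid$b1, N, OR)

# marginal odds ratio after collapsing the two clusters
tot1 <- grid$a1 + grid$b1
tot0 <- a0 + b0
margOR <- (tot1 * (2 * N - tot0)) / (tot0 * (2 * N - tot1))

max(margOR)  # never exceeds the conditional OR of 3

# equality with the conditional OR occurs exactly when a1 == b1
near3 <- abs(margOR - 3) < 1e-8
all(near3 == (grid$a1 == grid$b1))
```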
</div>
Re-referencing factor levels to estimate standard errors when there is interaction turns out to be a really simple solution
https://www.rdatagen.net/post/re-referencing-to-estimate-effects-when-there-is-interaction/
Tue, 26 Jun 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/re-referencing-to-estimate-effects-when-there-is-interaction/<p>Maybe this should be filed under topics that are so obvious that it is not worth writing about. But, I hate to let a good simulation just sit on my computer. I was recently working on a paper investigating the relationship of emotion knowledge (EK) in very young kids with academic performance a year or two later. The idea is that kids who are more emotionally intelligent might be better prepared to learn. My collaborator suspected that the relationship between EK and academics would be different for immigrant and non-immigrant children, so we agreed that this would be a key focus of the analysis.</p>
<p>In model terms, we would describe the relationship for each student <span class="math inline">\(i\)</span> as:</p>
<p><span class="math display">\[ T_i = \beta_0 + \beta_1 I_i + \beta_2 EK_i + \beta_3 I_i \times EK_i + \epsilon_i,\]</span>
where <span class="math inline">\(T\)</span> is the academic outcome, <span class="math inline">\(I\)</span> is an indicator for immigrant status (either 0 or 1), and <span class="math inline">\(EK\)</span> is a continuous measure of emotion knowledge. By including the <span class="math inline">\(I \times EK\)</span> interaction term, we allow for the possibility that the effect of emotion knowledge will be different for immigrants. In particular, if we code <span class="math inline">\(I=0\)</span> for non-immigrant kids and <span class="math inline">\(I=1\)</span> for immigrant kids, <span class="math inline">\(\beta_2\)</span> represents the relationship of EK and academic performance for non-immigrant kids, and <span class="math inline">\(\beta_2 + \beta_3\)</span> is the relationship for immigrant kids. In this case, non-immigrant kids are the <em>reference</em> category.</p>
<p>Here’s the data generation:</p>
<pre class="r"><code>library(simstudy)
library(broom)

set.seed(87265145)

def <- defData(varname = "I", formula = .4, dist = "binary")
def <- defData(def, varname = "EK", formula = "0 + 0.5*I", variance = 4)
def <- defData(def, varname = "T",
               formula = "10 + 2*I + 0.5*EK + 1.5*I*EK", variance = 4)

dT <- genData(250, def)
genFactor(dT, "I", labels = c("not Imm", "Imm"))</code></pre>
<pre><code>## id I EK T fI
## 1: 1 1 -1.9655562 5.481254 Imm
## 2: 2 1 0.9230118 16.140710 Imm
## 3: 3 0 -2.5315312 9.443148 not Imm
## 4: 4 1 0.9103722 15.691873 Imm
## 5: 5 0 -0.2126550 9.524948 not Imm
## ---
## 246: 246 0 -1.2727195 7.546245 not Imm
## 247: 247 0 -1.2025184 6.658869 not Imm
## 248: 248 0 -1.7555451 11.027569 not Imm
## 249: 249 0 2.2967681 10.439577 not Imm
## 250: 250 1 -0.3056299 11.673933 Imm</code></pre>
<p>Let’s say our primary interest in this exploration is point estimates of <span class="math inline">\(\beta_2\)</span> and <span class="math inline">\(\beta_2 + \beta_3\)</span>, along with 95% confidence intervals of the estimates. (We could have just as easily reported <span class="math inline">\(\beta_3\)</span>, but we decided the point estimates would be more intuitive to understand.) The point estimates are quite straightforward: we can estimate them directly from the estimates of <span class="math inline">\(\beta_2\)</span> and <span class="math inline">\(\beta_3\)</span>. And the standard error (and confidence interval) for <span class="math inline">\(\beta_2\)</span> can be read directly off of the model output table. But what about the standard error for the relationship of EK and academic performance for the immigrant kids? How do we handle that?</p>
<p>I’ve always done this the cumbersome way, using this definition:</p>
<p><span class="math display">\[
\begin{aligned}
se_{\beta_2 + \beta_3} &= [Var(\beta_2 + \beta_3)]^\frac{1}{2} \\
&=[Var(\beta_2) + Var(\beta_3) + 2 \times Cov(\beta_2,\beta_3)]^\frac{1}{2}
\end{aligned}
\]</span></p>
<p>In R, this is relatively easy (though maybe not super convenient) to do manually, by extracting the information from the estimated parameter variance-covariance matrix.</p>
<p>First, fit a linear model with an interaction term:</p>
<pre class="r"><code>lm1 <- lm(T ~ fI*EK, data = dT)
tidy(lm1)</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 10.161842 0.16205385 62.706574 2.651774e-153
## 2 fIImm 1.616929 0.26419189 6.120281 3.661090e-09
## 3 EK 0.461628 0.09252734 4.989098 1.147653e-06
## 4 fIImm:EK 1.603680 0.13960763 11.487049 9.808529e-25</code></pre>
<p>The estimate for the relationship of EK and academic performance for non-immigrant kids is 0.46 (se = 0.093). And the point estimate for the relationship for immigrant kids is <span class="math inline">\(2.07\)</span> (<span class="math inline">\(= 0.46 + 1.60\)</span>, up to rounding).</p>
<p>The standard error can be calculated from the variance-covariance matrix that is derived from the linear model:</p>
<pre class="r"><code>vcov(lm1)</code></pre>
<pre><code>## (Intercept) fIImm EK fIImm:EK
## (Intercept) 0.026261449 -0.026261449 -0.000611899 0.000611899
## fIImm -0.026261449 0.069797354 0.000611899 -0.006838297
## EK -0.000611899 0.000611899 0.008561309 -0.008561309
## fIImm:EK 0.000611899 -0.006838297 -0.008561309 0.019490291</code></pre>
<p><span class="math display">\[Var(\beta_2+\beta_3) = 0.0086 + 0.0195 + 2\times(-.0086) = 0.0109\]</span></p>
<p>The standard error of the estimate is <span class="math inline">\(\sqrt{0.0109} = 0.105\)</span>.</p>
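<p>More generally, the standard error of any linear combination <span class="math inline">\(c^\prime\beta\)</span> can be computed in one step as <span class="math inline">\(\sqrt{c^\prime V c}\)</span>, which avoids picking out and summing individual entries of the matrix by hand. Here is a small sketch with a helper of my own, illustrated on a hypothetical toy data set (not the study data):</p>

```r
# Standard error of a linear combination c'beta: se = sqrt(c' V c),
# where V is the estimated variance-covariance matrix of the coefficients
lincombSE <- function(fit, contrast) {
  sqrt(as.numeric(t(contrast) %*% vcov(fit) %*% contrast))
}

# toy illustration: se of (beta_2 + beta_3) in a four-coefficient
# interaction model, so the contrast puts weight 1 on the last two terms
set.seed(1)
dd <- data.frame(g = rep(0:1, each = 50), x = rnorm(100))
dd$y <- 10 + 2 * dd$g + 0.5 * dd$x + 1.5 * dd$g * dd$x + rnorm(100, sd = 2)

fit <- lm(y ~ g * x, data = dd)
lincombSE(fit, c(0, 0, 1, 1))
```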
<div id="so" class="section level3">
<h3>So?</h3>
<p>OK - so maybe that isn’t really all that interesting. Why am I even talking about this? Well, in the actual study, we have a fair amount of missing data. In some cases we don’t have an EK measure, and in others we don’t have an outcome measure. And since the missingness is on the order of 15% to 20%, we decided to use multiple imputation. We used the <a href="https://www.jstatsoft.org/article/view/v045i03"><code>mice</code> package</a> in <code>R</code> to impute the data, and we pooled the model estimates from the completed data sets to get our final estimates. <code>mice</code> is a fantastic package, but one thing that it does not supply is some sort of pooled variance-covariance matrix. Looking for a relatively quick solution, I decided to use bootstrap methods to estimate the confidence intervals.</p>
<p>(“Relatively quick” is itself a relative term, since bootstrapping and imputing together is not exactly a quick process - maybe something to work on. I was also not fitting standard linear models but mixed effect models. Needless to say, it took a bit of computing time to get my estimates.)</p>
<p>Seeking credit (and maybe some sympathy) for all of my hard work, I mentioned this laborious process to my collaborator. She told me that you can easily estimate the group specific effects merely by changing the reference group and refitting the model. I could see right away that the point estimates would be fine, but surely the standard errors would not be estimated correctly? Of course, a few simulations ensued.</p>
<p>First, I just changed the reference group so that <span class="math inline">\(\beta_2\)</span> would be measuring the relationship of EK and academic performance for <em>immigrant</em> kids, and <span class="math inline">\(\beta_2 + \beta_3\)</span> would represent the relationship for the <em>non-immigrant</em> kids. Here are the levels before the change:</p>
<pre class="r"><code>head(dT$fI)</code></pre>
<pre><code>## [1] Imm Imm not Imm Imm not Imm not Imm
## Levels: not Imm Imm</code></pre>
<p>And after:</p>
<pre class="r"><code>dT$fI <- relevel(dT$fI, ref="Imm")
head(dT$fI)</code></pre>
<pre><code>## [1] Imm Imm not Imm Imm not Imm not Imm
## Levels: Imm not Imm</code></pre>
<p>And the model:</p>
<pre class="r"><code>lm2 <- lm(T ~ fI*EK, data = dT)
tidy(lm2)</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 11.778770 0.2086526 56.451588 8.367177e-143
## 2 fInot Imm -1.616929 0.2641919 -6.120281 3.661090e-09
## 3 EK 2.065308 0.1045418 19.755813 1.112574e-52
## 4 fInot Imm:EK -1.603680 0.1396076 -11.487049 9.808529e-25</code></pre>
<p>The estimate for this new <span class="math inline">\(\beta_2\)</span> is 2.07 (se=0.105), pretty much aligned with our estimate that required a little more work. While this is not a proof by any means, I did do variations on this simulation (adding other covariates, changing the strength of association, changing sample size, changing variation, etc.) and both approaches seem to be equivalent. I even created 10000 samples to see if the coverage rates of the 95% confidence intervals were correct. They were. My collaborator was right. And I felt a little embarrassed, because it seems like something I should have known.</p>
</div>
<div id="but" class="section level3">
<h3>But …</h3>
<p>Would this still work with missing data? Surely, things would go awry in the pooling process. So, I did one last simulation, generating the same data, but then added missingness. I imputed the missing data, fit the models, and pooled the results (including pooled 95% confidence intervals). And then I looked at the coverage rates.</p>
<p>First I added some missingness into the data</p>
<pre class="r"><code>defM <- defMiss(varname = "EK", formula = "0.05 + 0.10*I",
                logit.link = FALSE)
defM <- defMiss(defM, varname = "T", formula = "0.05 + 0.05*I",
                logit.link = FALSE)

defM</code></pre>
<pre><code>## varname formula logit.link baseline monotonic
## 1: EK 0.05 + 0.10*I FALSE FALSE FALSE
## 2: T 0.05 + 0.05*I FALSE FALSE FALSE</code></pre>
<p>And then I generated 500 data sets, imputed the data, and fit the models. Each iteration, I stored the final model results for both models (one where the reference is <em>non-immigrant</em> and the other where the reference group is <em>immigrant</em>).</p>
<pre class="r"><code>library(mice)

nonRes <- list()
immRes <- list()

set.seed(3298348)

for (i in 1:500) {

  dT <- genData(250, def)
  dT <- genFactor(dT, "I", labels = c("non Imm", "Imm"), prefix = "non")
  dT$immI <- relevel(dT$nonI, ref = "Imm")

  # generate a missing data matrix
  missMat <- genMiss(dtName = dT, missDefs = defM, idvars = "id")

  # create observed data set
  dtObs <- genObs(dT, missMat, idvars = "id")
  dtObs <- dtObs[, .(I, EK, nonI, immI, T)]

  # impute the missing data (create 20 data sets for each iteration)
  dtImp <- mice(data = dtObs, method = 'cart', m = 20, printFlag = FALSE)

  # non-immigrant is the reference group
  estImp <- with(dtImp, lm(T ~ nonI*EK))
  lm1 <- summary(pool(estImp), conf.int = TRUE)

  dt1 <- as.data.table(lm1)
  dt1[, term := rownames(lm1)]
  setnames(dt1, c("2.5 %", "97.5 %"), c("conf.low", "conf.high"))
  dt1[, iter := i]
  nonRes[[i]] <- dt1

  # immigrant is the reference group
  estImp <- with(dtImp, lm(T ~ immI*EK))
  lm2 <- summary(pool(estImp), conf.int = TRUE)

  dt2 <- as.data.table(lm2)
  dt2[, term := rownames(lm2)]
  setnames(dt2, c("2.5 %", "97.5 %"), c("conf.low", "conf.high"))
  dt2[, iter := i]
  immRes[[i]] <- dt2
}

nonRes <- rbindlist(nonRes)
immRes <- rbindlist(immRes)</code></pre>
<p>The proportion of confidence intervals that contain the true values is pretty close to 95% for both estimates:</p>
<pre class="r"><code>mean(nonRes[term == "EK", conf.low < 0.5 & conf.high > 0.5])</code></pre>
<pre><code>## [1] 0.958</code></pre>
<pre class="r"><code>mean(immRes[term == "EK", conf.low < 2.0 & conf.high > 2.0])</code></pre>
<pre><code>## [1] 0.948</code></pre>
<p>And the estimates of the mean and standard deviations are also pretty good:</p>
<pre class="r"><code>nonRes[term == "EK", .(mean = round(mean(estimate), 3),
                       obs.SD = round(sd(estimate), 3),
                       avgEst.SD = round(sqrt(mean(std.error^2)), 3))]</code></pre>
<pre><code>## mean obs.SD avgEst.SD
## 1: 0.503 0.086 0.088</code></pre>
<pre class="r"><code>immRes[term == "EK", .(mean = round(mean(estimate), 3),
                       obs.SD = round(sd(estimate), 3),
                       avgEst.SD = round(sqrt(mean(std.error^2)), 3))]</code></pre>
<pre><code>## mean obs.SD avgEst.SD
## 1: 1.952 0.117 0.124</code></pre>
<p>Because I like to include at least one visual in a post, here is a plot of the 95% confidence intervals, with the CIs not covering the true values colored blue:</p>
<p><img src="https://www.rdatagen.net/post/2018-06-26-re-referencing-with-interaction_files/figure-html/unnamed-chunk-11-1.png" width="384" /></p>
<p>The re-reference approach seems to work quite well (in the context of this simulation, at least). My guess is that the hours of bootstrapping were unnecessary, though I haven’t fully tested all of this in the context of clustered data; I suspect it will turn out OK in that case as well.</p>
</div>
<div id="appendix-ggplot-code" class="section level3">
<h3>Appendix: ggplot code</h3>
<pre class="r"><code>nonEK <- nonRes[term == "EK", .(iter, ref = "Non-immigrant",
                                estimate, conf.low, conf.high,
                                cover = (conf.low < 0.5 & conf.high > 0.5))]

immEK <- immRes[term == "EK", .(iter, ref = "Immigrant",
                                estimate, conf.low, conf.high,
                                cover = (conf.low < 2 & conf.high > 2))]

EK <- rbindlist(list(nonEK, immEK))

vline <- data.table(xint = c(.5, 2),
                    ref = c("Non-immigrant", "Immigrant"))

ggplot(data = EK, aes(x = conf.low, xend = conf.high, y = iter, yend = iter)) +
  geom_segment(aes(color = cover)) +
  geom_vline(data = vline, aes(xintercept = xint), lty = 3) +
  facet_grid(. ~ ref, scales = "free") +
  theme(panel.grid = element_blank(),
        axis.ticks.y = element_blank(),
        axis.text.y = element_blank(),
        axis.title.y = element_blank(),
        legend.position = "none") +
  scale_color_manual(values = c("#5c81ba", "grey75")) +
  scale_x_continuous(expand = c(.1, 0), name = "95% CI")</code></pre>
</div>
Late anniversary edition redux: conditional vs marginal models for clustered data
https://www.rdatagen.net/post/mixed-effect-models-vs-gee/
Wed, 13 Jun 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/mixed-effect-models-vs-gee/<p>This afternoon, I was looking over some simulations I plan to use in an upcoming lecture on multilevel models. I created these examples a while ago, before I started this blog. But since it was just about a year ago that I first wrote about this topic (and started the blog), I thought I’d post this now to mark the occasion.</p>
<p>The code below provides another way to visualize the difference between marginal and conditional logistic regression models for clustered data (see <a href="https://www.rdatagen.net/post/marginal-v-conditional/">here</a> for an earlier post that discusses in greater detail some of the key issues raised here.) The basic idea is that both models for a binary outcome are valid, but they provide estimates for different quantities.</p>
<p>The marginal model is estimated using a generalized estimating equation (GEE) model (here using function <code>geeglm</code> in package <code>geepack</code>). If the intervention is binary, the intervention effect (log-odds ratio) is interpreted as the average effect across all individuals regardless of the group or cluster they might belong to. (This estimate is sensitive to the relative sizes of the clusters.)</p>
<p>The conditional model is estimated using a mixed-effects generalized linear model (using function <code>glmer</code> in package <code>lme4</code>), and provides the log-odds ratio conditional on the cluster. (The estimate is not as sensitive to the relative sizes of the clusters since it is essentially providing a within-cluster effect.)</p>
<p>As the variation across clusters increases, so does the discrepancy between the conditional and marginal models. Using a generalized linear model that ignores clustering altogether will provide the correct (marginal) point estimate, but will underestimate the underlying variance (and standard errors) as long as there is between-cluster variation. If there is no between-cluster variation, the GLM should be fine.</p>
<div id="simulation" class="section level3">
<h3>Simulation</h3>
<p>To start, here is a function that uses <code>simstudy</code> to define and generate a data set of individuals that are clustered in groups. A key argument passed to this function is the across cluster variation.</p>
<pre class="r"><code>library(lme4)
library(geepack)
library(broom)

genFunc <- function(nClusters, effVar) {

  # define the cluster

  def1 <- defData(varname = "clustEff", formula = 0,
                  variance = effVar, id = "cID")
  def1 <- defData(def1, varname = "nInd", formula = 40,
                  dist = "noZeroPoisson")

  # define individual level data

  def2 <- defDataAdd(varname = "Y", formula = "-2 + 2*grp + clustEff",
                     dist = "binary", link = "logit")

  # generate cluster level data

  dtC <- genData(nClusters, def1)
  dtC <- trtAssign(dtC, grpName = "grp")

  # generate individual level data

  dt <- genCluster(dtClust = dtC, cLevelVar = "cID", numIndsVar = "nInd",
                   level1ID = "id")
  dt <- addColumns(def2, dt)

  return(dt)
}</code></pre>
<p>A plot of the average site-level outcome from data generated with an across-site variance of 1 (on the log-odds scale) shows the treatment effect:</p>
<pre class="r"><code>set.seed(123)
dt <- genFunc(100, 1)
dt</code></pre>
<pre><code>## cID grp clustEff nInd id Y
## 1: 1 0 -0.5604756 35 1 1
## 2: 1 0 -0.5604756 35 2 0
## 3: 1 0 -0.5604756 35 3 0
## 4: 1 0 -0.5604756 35 4 0
## 5: 1 0 -0.5604756 35 5 0
## ---
## 3968: 100 1 -1.0264209 45 3968 0
## 3969: 100 1 -1.0264209 45 3969 0
## 3970: 100 1 -1.0264209 45 3970 1
## 3971: 100 1 -1.0264209 45 3971 0
## 3972: 100 1 -1.0264209 45 3972 0</code></pre>
<pre class="r"><code>dplot <- dt[, mean(Y), keyby = .(grp, cID)]
davg <- dt[, mean(Y)]

ggplot(data = dplot, aes(x = factor(grp), y = V1)) +
  geom_jitter(aes(color = factor(grp)), width = .10) +
  theme_ksg("grey95") +
  xlab("group") +
  ylab("mean(Y)") +
  theme(legend.position = "none") +
  ggtitle("Site level means by group") +
  scale_color_manual(values = c("#264e76", "#764e26"))</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-06-13-mixed-effect-models-vs-gee_files/figure-html/unnamed-chunk-2-1.png" width="480" /></p>
</div>
<div id="model-fits" class="section level3">
<h3>Model fits</h3>
<p>First, the conditional model estimates a log-odds ratio of 1.89, close to the actual log-odds ratio of 2.0.</p>
<pre class="r"><code>glmerFit <- glmer(Y ~ grp + (1 | cID), data = dt, family="binomial")
tidy(glmerFit)</code></pre>
<pre><code>## term estimate std.error statistic p.value group
## 1 (Intercept) -1.8764913 0.1468104 -12.781729 2.074076e-37 fixed
## 2 grp 1.8936999 0.2010359 9.419711 4.523292e-21 fixed
## 3 sd_(Intercept).cID 0.9038166 NA NA NA cID</code></pre>
<p>The marginal model that takes into account clustering yields an estimate of 1.63. This model is not wrong; it is just estimating a different quantity:</p>
<pre class="r"><code>geeFit <- geeglm(Y ~ grp, family = binomial, data = dt, 
                 corstr = "exchangeable", id = dt$cID)
tidy(geeFit)</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) -1.620073 0.1303681 154.42809 0
## 2 grp 1.628075 0.1740666 87.48182 0</code></pre>
<p>The marginal model that ignores clustering also estimates a log-odds ratio, 1.67, but the standard error estimate is much smaller than in the previous model (0.076 vs. 0.174). We could say that this model is not appropriate given the clustering of individuals:</p>
<pre class="r"><code>glmFit <- glm(Y ~ grp, data = dt, family="binomial")
tidy(glmFit)</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) -1.639743 0.0606130 -27.05267 3.553136e-161
## 2 grp 1.668143 0.0755165 22.08978 3.963373e-108</code></pre>
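<p>One way to summarize the gap between the two standard error estimates is through an implied design effect. This back-of-the-envelope calculation (a sketch only, since cluster sizes here vary around 40 rather than being fixed) uses the two standard errors reported above:</p>

```python
# Implied design effect from the two standard errors reported above
se_gee, se_glm = 0.1740666, 0.0755165   # robust (GEE) and naive (GLM) SEs
m = 40                                  # average cluster size in the simulation

deff = (se_gee / se_glm) ** 2           # variance inflation due to clustering
icc_implied = (deff - 1) / (m - 1)      # from deff ~ 1 + (m - 1) * icc

print(round(deff, 1), round(icc_implied, 2))   # 5.3 0.11
```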
</div>
<div id="multiple-replications" class="section level3">
<h3>Multiple replications</h3>
<p>With multiple replications (in this case 100), we can see how each model performs under different across-cluster variance assumptions. I have written two functions (shown in the appendix) to generate multiple datasets and create a plot. The plot shows (1) the average point estimate across all the replications in black, (2) the true standard deviation of all the point estimates across all replications in blue, and (3) the average estimate of the standard errors in orange.</p>
<p>In the first case, the variability across sites is highest. The discrepancy between the marginal and conditional models is relatively large, but both the GEE and mixed effects models estimate the standard errors correctly (the orange line overlaps perfectly with the blue line). The generalized linear model, however, provides a biased estimate of the standard error - the orange line does not cover the blue line:</p>
<pre class="r"><code>set.seed(235)
res1.00 <- iterFunc(40, 1.00, 100)
s1 <- sumFunc(res1.00)
s1$p</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-06-13-mixed-effect-models-vs-gee_files/figure-html/unnamed-chunk-7-1.png" width="672" /></p>
<p>When the across cluster variation is reduced, the discrepancy between the marginal and conditional models is reduced, as is the bias of standard error estimate for the GLM model:</p>
<pre class="r"><code>res0.50 <- iterFunc(40, 0.50, 100)
s2 <- sumFunc(res0.50)
s2$p</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-06-13-mixed-effect-models-vs-gee_files/figure-html/unnamed-chunk-8-1.png" width="672" /></p>
<p>Finally, when there is negligible variation across sites, the conditional and marginal models are pretty much one and the same. And even the GLM model that ignores clustering is unbiased (which makes sense, since there really is no clustering):</p>
<pre class="r"><code>res0.05 <- iterFunc(40, 0.05, 100)
s3 <- sumFunc(res0.05)
s3$p</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-06-13-mixed-effect-models-vs-gee_files/figure-html/unnamed-chunk-9-1.png" width="672" /></p>
</div>
<div id="appendix" class="section level3">
<h3>Appendix</h3>
<p>Here are the two functions that generated the replications and created the plots shown above.</p>
<pre class="r"><code>iterFunc <- function(nClusters, effVar, iters = 250) {
  
  results <- list()
  
  for (i in 1:iters) {
    
    dt <- genFunc(nClusters, effVar)
    
    glmerFit <- glmer(Y ~ grp + (1 | cID), data = dt, family = "binomial")
    glmFit <- glm(Y ~ grp, data = dt, family = "binomial")
    geeFit <- geeglm(Y ~ grp, family = binomial, data = dt, 
                     corstr = "exchangeable", id = dt$cID)
    
    res <- unlist(c(coef(summary(glmerFit))[2, 1:2],
                    coef(summary(glmFit))[2, 1:2],
                    as.vector(coef(summary(geeFit))[2, 1:2])))
    
    results[[i]] <- data.table(t(res))
  }
  
  return(rbindlist(results))
}

sumFunc <- function(dtRes, precision = 2) {
  
  setnames(dtRes, c("estGlmer", "sdGlmer", 
                    "estGlm", "sdGlm", 
                    "estGEE", "sdGEE"))
  
  meanEst <- round(apply(dtRes[, c(1, 3, 5)], 2, mean), precision)
  estSd <- round(sqrt(apply(dtRes[, c(2, 4, 6)]^2, 2, mean)), precision)
  sdEst <- round(apply(dtRes[, c(1, 3, 5)], 2, sd), precision)
  
  x <- data.table(rbind(c(meanEst[1], estSd[1], sdEst[1]),
                        c(meanEst[2], estSd[2], sdEst[2]),
                        c(meanEst[3], estSd[3], sdEst[3])))
  
  setnames(x, c("estMean", "estSD", "sd"))
  x[, method := c("glmer", "glm", "gee")]
  
  p <- ggplot(data = x, aes(x = method, y = estMean)) +
    geom_errorbar(aes(ymin = estMean - sd, ymax = estMean + sd), 
                  width = 0.1, color = "#2329fe", size = 1) +
    geom_errorbar(aes(ymin = estMean - estSD, ymax = estMean + estSD), 
                  width = 0.0, color = "#fe8b23", size = 1.5) +
    geom_point(size = 2) +
    ylim(1, 2.75) +
    theme_ksg("grey95") +
    geom_hline(yintercept = 2, lty = 3, color = "grey50") +
    theme(axis.title.x = element_blank()) +
    ylab("Treatment effect")
  
  return(list(mean = meanEst, sd = sdEst, p = p))
}</code></pre>
</div>
A little function to help generate ICCs in simple clustered data
https://www.rdatagen.net/post/a-little-function-to-help-generate-iccs-in-simple-clustered-data/
Thu, 24 May 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/a-little-function-to-help-generate-iccs-in-simple-clustered-data/<p>In health services research, experiments are often conducted at the provider or site level rather than the patient level. However, we might still be interested in the outcome at the patient level. For example, we could be interested in understanding the effect of a training program for physicians on their patients. It would be very difficult to randomize patients to be exposed or not to the training if a group of patients all see the same doctor. So the experiment is set up so that only some doctors get the training and others serve as the control; we still compare the outcome at the patient level.</p>
<p>Typically, when conducting an experiment we assume that individual outcomes are not related to each other (other than the common effect of the exposure). With site-level randomization, we can’t make that assumption - groups of patients are all being treated by the same doctor. In general, even before the intervention, there might be variation across physicians. At the same time, patients within a practice will vary. So, we have two sources of variation: <em>between</em> practice and <em>within</em> practice variation that explain overall variation.</p>
<p>I touched on this when I discussed issues related to <a href="https://www.rdatagen.net/post/icc-for-gamma-distribution/">Gamma distributed clustered data</a>. A key concept is the intra-class correlation coefficient, or ICC, which is a measure of how <em>between</em> variation relates to overall variation. The ICC ranges from 0 (where there is no <em>between</em> variation - all site averages are the same) to 1 (where there is no variation within a site - all patients within the site have the same outcomes). Take a look at the earlier post for a bit more detail.</p>
<p>My goal here is to highlight a little function recently added to <code>simstudy</code> (v0.1.9, now available on <code>CRAN</code>). In the course of exploring study designs for cluster randomized trials, it is often useful to understand what happens (to sample size requirements, for example) when the ICC changes. When generating the data, it is difficult to control the ICC directly - we do this by controlling the variation. With normally distributed data, the ICC is an obvious function of the variances used to generate the data, so the connection is pretty clear. But, when the outcomes have binary, Poisson, or Gamma distributions (or anything else really), the connection between variation and the ICC is not always so obvious. Figuring out how to specify the data to generate a particular ICC might require quite a bit of trial and error.</p>
<p>The new function, <code>iccRE</code> (short for ICC random effects), allows users to specify target ICCs for a desired distribution (along with relevant parameters). The function returns the corresponding random effect variances that would be specified at the cluster level to generate the desired ICC(s).</p>
<p>Here’s an example for three possible ICCs in the context of the normal distribution:</p>
<pre class="r"><code>library(simstudy)
targetICC <- c(0.05, 0.075, 0.10)
setVars <- iccRE(ICC = targetICC, dist = "normal", varWithin = 4)
round(setVars, 4)</code></pre>
<pre><code>## [1] 0.2105 0.3243 0.4444</code></pre>
<p>In the case when the target ICC is 0.075:</p>
<p><span class="math display">\[ ICC = \frac{\sigma_b^2}{\sigma_b ^2 + \sigma_w ^2} = \frac{0.324}{0.324 + 4} \approx 0.075\]</span></p>
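<p>Inverting the ICC formula gives the required between-cluster variance directly, <span class="math inline">\(\sigma_b^2 = ICC \cdot \sigma_w^2 / (1 - ICC)\)</span>. A quick check (sketched here in Python) reproduces all three values returned by <code>iccRE</code>:</p>

```python
# Invert ICC = s2b / (s2b + s2w)  =>  s2b = ICC * s2w / (1 - ICC)
target_icc = [0.05, 0.075, 0.10]
var_within = 4

set_vars = [round(icc * var_within / (1 - icc), 4) for icc in target_icc]
print(set_vars)   # [0.2105, 0.3243, 0.4444]
```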
<div id="simulating-from-the-normal-distribution" class="section level3">
<h3>Simulating from the normal distribution</h3>
<p>If we specify the variance for the site-level random effect to be 0.2105 in conjunction with the individual-level (within) variance of 4, the observed ICC from the simulated data will be approximately 0.05:</p>
<pre class="r"><code>set.seed(73632)

# specify between site variation
d <- defData(varname = "a", formula = 0, variance = 0.2105, id = "grp")
d <- defData(d, varname = "size", formula = 1000, dist = "nonrandom")

a <- defDataAdd(varname = "y1", formula = "30 + a", 
                variance = 4, dist = "normal")

dT <- genData(10000, d)

# add patient level data
dCn05 <- genCluster(dtClust = dT, cLevelVar = "grp", 
                    numIndsVar = "size", level1ID = "id")
dCn05 <- addColumns(a, dCn05)
dCn05</code></pre>
<pre><code>## grp a size id y1
## 1: 1 -0.3255465 1000 1 32.08492
## 2: 1 -0.3255465 1000 2 27.21180
## 3: 1 -0.3255465 1000 3 28.37411
## 4: 1 -0.3255465 1000 4 27.70485
## 5: 1 -0.3255465 1000 5 32.11814
## ---
## 9999996: 10000 0.3191311 1000 9999996 30.15837
## 9999997: 10000 0.3191311 1000 9999997 32.66302
## 9999998: 10000 0.3191311 1000 9999998 28.34583
## 9999999: 10000 0.3191311 1000 9999999 28.56443
## 10000000: 10000 0.3191311 1000 10000000 30.06957</code></pre>
<p>The <em>between</em> variance can be roughly estimated as the variance of the group means, and the <em>within</em> variance can be estimated as the average of the variances calculated for each group (this works well here, because we have so many clusters and patients per cluster):</p>
<pre class="r"><code>between <- dCn05[, mean(y1), keyby = grp][, var(V1)]
within <- dCn05[, var(y1), keyby = grp][, mean(V1)]
total <- dCn05[, var(y1)]
round(c(between, within, total), 3)</code></pre>
<pre><code>## [1] 0.212 3.996 4.203</code></pre>
<p>The ICC is the ratio of the <em>between</em> variance to the <em>total</em>, which is also the sum of the two component variances:</p>
<pre class="r"><code>round(between/(total), 3)</code></pre>
<pre><code>## [1] 0.05</code></pre>
<pre class="r"><code>round(between/(between + within), 3)</code></pre>
<pre><code>## [1] 0.05</code></pre>
<p>Setting the site-level variance at 0.4444 gives us the ICC of 0.10:</p>
<pre class="r"><code>d <- defData(varname = "a", formula = 0, variance = 0.4444, id = "grp")
d <- defData(d, varname = "size", formula = 1000, dist = "nonrandom")

a <- defDataAdd(varname = "y1", formula = "30 + a", 
                variance = 4, dist = "normal")

dT <- genData(10000, d)
dCn10 <- genCluster(dtClust = dT, cLevelVar = "grp", 
                    numIndsVar = "size", level1ID = "id")
dCn10 <- addColumns(a, dCn10)

between <- dCn10[, mean(y1), keyby = grp][, var(V1)]
within <- dCn10[, var(y1), keyby = grp][, mean(V1)]

round(between / (between + within), 3)</code></pre>
<pre><code>## [1] 0.102</code></pre>
</div>
<div id="other-distributions" class="section level3">
<h3>Other distributions</h3>
<p>The ICC is a bit more difficult to interpret using other distributions where the variance is a function of the mean, such as with the binomial, Poisson, or Gamma distributions. However, we can still use the notion of <em>between</em> and <em>within</em>, but it may need to be transformed to another scale.</p>
<p>In the case of <strong>binary</strong> outcomes, we have to imagine an underlying or latent continuous process that takes place on the logistic scale. (I talked a bit about this <a href="https://www.rdatagen.net/post/ordinal-regression/">here</a>.)</p>
<pre class="r"><code>### binary
(setVar <- iccRE(ICC = 0.05, dist = "binary"))</code></pre>
<pre><code>## [1] 0.173151</code></pre>
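<p>This value can be checked by hand: on the latent logistic scale the within-cluster variance is fixed at <span class="math inline">\(\pi^2/3\)</span>, so the same inversion used for the normal case applies (a quick sketch):</p>

```python
import math

# Latent-scale within variance for a logistic model is pi^2 / 3;
# inverting ICC = s2b / (s2b + pi^2/3) recovers the iccRE result
within = math.pi ** 2 / 3
icc = 0.05

between = icc * within / (1 - icc)
print(round(between, 6))   # 0.173151
```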
<pre class="r"><code>d <- defData(varname = "a", formula = 0, variance = 0.1732, id = "grp")
d <- defData(d, varname = "size", formula = 1000, dist = "nonrandom")

a <- defDataAdd(varname = "y1", formula = "-1 + a", dist = "binary", 
                link = "logit")

dT <- genData(10000, d)
dCb05 <- genCluster(dtClust = dT, cLevelVar = "grp", numIndsVar = "size", 
                    level1ID = "id")
dCb05 <- addColumns(a, dCb05)
dCb05</code></pre>
<pre><code>## grp a size id y1
## 1: 1 -0.20740274 1000 1 0
## 2: 1 -0.20740274 1000 2 0
## 3: 1 -0.20740274 1000 3 0
## 4: 1 -0.20740274 1000 4 1
## 5: 1 -0.20740274 1000 5 0
## ---
## 9999996: 10000 -0.05448775 1000 9999996 0
## 9999997: 10000 -0.05448775 1000 9999997 1
## 9999998: 10000 -0.05448775 1000 9999998 0
## 9999999: 10000 -0.05448775 1000 9999999 0
## 10000000: 10000 -0.05448775 1000 10000000 0</code></pre>
<p>The ICC for the binary distribution is on the logistic scale, and the <em>within</em> variance is constant. The <em>between</em> variance is estimated on the log-odds scale:</p>
<pre class="r"><code>within <- (pi ^ 2) / 3
means <- dCb05[,mean(y1), keyby = grp]
between <- means[, log(V1/(1-V1)), keyby = grp][abs(V1) != Inf, var(V1)]
round(between / (between + within), 3)</code></pre>
<pre><code>## [1] 0.051</code></pre>
<p>The ICC for the <strong>Poisson</strong> distribution is interpreted on the scale of the count measurements, even though the random effect variance is on the log scale. If you want to see the details behind the random effect variance derivation, see this <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.7532">paper</a> by <em>Austin et al.</em>, which was based on original work by <em>Stryhn et al.</em> that can be found <a href="http://www.sciquest.org.nz/node/64294">here</a>.</p>
<pre class="r"><code>(setVar <- iccRE(ICC = 0.05, dist = "poisson", lambda = 30))</code></pre>
<pre><code>## [1] 0.0017513</code></pre>
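<p>Based on my reading of the <em>Austin et al.</em> derivation referenced above, the count-scale ICC for a Poisson outcome with a normal random effect on the log scale has a closed form; plugging in the returned variance recovers the target (a sketch, not part of the original code):</p>

```python
import math

# Poisson with log link and cluster effect a ~ N(0, s2):
#   within  = E[Y]        = lam * exp(s2 / 2)
#   between = Var(E[Y|a]) = lam^2 * exp(s2) * (exp(s2) - 1)
lam, s2 = 30, 0.0017513

within = lam * math.exp(s2 / 2)
between = lam**2 * math.exp(s2) * (math.exp(s2) - 1)

print(round(between / (between + within), 3))   # 0.05
```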
<pre class="r"><code>d <- defData(varname = "a", formula = 0, variance = 0.0018, id = "grp")
d <- defData(d, varname = "size", formula = 1000, dist = "nonrandom")

a <- defDataAdd(varname = "y1", formula = "log(30) + a", 
                dist = "poisson", link = "log")

dT <- genData(10000, d)
dCp05 <- genCluster(dtClust = dT, cLevelVar = "grp", 
                    numIndsVar = "size", level1ID = "id")
dCp05 <- addColumns(a, dCp05)
dCp05</code></pre>
<pre><code>## grp a size id y1
## 1: 1 0.035654485 1000 1 26
## 2: 1 0.035654485 1000 2 36
## 3: 1 0.035654485 1000 3 31
## 4: 1 0.035654485 1000 4 34
## 5: 1 0.035654485 1000 5 21
## ---
## 9999996: 10000 0.002725561 1000 9999996 26
## 9999997: 10000 0.002725561 1000 9999997 25
## 9999998: 10000 0.002725561 1000 9999998 27
## 9999999: 10000 0.002725561 1000 9999999 28
## 10000000: 10000 0.002725561 1000 10000000 37</code></pre>
<p>The variance components and ICC for the Poisson can be estimated using the same approach as the normal distribution:</p>
<pre class="r"><code>between <- dCp05[, mean(y1), keyby = grp][, var(V1)]
within <- dCp05[, var(y1), keyby = grp][, mean(V1)]
round(between / (between + within), 3)</code></pre>
<pre><code>## [1] 0.051</code></pre>
<p>Finally, here are the results for the <strong>Gamma</strong> distribution, which I talked about in great length in an <a href="https://www.rdatagen.net/post/icc-for-gamma-distribution/">earlier post</a>:</p>
<pre class="r"><code>(setVar <- iccRE(ICC = 0.05, dist = "gamma", disp = 0.25 ))</code></pre>
<pre><code>## [1] 0.01493805</code></pre>
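<p>The returned value can be checked by hand if one assumes (consistent with the earlier post on Gamma ICCs) that the within-cluster variance of <span class="math inline">\(log(Y)\)</span> is the trigamma function evaluated at <span class="math inline">\(1/disp\)</span>:</p>

```python
import math

# within variance of log(Y) for Gamma with dispersion disp: trigamma(1/disp);
# for an integer argument n, trigamma(n) = pi^2/6 - sum_{k=1}^{n-1} 1/k^2
disp = 0.25
n = round(1 / disp)   # = 4
within = math.pi ** 2 / 6 - sum(1 / k ** 2 for k in range(1, n))

icc = 0.05
between = icc * within / (1 - icc)
print(round(between, 8))   # 0.01493805
```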
<pre class="r"><code>d <- defData(varname = "a", formula = 0, variance = 0.0149, id = "grp")
d <- defData(d, varname = "size", formula = 1000, dist = "nonrandom")

a <- defDataAdd(varname = "y1", formula = "log(30) + a", variance = 0.25, 
                dist = "gamma", link = "log")

dT <- genData(10000, d)
dCg05 <- genCluster(dtClust = dT, cLevelVar = "grp", numIndsVar = "size", 
                    level1ID = "id")
dCg05 <- addColumns(a, dCg05)
dCg05</code></pre>
<pre><code>## grp a size id y1
## 1: 1 0.09466305 1000 1 14.31268
## 2: 1 0.09466305 1000 2 39.08884
## 3: 1 0.09466305 1000 3 28.08050
## 4: 1 0.09466305 1000 4 53.27853
## 5: 1 0.09466305 1000 5 37.93855
## ---
## 9999996: 10000 0.25566417 1000 9999996 14.16145
## 9999997: 10000 0.25566417 1000 9999997 42.54838
## 9999998: 10000 0.25566417 1000 9999998 76.33642
## 9999999: 10000 0.25566417 1000 9999999 34.16727
## 10000000: 10000 0.25566417 1000 10000000 21.06282</code></pre>
<p>The ICC for the Gamma distribution is on the log scale:</p>
<pre class="r"><code>between <- dCg05[, mean(log(y1)), keyby = grp][, var(V1)]
within <- dCg05[, var(log(y1)), keyby = grp][, mean(V1)]
round(between / (between + within), 3)</code></pre>
<pre><code>## [1] 0.05</code></pre>
<p>It is possible to think about the ICC in the context of covariates, but interpretation is less straightforward. The ICC itself will likely vary across different levels of the covariates. For this reason, I like to think of the ICC in the marginal context.</p>
<p>I leave you with some visuals of clustered binary data with ICC’s ranging from 0 to 0.075, both on the log-odds and probability scales:</p>
<p><img src="https://www.rdatagen.net/post/2018-05-24-a-little-function-to-help-generate-iccs-in-simple-clustered-data_files/figure-html/unnamed-chunk-12-1.png" width="864" /></p>
</div>
Is non-inferiority on par with superiority?
https://www.rdatagen.net/post/are-non-inferiority-trials-inferior/
Mon, 14 May 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/are-non-inferiority-trials-inferior/<p>It is grant season around here (actually, it is pretty much always grant season), which means another series of problems to tackle. Even with the most straightforward study designs, there is almost always some interesting twist, or an approach that presents a subtle issue or two. In this case, the investigator wants to compare two interventions, but doesn’t feel the need to show that one is better than the other. He just wants to see if the newer intervention is <em>not inferior</em> to the more established intervention.</p>
<p>The shift from a superiority trial to a non-inferiority trial leads to a fundamental shift in the hypothesis testing framework. In the more traditional superiority trial, where we want to see if an intervention is an improvement over another intervention, we can set up the hypothesis test with null and alternative hypotheses based on the difference of the intervention proportions <span class="math inline">\(p_{old}\)</span> and <span class="math inline">\(p_{new}\)</span> (under the assumption of a binary outcome):</p>
<p><span class="math display">\[
\begin{aligned}
H_0: p_{new} - p_{old} &\le 0 \\
H_A: p_{new} - p_{old} &> 0
\end{aligned}
\]</span> In this context, if we reject the null hypothesis that the difference in proportions is less than zero, we conclude that the new intervention is an improvement over the old one, at least for the population under study. (A crucial element of the test is the <span class="math inline">\(\alpha\)</span>-level that determines the Type 1 error (probability of rejecting <span class="math inline">\(H_0\)</span> when <span class="math inline">\(H_0\)</span> is actually true). If we use <span class="math inline">\(\alpha = 0.025\)</span>, then that is analogous to doing a two-sided test with <span class="math inline">\(\alpha = .05\)</span> and hypotheses <span class="math inline">\(H_0: p_{new} - p_{old} = 0\)</span> and <span class="math inline">\(H_A: p_{new} - p_{old} \ne 0\)</span>.)</p>
<p>In the case of an inferiority trial, we add a little twist. Really, we subtract a little twist. In this case the hypotheses are:</p>
<p><span class="math display">\[
\begin{aligned}
H_0: p_{new} - p_{old} &\le -\Delta \\
H_A: p_{new} - p_{old} &> -\Delta
\end{aligned}
\]</span></p>
<p>where <span class="math inline">\(\Delta\)</span> is some threshold that sets the non-inferiority bounds. Clearly, if <span class="math inline">\(\Delta = 0\)</span> then this is equivalent to a superiority test. However, for any other <span class="math inline">\(\Delta\)</span>, there is a bit of a cushion so that the new intervention will still be considered <em>non-inferior</em> even if we observe a <em>lower</em> proportion for the new intervention compared to the older intervention.</p>
<p>As long as the confidence interval around the observed estimate for the difference in proportions does not cross the <span class="math inline">\(-\Delta\)</span> threshold, we can conclude the new intervention is non-inferior. If we construct a 95% confidence interval, this procedure will have a Type 1 error rate <span class="math inline">\(\alpha = 0.025\)</span>, and a 90% CI will yield an <span class="math inline">\(\alpha = 0.05\)</span>. (I will demonstrate this with a simulation.)</p>
<p>The following figures show how different confidence intervals imply different conclusions. I’ve added an equivalence trial here as well, but won’t discuss it in detail except to say that in this situation we would conclude that two interventions are <em>equivalent</em> if the confidence interval falls between <span class="math inline">\(-\Delta\)</span> and <span class="math inline">\(\Delta\)</span>. The bottom interval crosses the non-inferiority threshold, so is considered inconclusive. The second interval from the top crosses zero, but does not cross the non-inferiority threshold, so we conclude that the new intervention is at least as effective as the old one. And the top interval excludes zero, so we conclude that the new intervention is an improvement:</p>
<p><img src="https://www.rdatagen.net/post/2018-05-14-are-non-inferiority-trials-inferior_files/figure-html/unnamed-chunk-1-1.png" width="672" /></p>
<p>This next figure highlights the key challenge of the non-inferiority trial: where do we set <span class="math inline">\(\Delta\)</span>? By shifting the threshold towards zero in this example (and not changing anything else), we can no longer conclude non-inferiority. But, the superiority test is not affected, and never will be. The comparison for a superiority test is made relative to zero only, and has nothing to do with <span class="math inline">\(\Delta\)</span>. So, unless there is a principled reason for selecting <span class="math inline">\(\Delta\)</span>, the process (and conclusions) can feel a little arbitrary. (Check out this interactive <a href="http://rpsychologist.com/d3/equivalence/">post</a> for a really cool way to explore some of these issues.)</p>
<p><img src="https://www.rdatagen.net/post/2018-05-14-are-non-inferiority-trials-inferior_files/figure-html/unnamed-chunk-2-1.png" width="672" /></p>
<div id="type-1-error-rate" class="section level2">
<h2>Type 1 error rate</h2>
<p>To calculate the Type 1 error rate, we generate data under the null hypothesis, or in this case on the rightmost boundary of the null hypothesis since it is a composite hypothesis. First, let’s generate one data set:</p>
<pre class="r"><code>library(magrittr)
library(broom)

set.seed(319281)

def <- defDataAdd(varname = "y", formula = "0.30 - 0.15*rx", 
                  dist = "binary")

DT <- genData(1000) %>% trtAssign(dtName = ., grpName = "rx")
DT <- addColumns(def, DT)
DT</code></pre>
<pre><code>## id rx y
## 1: 1 0 0
## 2: 2 1 0
## 3: 3 1 0
## 4: 4 0 0
## 5: 5 1 0
## ---
## 996: 996 0 1
## 997: 997 0 0
## 998: 998 1 0
## 999: 999 0 0
## 1000: 1000 0 0</code></pre>
<p>And we can estimate a confidence interval for the difference between the two means:</p>
<pre class="r"><code>props <- DT[, .(success = sum(y), n = .N), keyby = rx]
setorder(props, -rx)

round(tidy(prop.test(props$success, props$n, 
                     correct = FALSE, conf.level = 0.95))[, -c(5, 8, 9)], 3)</code></pre>
<pre><code>## estimate1 estimate2 statistic p.value conf.low conf.high
## 1 0.142 0.276 27.154 0 -0.184 -0.084</code></pre>
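<p>With <code>correct = FALSE</code>, this is just the Wald interval for a difference in proportions, so it can be reproduced by hand. (The counts below are implied by the proportions above, assuming 500 subjects per arm from the balanced assignment.)</p>

```python
import math

# Wald 95% CI for p_new - p_old; counts implied by the output above
# (assumption: 500 subjects per arm after balanced treatment assignment)
x_new, x_old, n = 71, 138, 500
p_new, p_old = x_new / n, x_old / n

diff = p_new - p_old
se = math.sqrt(p_new * (1 - p_new) / n + p_old * (1 - p_old) / n)

print(round(diff - 1.96 * se, 3), round(diff + 1.96 * se, 3))   # -0.184 -0.084
```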
<p>If we generate 1000 data sets in the same way, we can count the number of occurrences where we would incorrectly reject the null hypothesis (i.e., commit a Type 1 error):</p>
<pre class="r"><code>powerRet <- function(nPerGrp, level, effect, d = NULL) {
  
  Form <- genFormula(c(0.30, -effect), c("rx"))
  
  def <- defDataAdd(varname = "y", formula = Form, dist = "binary")
  DT <- genData(nPerGrp*2) %>% trtAssign(dtName = ., grpName = "rx")
  
  iter <- 1000
  ci <- data.table()
  
  # generate 1000 data sets and store results each time in "ci"
  for (i in 1:iter) {
    
    dx <- addColumns(def, DT)
    
    props <- dx[, .(success = sum(y), n = .N), keyby = rx]
    setorder(props, -rx)
    
    ptest <- prop.test(props$success, props$n, correct = FALSE, 
                       conf.level = level)
    
    ci <- rbind(ci, data.table(t(ptest$conf.int), 
                               diff = ptest$estimate[1] - ptest$estimate[2]))
  }
  
  setorder(ci, V1)
  ci[, i := 1:.N]
  
  # for sample size calculation at 80% power
  if (is.null(d)) d <- ci[i == .2 * .N, V1]
  ci[, d := d]
  
  # determine if interval crosses threshold
  ci[, nullTrue := (V1 <= d)]
  
  return(ci[])
}</code></pre>
<p>Using 95% CIs, we expect 2.5% of the intervals to lie to the right of the non-inferiority threshold. That is, 2.5% of the time we would reject the null hypothesis when we shouldn’t:</p>
<pre class="r"><code>ci <- powerRet(nPerGrp = 500, level = 0.95, effect = 0.15, d = -0.15)
formattable::percent(ci[, mean(!(nullTrue))], 1)</code></pre>
<pre><code>## [1] 2.4%</code></pre>
<p>And using 90% CIs, we expect 5% of the intervals to lie to the right of the threshold:</p>
<pre class="r"><code>ci <- powerRet(nPerGrp = 500, level = 0.90, effect = 0.15, d = -0.15)
formattable::percent(ci[, mean(!(nullTrue))], 1)</code></pre>
<pre><code>## [1] 5.1%</code></pre>
</div>
<div id="sample-size-estimates" class="section level2">
<h2>Sample size estimates</h2>
<p>If we do not expect the effect sizes to be different across interventions, it seems reasonable to find the sample size under this assumption of no effect. Assuming we want to set <span class="math inline">\(\alpha = 0.025\)</span>, we generate many data sets and estimate the 95% confidence interval for each one. The power is merely the proportion of these confidence intervals that lie entirely to the right of <span class="math inline">\(-\Delta\)</span>.</p>
<p>But how should we set <span class="math inline">\(\Delta\)</span>? I’d propose that for each candidate sample size level, we find <span class="math inline">\(-\Delta\)</span> such that 80% of the simulated confidence intervals lie to the right of some value, where 80% is the desired power of the test (i.e., given that there is no treatment effect, 80% of the (hypothetical) experiments we conduct will lead us to conclude that the new treatment is <em>non-inferior</em> to the old treatment).</p>
<pre class="r"><code>ci <- powerRet(nPerGrp = 200, level = 0.95, effect = 0)
p1 <- plotCIs(ci, 200, 0.95)

ci <- powerRet(nPerGrp = 500, level = 0.95, effect = 0)
p2 <- plotCIs(ci, 500, 0.95)</code></pre>
<pre class="r"><code>library(gridExtra)

grid.arrange(p1, p2, nrow = 1, 
             bottom = "difference in proportion", left = "iterations")</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-05-14-are-non-inferiority-trials-inferior_files/figure-html/unnamed-chunk-10-1.png" width="912" /></p>
<p>It is clear that increasing the sample size reduces the width of the 95% confidence intervals. As a result, the non-inferiority threshold based on 80% power is shifted closer towards zero when sample size increases. This implies that a larger sample size allows us to make a more compelling statement about non-inferiority.</p>
<p>Unfortunately, not all non-inferiority statements are alike. If we set <span class="math inline">\(\Delta\)</span> too large, we may expand the bounds of non-inferiority beyond a reasonable, justifiable point. Given that there is no actual constraint on what <span class="math inline">\(\Delta\)</span> can be, I would say that the non-inferiority test is somewhat more problematic than its closely related cousin, the superiority test, where <span class="math inline">\(\Delta\)</span> is in effect fixed at zero. But, if we take this approach, where we identify <span class="math inline">\(\Delta\)</span> that satisfies the desired power, we can make a principled decision about whether or not the threshold is within reasonable bounds.</p>
</div>
How efficient are multifactorial experiments?
https://www.rdatagen.net/post/so-how-efficient-are-multifactorial-experiments-part/
Wed, 02 May 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/so-how-efficient-are-multifactorial-experiments-part/<p>I <a href="https://www.rdatagen.net/post/testing-many-interventions-in-a-single-experiment/">recently described</a> why we might want to conduct a multi-factorial experiment, and I alluded to the fact that this approach can be quite efficient. It is efficient in the sense that it is possible to test simultaneously the impact of <em>multiple</em> interventions using an overall sample size that would be required to test a <em>single</em> intervention in a more traditional RCT. I demonstrate that here, first with a continuous outcome and then with a binary outcome.</p>
<p>In all of the examples that follow, I am assuming we are in an exploratory phase of research, so our alpha levels are relaxed a bit to <span class="math inline">\(\alpha = 0.10\)</span>. In addition, we make no adjustments for multiple testing. This might be justifiable, since we are not as concerned about making a Type 1 error (concluding an effect is real when there isn’t actually one). Because this is a screening exercise, the selected interventions will be re-evaluated. At the same time, we are setting desired power to be 90%. This way, if an effect really exists, we are more likely to select it for further review.</p>
<div id="two-scenarios-with-a-continuous-outcome" class="section level2">
<h2>Two scenarios with a continuous outcome</h2>
<p>To start, I have created two sets of underlying assumptions. In the first, the effects of the four interventions (labeled <em>fac1</em>, <em>fac2</em>, <em>fac3</em>, and <em>fac4</em>) are additive. (The factor variables are parameterized using <em>effect</em>-style notation, where the value -1 represents no intervention and 1 represents the intervention.) So, with no interventions the outcome is 0, and each successive intervention adds 0.8 to the observed outcome (on average), so that individuals exposed to all four factors will have an average outcome <span class="math inline">\(4 \times 0.8 = 3.2\)</span>.</p>
<pre class="r"><code>cNoX <- defReadCond("DataMF/FacSumContNoX.csv")
cNoX</code></pre>
<pre><code>##                             condition formula variance   dist     link
## 1: (fac1 + fac2 + fac3 + fac4) == -4     0.0      9.3 normal identity
## 2: (fac1 + fac2 + fac3 + fac4) == -2     0.8      9.3 normal identity
## 3:  (fac1 + fac2 + fac3 + fac4) == 0     1.6      9.3 normal identity
## 4:  (fac1 + fac2 + fac3 + fac4) == 2     2.4      9.3 normal identity
## 5:  (fac1 + fac2 + fac3 + fac4) == 4     3.2      9.3 normal identity</code></pre>
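<p>To read the table: with effect coding, a condition value of <span class="math inline">\(s\)</span> corresponds to <span class="math inline">\(k = (s + 4)/2\)</span> active interventions, so the formula column is just <span class="math inline">\(0.8k\)</span>. A quick base-R check of that mapping:</p>

```r
# With effect coding (-1/1), a factor sum of s corresponds to
# k = (s + 4) / 2 active interventions; the additive scenario
# adds 0.8 per intervention
s <- c(-4, -2, 0, 2, 4)
k <- (s + 4) / 2
round(0.8 * k, 1)  # matches the formula column: 0.0 0.8 1.6 2.4 3.2
```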
<p>In the second scenario, each successive exposure continues to add to the effect, but each additional intervention adds a little less. The first intervention adds 0.8, the second adds 0.6, the third adds 0.4, and the fourth adds 0.2. This is a form of interaction.</p>
<pre class="r"><code>cX <- defReadCond("DataMF/FacSumContX.csv")
cX</code></pre>
<pre><code>##                             condition formula variance   dist     link
## 1: (fac1 + fac2 + fac3 + fac4) == -4     0.0      9.3 normal identity
## 2: (fac1 + fac2 + fac3 + fac4) == -2     0.8      9.3 normal identity
## 3:  (fac1 + fac2 + fac3 + fac4) == 0     1.4      9.3 normal identity
## 4:  (fac1 + fac2 + fac3 + fac4) == 2     1.8      9.3 normal identity
## 5:  (fac1 + fac2 + fac3 + fac4) == 4     2.0      9.3 normal identity</code></pre>
<p>This is what a plot of the means might look like for each of the scenarios. The straight line represents the additive (non-interactive) scenario, and the bent line is the interaction scenario:</p>
<p><img src="https://www.rdatagen.net/post/2018-05-02-how-efficient-are-multifactorial-experiments-part-2-of-2_files/figure-html/unnamed-chunk-4-1.png" width="576" /></p>
<div id="sample-size-requirement-for-a-single-intervention-compared-to-control" class="section level3">
<h3>Sample size requirement for a single intervention compared to control</h3>
<p>If we were to conduct a more traditional randomized experiment with two groups - treatment and control - we would need about 500 total subjects under the assumptions that we are using:</p>
<pre class="r"><code>power.t.test(power = 0.90, delta = .8, sd = 3.05, sig.level = 0.10)</code></pre>
<pre><code>## 
##      Two-sample t test power calculation 
## 
##               n = 249.633
##           delta = 0.8
##              sd = 3.05
##       sig.level = 0.1
##           power = 0.9
##     alternative = two.sided
## 
## NOTE: n is number in *each* group</code></pre>
<p>To take a look at the sample size requirements for a multi-factorial study, I’ve written this function that repeatedly samples data based on the definitions and fits the appropriate model, storing the results after each model estimation.</p>
<pre class="r"><code>library(simstudy)

iterFunc <- function(dc, dt, seed = 464653, iter = 1000, binary = FALSE) {
  
  set.seed(seed)
  res <- list()
  
  for (i in 1:iter) {
    
    dx <- addCondition(dc, dt, "Y")
    
    if (binary == FALSE) {
      fit <- lm(Y ~ fac1*fac2*fac3*fac4, data = dx)
    } else {
      fit <- glm(Y ~ fac1*fac2*fac3*fac4, data = dx, family = binomial)
    }
    
    # appendRes is a simple helper (not shown here) that pulls the
    # estimates and p-values from the fit
    res <- appendRes(res, fit)
  }
  
  return(res)
}</code></pre>
<p>And finally, here are the results for the sample size requirements based on no interaction across interventions. (I am using the function <code>genMultiFac</code> to generate replications of all the combinations of the four factors. This function is now part of <code>simstudy</code>, which is available on GitHub, and will hopefully soon be up on CRAN.)</p>
<pre class="r"><code>dt <- genMultiFac(32, nFactors = 4, coding = "effect",
                  colNames = paste0("fac", c(1:4)))

res <- iterFunc(cNoX, dt)</code></pre>
<pre class="r"><code>apply(res$p[, .(fac1, fac2, fac3, fac4)] < 0.10, 2, mean)</code></pre>
<pre><code>## fac1 fac2 fac3 fac4
## 0.894 0.895 0.905 0.902</code></pre>
<p>A sample size of <span class="math inline">\(32 \times 16 = 512\)</span> gives us the 90% power we are seeking. In case you don’t believe my simulation, we can compare the estimate provided by the <code>MOST</code> package, created by the <a href="https://methodology.psu.edu/ra/most">Methodology Center at Penn State</a>:</p>
<pre class="r"><code>library(MOST)

FactorialPowerPlan(alpha = 0.10, model_order = 1, nfactors = 4,
                   ntotal = 500, sigma_y = 3.05, raw_main = 0.8)$power</code></pre>
<pre><code>## [1] "------------------------------------------------------------"
## [1] "FactorialPowerPlan Macro"
## [1] "The Methodology Center"
## [1] "(c) 2012 Pennsylvania State University"
## [1] "------------------------------------------------------------"
## [1] "Assumptions:"
## [1] "There are 4 dichotomous factors."
## [1] "There is independent random assignment."
## [1] "Analysis will be based on main effects only."
## [1] "Two-sided alpha: 0.10"
## [1] "Total number of participants: 500"
## [1] "Effect size as unstandardized difference in means: 0.80"
## [1] "Assumed standard deviation for the response variable is 3.05"
## [1] "Attempting to calculate the estimated power."
## [1] "------------------------------------------------------------"
## [1] "Results:"
## [1] "The calculated power is 0.9004"</code></pre>
<pre><code>## [1] 0.9004</code></pre>
</div>
<div id="interaction" class="section level3">
<h3>Interaction</h3>
<p>A major advantage of the multi-factorial experiment over the traditional RCT, of course, is that it allows us to investigate if the interventions interact in any interesting ways. However, in practice it may be difficult to generate sample sizes large enough to measure these interactions with much precision.</p>
<p>In the next pair of simulations, we see that even if we are only interested in exploring the main effects, underlying interaction reduces power. If there is actually interaction (as in the second scenario defined above), the original sample size of 500 may be inadequate to estimate the main effects:</p>
<pre class="r"><code>dt <- genMultiFac(31, nFactors = 4, coding = "effect",
                  colNames = paste0("fac", c(1:4)))
res <- iterFunc(cX, dt)

apply(res$p[, .(fac1, fac2, fac3, fac4)] < 0.10, 2, mean)</code></pre>
<pre><code>## fac1 fac2 fac3 fac4
## 0.567 0.556 0.588 0.541</code></pre>
<p>Here, a total sample of about 1300 does the trick:</p>
<pre class="r"><code>dt <- genMultiFac(81, nFactors = 4, coding = "effect",
                  colNames = paste0("fac", c(1:4)))
res <- iterFunc(cX, dt)

apply(res$p[, .(fac1, fac2, fac3, fac4)] < 0.10, 2, mean)</code></pre>
<pre><code>## fac1 fac2 fac3 fac4
## 0.898 0.893 0.908 0.899</code></pre>
<p>But this sample size is not adequate to estimate the actual second degree interaction terms:</p>
<pre class="r"><code>apply(res$p[, .(`fac1:fac2`, `fac1:fac3`, `fac1:fac4`,
                `fac2:fac3`, `fac2:fac4`, `fac3:fac4`)] < 0.10, 2, mean)</code></pre>
<pre><code>## fac1:fac2 fac1:fac3 fac1:fac4 fac2:fac3 fac2:fac4 fac3:fac4
## 0.144 0.148 0.163 0.175 0.138 0.165</code></pre>
<p>You would actually need a sample size of about 32,000 to be adequately powered to estimate the interaction! Of course, this requirement is driven by the size of the interaction effects and the variation, so maybe this is a bit extreme:</p>
<pre class="r"><code>dt <- genMultiFac(2000, nFactors = 4, coding = "effect",
                  colNames = paste0("fac", c(1:4)))
res <- iterFunc(cX, dt)

apply(res$p[, .(`fac1:fac2`, `fac1:fac3`, `fac1:fac4`,
                `fac2:fac3`, `fac2:fac4`, `fac3:fac4`)] < 0.10, 2, mean)</code></pre>
<pre><code>## fac1:fac2 fac1:fac3 fac1:fac4 fac2:fac3 fac2:fac4 fac3:fac4
## 0.918 0.902 0.888 0.911 0.894 0.886</code></pre>
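<p>As a rough analytic check of the 32,000 figure (a sketch that assumes equal cell sizes and uses the scenario means above), we can recover the implied effect-coded two-way interaction coefficient and back out the required sample size:</p>

```r
# Recover the effect-coded coefficients implied by the interaction-scenario
# means, then back out the N needed for 90% power at two-sided alpha = 0.10
d <- expand.grid(rep(list(c(-1, 1)), 4))
names(d) <- paste0("fac", 1:4)

s <- rowSums(d[, 1:4])
d$mu <- c(0, 0.8, 1.4, 1.8, 2.0)[(s + 4) / 2 + 1]  # means by # of interventions

# saturated fit on the 16 cell means; the design is orthogonal, so the
# coefficients are exact
b12 <- coef(lm(mu ~ fac1 * fac2 * fac3 * fac4, data = d))["fac1:fac2"]
b12  # implied two-way interaction coefficient: -0.05

# with residual sd = 3.05, the total N needed to detect b12:
N <- ((qnorm(0.95) + qnorm(0.90)) * 3.05 / abs(b12))^2
round(N)  # roughly 32,000
```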
</div>
</div>
<div id="a-binary-outcome" class="section level2">
<h2>A binary outcome</h2>
<p>The situation with the binary outcome is really no different than the continuous outcome, except for the fact that sample size requirements might be much more sensitive to the strength of underlying interaction.</p>
<p>Again, we have two scenarios - one with interaction and one without. When I talk about an additive (non-interaction) model in this context, the additivity is on the log-odds scale. This becomes apparent when looking at a plot.</p>
<p>I want to reiterate here that we have interaction when there are limits to how much marginal effect an additional intervention can have conditional on the presence of other interventions. In a recent project (one that motivated this pair of blog entries), we started with the assumption that a single intervention would have a 5 percentage point effect on the outcome (which was smoking cessation), but a combination of all four interventions might only get a 10 percentage point reduction. This cap generates severe interaction, which dramatically affects sample size requirements, as we see below (using even less restrictive interaction assumptions).</p>
<p>No interaction:</p>
<pre><code>##                             condition formula variance   dist     link
## 1: (fac1 + fac2 + fac3 + fac4) == -4    0.10       NA binary identity
## 2: (fac1 + fac2 + fac3 + fac4) == -2    0.18       NA binary identity
## 3:  (fac1 + fac2 + fac3 + fac4) == 0    0.30       NA binary identity
## 4:  (fac1 + fac2 + fac3 + fac4) == 2    0.46       NA binary identity
## 5:  (fac1 + fac2 + fac3 + fac4) == 4    0.63       NA binary identity</code></pre>
<p>Interaction:</p>
<pre><code>##                             condition formula variance   dist     link
## 1: (fac1 + fac2 + fac3 + fac4) == -4    0.10       NA binary identity
## 2: (fac1 + fac2 + fac3 + fac4) == -2    0.18       NA binary identity
## 3:  (fac1 + fac2 + fac3 + fac4) == 0    0.24       NA binary identity
## 4:  (fac1 + fac2 + fac3 + fac4) == 2    0.28       NA binary identity
## 5:  (fac1 + fac2 + fac3 + fac4) == 4    0.30       NA binary identity</code></pre>
<p>The plot highlights that additivity is on the log-odds scale only:</p>
<p><img src="https://www.rdatagen.net/post/2018-05-02-how-efficient-are-multifactorial-experiments-part-2-of-2_files/figure-html/unnamed-chunk-16-1.png" width="576" /></p>
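<p>The same point can be checked numerically with the no-interaction probabilities above: the successive log-odds differences are roughly constant.</p>

```r
# Probabilities from the no-interaction definition above; with additivity
# on the log-odds scale, each two-unit step in the factor sum adds a
# constant amount (~0.68) to the log-odds
p <- c(0.10, 0.18, 0.30, 0.46, 0.63)
round(diff(qlogis(p)), 2)  # steps are all close to 0.68
```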
<p>The sample size requirement for a treatment effect of 8 percentage points for a single intervention compared to control is about 640 total participants:</p>
<pre class="r"><code>power.prop.test(power = 0.90, p1 = .10, p2 = .18, sig.level = 0.10)</code></pre>
<pre><code>## 
##      Two-sample comparison of proportions power calculation 
## 
##               n = 320.3361
##              p1 = 0.1
##              p2 = 0.18
##       sig.level = 0.1
##           power = 0.9
##     alternative = two.sided
## 
## NOTE: n is number in *each* group</code></pre>
<p>Simulation shows that the multi-factorial experiment requires only about 500 participants (<span class="math inline">\(31 \times 16 = 496\)</span>), a pretty surprising reduction:</p>
<pre class="r"><code>dt <- genMultiFac(31, nFactors = 4, coding = "effect",
                  colNames = paste0("fac", c(1:4)))
res <- iterFunc(bNoX, dt, binary = TRUE)

apply(res$p[, .(fac1, fac2, fac3, fac4)] < 0.10, 2, mean)</code></pre>
<pre><code>## fac1 fac2 fac3 fac4
## 0.889 0.910 0.916 0.901</code></pre>
<p>But, if there is a cap on how much we can affect the outcome (i.e. there is underlying interaction), estimated power is considerably reduced:</p>
<pre class="r"><code>dt <- genMultiFac(31, nFactors = 4, coding = "effect",
                  colNames = paste0("fac", c(1:4)))
res <- iterFunc(bX, dt, binary = TRUE)

apply(res$p[, .(fac1, fac2, fac3, fac4)] < 0.10, 2, mean)</code></pre>
<pre><code>## fac1 fac2 fac3 fac4
## 0.398 0.409 0.405 0.392</code></pre>
<p>We need to increase the sample size to about <span class="math inline">\(125 \times 16 = 2000\)</span> just to explore the main effects:</p>
<pre class="r"><code>dt <- genMultiFac(125, nFactors = 4, coding = "effect",
                  colNames = paste0("fac", c(1:4)))
res <- iterFunc(bX, dt, binary = TRUE)

apply(res$p[, .(fac1, fac2, fac3, fac4)] < 0.10, 2, mean)</code></pre>
<pre><code>## fac1 fac2 fac3 fac4
## 0.910 0.890 0.895 0.887</code></pre>
<p>I think the biggest take away from all of this is that multi-factorial experiments are a super interesting option when exploring possible interventions or combinations of interventions, particularly when the outcome is continuous. However, this approach may not be as feasible when the outcome is binary, as sample size requirements may quickly become prohibitive, given the number of factors, sample sizes, and extent of interaction.</p>
</div>
Testing multiple interventions in a single experiment
https://www.rdatagen.net/post/testing-many-interventions-in-a-single-experiment/
Thu, 19 Apr 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/testing-many-interventions-in-a-single-experiment/<p>A reader recently inquired about functions in <code>simstudy</code> that could generate data for a balanced multi-factorial design. I had to report that nothing really exists. A few weeks later, a colleague of mine asked if I could help estimate the appropriate sample size for a study that plans to use a multi-factorial design to choose among a set of interventions to improve rates of smoking cessation. In the course of exploring this, I realized it would be super helpful if the function suggested by the reader actually existed. So, I created <code>genMultiFac</code>. And since it is now written (though not yet implemented), I thought I’d share some of what I learned (and maybe not yet learned) about this innovative study design.</p>
<div id="generating-multi-factorial-data" class="section level3">
<h3>Generating multi-factorial data</h3>
<p>First, a bit about multi-factorial data. A single factor is a categorical variable that can have any number of levels. In this context, the factor is usually describing some level of intervention or exposure. As an example, if we want to expose some material to one of three temperature settings, the variable would take on the values “cold”, “moderate”, or “hot”.</p>
<p>In the case of multiple factors, we would have, yes, more than one factor. If we wanted to expose the material to different temperatures as well as varying wind conditions, we would have two factors to contend with. We could characterize the wind level as “low” or “high”. In a multi-factorial experiment, we would expose different pieces of the same material to all possible combinations of these two factors. Ideally, each combination would be represented the same number of times - in which case we have a <em>balanced</em> experiment. In this simple example, there are <span class="math inline">\(2 \times 3 = 6\)</span> possible combinations.</p>
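<p>As a quick base-R sketch (independent of any package), the full crossing of these two factors can be generated with <code>expand.grid</code>:</p>

```r
# Base-R sketch: the full crossing of temperature (3 levels) and
# wind (2 levels) gives the 6 possible combinations
combos <- expand.grid(temp = c("cold", "moderate", "hot"),
                      wind = c("low", "high"))
nrow(combos)  # 6
```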
<p>The function <code>genMultiFac</code> has not yet been implemented in simstudy, but the next version will include it. (I am including the code in an appendix at the end of this post in case you can’t wait.) To generate a dataset, specify the number of replications, the number of factors, and the number of levels within each factor:</p>
<pre class="r"><code>library(simstudy)

dmf <- genMultiFac(each = 2, nFactors = 2, levels = c(3, 2),
                   colNames = c("temp", "wind"))

genFactor(dmf, "temp", labels = c("cold", "moderate", "hot"),
          replace = TRUE)</code></pre>
<pre><code>## id wind ftemp
## 1: 1 1 cold
## 2: 2 1 cold
## 3: 3 1 moderate
## 4: 4 1 moderate
## 5: 5 1 hot
## 6: 6 1 hot
## 7: 7 2 cold
## 8: 8 2 cold
## 9: 9 2 moderate
## 10: 10 2 moderate
## 11: 11 2 hot
## 12: 12 2 hot</code></pre>
<pre class="r"><code>genFactor(dmf, "wind", labels = c("low", "high"),
          replace = TRUE)</code></pre>
<pre><code>## id ftemp fwind
## 1: 1 cold low
## 2: 2 cold low
## 3: 3 moderate low
## 4: 4 moderate low
## 5: 5 hot low
## 6: 6 hot low
## 7: 7 cold high
## 8: 8 cold high
## 9: 9 moderate high
## 10: 10 moderate high
## 11: 11 hot high
## 12: 12 hot high</code></pre>
<pre class="r"><code>dmf</code></pre>
<pre><code>## id ftemp fwind
## 1: 1 cold low
## 2: 2 cold low
## 3: 3 moderate low
## 4: 4 moderate low
## 5: 5 hot low
## 6: 6 hot low
## 7: 7 cold high
## 8: 8 cold high
## 9: 9 moderate high
## 10: 10 moderate high
## 11: 11 hot high
## 12: 12 hot high</code></pre>
<p>Here is a second example using four factors with two levels each using dummy style coding. In this case, there are <span class="math inline">\(2^4=16\)</span> possible combinations (though we are only showing the first eight rows). In general, if there are <span class="math inline">\(k\)</span> factors each with 2 levels, there will be <span class="math inline">\(2^k\)</span> possible combinations:</p>
<pre class="r"><code>genMultiFac(each = 1, nFactors = 4)[1:8, ]</code></pre>
<pre><code>## id Var1 Var2 Var3 Var4
## 1: 1 0 0 0 0
## 2: 2 1 0 0 0
## 3: 3 0 1 0 0
## 4: 4 1 1 0 0
## 5: 5 0 0 1 0
## 6: 6 1 0 1 0
## 7: 7 0 1 1 0
## 8: 8 1 1 1 0</code></pre>
</div>
<div id="the-multi-factorial-study-design" class="section level3">
<h3>The multi-factorial study design</h3>
<p>A multi-factorial experiment is an innovative way to efficiently explore the effectiveness of a large number of innovations in a single experiment. There is a vast literature on the topic, much of which has been written by the <a href="https://methodology.psu.edu/">Penn State Methodology Center</a>. My colleague plans on using this design in the context of a multi phase optimization strategy (MOST), which is described in an excellent <a href="https://www.springer.com/us/book/9783319722054">new book</a> by Linda Collins.</p>
<p>My colleague is interested in conducting a smallish-scale study of four possible interventions in order to identify the most promising one for a considerably larger follow-up study. He is open to the idea that the best intervention might actually be a combination of two (though probably not three). One way to do this would be to conduct an RCT with 5 groups, one for each intervention plus a control. The RCT has two potential problems: the sample size requirements could be prohibitive since we are essentially doing 4 RCTs, and there would be no way to assess how interventions work together. The second shortcoming could be addressed by explicitly testing certain combinations, but this would only exacerbate the sample size requirements.</p>
<p>The multi-factorial design addresses both of these potential problems. A person (or unit of analysis) is randomized to a combination of factors. So, in the case of 4 factors, an individual would be assigned to 1 of 16 groups. We can assess the effect of a specific intervention by averaging the effect size across the different combinations of the other interventions. This is easy to see with the aid of a simulation - so let’s do that (using 3 interventions to keep it a bit simpler).</p>
<pre class="r"><code># define the outcome
def <- defCondition(condition = "(f1 + f2 + f3) == 0",
                    formula = 10, variance = 1, dist = "normal")
def <- defCondition(def, condition = "(f1 + f2 + f3) == 1",
                    formula = 14, variance = 1, dist = "normal")
def <- defCondition(def, condition = "(f1 + f2 + f3) == 2",
                    formula = 18, variance = 1, dist = "normal")
def <- defCondition(def, condition = "(f1 + f2 + f3) == 3",
                    formula = 22, variance = 1, dist = "normal")

# generate the data
set.seed(19287623)

dx <- genMultiFac(20, nFactors = 3, coding = "dummy",
                  colNames = c("f1", "f2", "f3"))
dx <- addCondition(def, dx, newvar = "Y")

# take a look at the data
dx</code></pre>
<pre><code>## id Y f1 f2 f3
## 1: 1 7.740147 0 0 0
## 2: 2 8.718723 0 0 0
## 3: 3 11.538076 0 0 0
## 4: 4 10.669877 0 0 0
## 5: 5 10.278514 0 0 0
## ---
## 156: 156 22.516949 1 1 1
## 157: 157 20.372538 1 1 1
## 158: 158 22.741737 1 1 1
## 159: 159 20.066335 1 1 1
## 160: 160 21.043386 1 1 1</code></pre>
<p>We can estimate the average outcome for each level of Factor 1 within each combination of Factors 2 and 3. When we do this, it is readily apparent that the effect size (comparing <span class="math inline">\(\bar{Y}_{f1=1}\)</span> and <span class="math inline">\(\bar{Y}_{f1=0}\)</span> within each combination) is about 4:</p>
<pre class="r"><code>dx[f2 == 0 & f3 == 0, round(mean(Y),1), keyby = f1]</code></pre>
<pre><code>## f1 V1
## 1: 0 9.7
## 2: 1 14.4</code></pre>
<pre class="r"><code>dx[f2 == 0 & f3 == 1, round(mean(Y),1), keyby = f1]</code></pre>
<pre><code>## f1 V1
## 1: 0 14.1
## 2: 1 18.3</code></pre>
<pre class="r"><code>dx[f2 == 1 & f3 == 0, round(mean(Y),1), keyby = f1]</code></pre>
<pre><code>## f1 V1
## 1: 0 14
## 2: 1 18</code></pre>
<pre class="r"><code>dx[f2 == 1 & f3 == 1, round(mean(Y),1), keyby = f1]</code></pre>
<pre><code>## f1 V1
## 1: 0 17.9
## 2: 1 21.6</code></pre>
<p>And if we actually calculate the average across the four combinations, we see that the overall average effect is also 4:</p>
<pre class="r"><code>d1 <- dx[f1 == 1, .(avg = mean(Y)), keyby = .(f2, f3)]
d0 <- dx[f1 == 0, .(avg = mean(Y)), keyby = .(f2, f3)]
mean(d1$avg - d0$avg)</code></pre>
<pre><code>## [1] 4.131657</code></pre>
<p>The same is true for the other two interventions:</p>
<pre class="r"><code>d1 <- dx[f2 == 1, .(avg = mean(Y)), keyby = .(f1, f3)]
d0 <- dx[f2 == 0, .(avg = mean(Y)), keyby = .(f1, f3)]
mean(d1$avg - d0$avg)</code></pre>
<pre><code>## [1] 3.719557</code></pre>
<pre class="r"><code>d1 <- dx[f3 == 1, .(avg = mean(Y)), keyby = .(f1, f2)]
d0 <- dx[f3 == 0, .(avg = mean(Y)), keyby = .(f1, f2)]
mean(d1$avg - d0$avg)</code></pre>
<pre><code>## [1] 3.933804</code></pre>
<p>Of course, these adjusted intervention effects are much easier to estimate using linear regression.</p>
<pre class="r"><code>library(broom)
tidy(lm(Y ~ f1 + f2 + f3, data = dx))[1:3]</code></pre>
<pre><code>## # A tibble: 4 x 3
## term estimate std.error
## <chr> <dbl> <dbl>
## 1 (Intercept) 10.1 0.161
## 2 f1 4.13 0.161
## 3 f2 3.72 0.161
## 4 f3 3.93 0.161</code></pre>
</div>
<div id="compare-with-an-rct" class="section level3">
<h3>Compare with an RCT</h3>
<p>In the scenario I just simulated, there was no interaction between the various interventions. That is, the treatment effect of Factor 1 does not depend on the exposure to the other two factors. This was the second limitation of using a more standard RCT approach - but I will not address this just yet.</p>
<p>Here, I want to take a look at how sample size requirements can increase pretty dramatically if we take a more straightforward RCT approach. Previously, a sample of 160 individuals in the multi-factorial design resulted in a standard error of the treatment effect estimates close to 0.16. In order to get comparable precision in the RCT design, we would need about 300 total patients:</p>
<pre class="r"><code>defRCT <- defDataAdd(varname = "Y", formula = "10 + (trt != 1)*4",
                     variance = 1, dist = "normal")

dr <- genData(300)
dr <- trtAssign(dr, nTrt = 4, grpName = "trt")
dr <- addColumns(defRCT, dr)

tidy(lm(Y ~ factor(trt), data = dr))[1:3]</code></pre>
<pre><code>## # A tibble: 4 x 3
## term estimate std.error
## <chr> <dbl> <dbl>
## 1 (Intercept) 9.90 0.114
## 2 factor(trt)2 3.97 0.161
## 3 factor(trt)3 4.07 0.161
## 4 factor(trt)4 3.74 0.161</code></pre>
</div>
<div id="interaction" class="section level3">
<h3>Interaction</h3>
<p>It may be the case that an intervention is actually more effective in the presence of a second intervention - and this might be useful information to have when developing the ideal approach (which could be combination of more than one). In the following 3-factor scenario, Factors 1 and 2 each have an effect alone, but together the effect is even stronger. Factor 3 has no effect.</p>
<pre class="r"><code>dint <- genMultiFac(100, nFactors = 3, coding = "dummy",
                    colNames = c("f1", "f2", "f3"))

defA <- defDataAdd(varname = "Y",
                   formula = "10 + 5*f1 + 5*f2 + 0*f3 + 5*f1*f2",
                   variance = 1, dist = "normal")

dint <- addColumns(defA, dint)</code></pre>
<p>If we look at a plot of the averages, we can see that the effect of Factor 1 alone without Factor 2 is about 5, regardless of what Factor 3 is. However, the effect of Factor 1 when Factor 2 is implemented as well is 10:</p>
<p><img src="https://www.rdatagen.net/post/2018-04-19-testing-many-interventions-in-a-single-experiment_files/figure-html/unnamed-chunk-10-1.png" width="672" /></p>
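<p>The pattern in the plot can be read directly off the data-generating formula (dropping <em>f3</em>, which has a zero coefficient):</p>

```r
# Expected means implied by the data-generating formula above
g <- expand.grid(f1 = 0:1, f2 = 0:1)
g$mu <- with(g, 10 + 5 * f1 + 5 * f2 + 5 * f1 * f2)
g  # effect of f1 is 5 when f2 == 0, but 10 when f2 == 1
```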
<p>We can fit a linear model with the interaction term and draw the same conclusion. In this case, we might opt for a combination of Factors 1 & 2 to test in a larger study:</p>
<pre class="r"><code>tidy(lm(Y ~ f1 * f2 * f3, data = dint))[1:3]</code></pre>
<pre><code>## # A tibble: 8 x 3
## term estimate std.error
## <chr> <dbl> <dbl>
## 1 (Intercept) 9.95 0.103
## 2 f1 4.89 0.146
## 3 f2 5.33 0.146
## 4 f3 0.00175 0.146
## 5 f1:f2 4.96 0.206
## 6 f1:f3 0.293 0.206
## 7 f2:f3 -0.392 0.206
## 8 f1:f2:f3 -0.0973 0.291</code></pre>
<p>With a more traditional RCT approach, we would never have the opportunity to observe the interaction effect, since by definition each randomization group is limited to a single intervention.</p>
</div>
<div id="getting-a-little-technical-effect-vs.dummy-coding" class="section level3">
<h3>Getting a little technical: effect vs. dummy coding</h3>
<p>In the <a href="https://www.springer.com/us/book/9783319722054">book</a> I mentioned earlier, there is a lengthy discussion about two different ways to indicate the level of a 2-level factor in the estimation model. What I have been doing so far is what is called “dummy” coding, where the two levels are represented by 0 and 1.</p>
<pre class="r"><code>genMultiFac(1, nFactors = 2, coding = "dummy", levels = 2)</code></pre>
<pre><code>## id Var1 Var2
## 1: 1 0 0
## 2: 2 1 0
## 3: 3 0 1
## 4: 4 1 1</code></pre>
<p>An alternative way to code the levels, called “effect” coding in the literature, is to use -1 and +1 instead:</p>
<pre class="r"><code>genMultiFac(1, nFactors = 2, coding = "effect", levels = 2)</code></pre>
<pre><code>## id Var1 Var2
## 1: 1 -1 -1
## 2: 2 1 -1
## 3: 3 -1 1
## 4: 4 1 1</code></pre>
<p>There is not necessarily an ideal approach to take. One of the reasons that effect coding might be preferable is related to the precision of parameter estimates. In a linear regression model, the covariance matrix of the estimated coefficients is proportional to <span class="math inline">\((X^{\prime}X)^{-1}\)</span>, where <span class="math inline">\(X\)</span> is the design matrix. Let’s simulate a small design matrix based on “dummy” coding:</p>
<pre class="r"><code>dx <- genMultiFac(each = 2, nFactors = 2, coding = "dummy",
                  colNames = c("f1", "f2"))
dx[, f12 := f1 * f2]

dm <- as.matrix(dx[, -"id"])
dm <- cbind(rep(1, nrow(dm)), dm)
dm</code></pre>
<pre><code>## f1 f2 f12
## [1,] 1 0 0 0
## [2,] 1 0 0 0
## [3,] 1 1 0 0
## [4,] 1 1 0 0
## [5,] 1 0 1 0
## [6,] 1 0 1 0
## [7,] 1 1 1 1
## [8,] 1 1 1 1</code></pre>
<p>Here is <span class="math inline">\((X^{\prime}X)^{-1}\)</span> for the “dummy” model. The covariance matrix of the coefficients is a scalar function of this matrix. It is possible to see that the standard errors of the interaction term will be larger than the standard errors of the main effects term by looking at the diagonal of the matrix. (And looking at the off-diagonal terms, we can see that the coefficient estimates are not independent; that is, they co-vary.)</p>
<pre class="r"><code>solve(t(dm) %*% dm)</code></pre>
<pre><code>## f1 f2 f12
## 0.5 -0.5 -0.5 0.5
## f1 -0.5 1.0 0.5 -1.0
## f2 -0.5 0.5 1.0 -1.0
## f12 0.5 -1.0 -1.0 2.0</code></pre>
<p>And now the same thing with “effect” coding:</p>
<pre class="r"><code>dx <- genMultiFac(each = 2, nFactors = 2, coding = "effect",
                  colNames = c("f1", "f2"))
dx[, f12 := f1 * f2]

dm <- as.matrix(dx[, -"id"])
dm <- cbind(rep(1, nrow(dm)), dm)
dm</code></pre>
<pre><code>## f1 f2 f12
## [1,] 1 -1 -1 1
## [2,] 1 -1 -1 1
## [3,] 1 1 -1 -1
## [4,] 1 1 -1 -1
## [5,] 1 -1 1 -1
## [6,] 1 -1 1 -1
## [7,] 1 1 1 1
## [8,] 1 1 1 1</code></pre>
<p>Below, the values on the diagonal of the “effect” matrix are constant (and equal the reciprocal of the total number of observations), indicating that the standard errors will be constant across all coefficients. (And here, the off-diagonal terms all equal 0, indicating that the coefficient estimates are independent of each other, which may make it easier to interpret the coefficient estimates.)</p>
<pre class="r"><code>solve(t(dm) %*% dm)</code></pre>
<pre><code>## f1 f2 f12
## 0.125 0.000 0.000 0.000
## f1 0.000 0.125 0.000 0.000
## f2 0.000 0.000 0.125 0.000
## f12 0.000 0.000 0.000 0.125</code></pre>
<p>Here is model estimation of the data set <code>dint</code> we generated earlier with interaction. The first results are based on the original “dummy” coding, which we saw earlier:</p>
<pre class="r"><code>tidy(lm(Y ~ f1 * f2 * f3, data = dint))[1:3]</code></pre>
<pre><code>## # A tibble: 8 x 3
## term estimate std.error
## <chr> <dbl> <dbl>
## 1 (Intercept) 9.95 0.103
## 2 f1 4.89 0.146
## 3 f2 5.33 0.146
## 4 f3 0.00175 0.146
## 5 f1:f2 4.96 0.206
## 6 f1:f3 0.293 0.206
## 7 f2:f3 -0.392 0.206
## 8 f1:f2:f3 -0.0973 0.291</code></pre>
<p>And now changing the coding from “dummy” to “effect”, you can see that the standard error estimates are constant across the coefficients. This consistency can be particularly useful in maintaining statistical power when you are interested not just in main effects but interaction effects as well. (That said, it may still be difficult to have a large enough sample to pick up those interaction effects, just because they are typically smaller than main effects.)</p>
<pre class="r"><code>dint[f1 == 0, f1 := -1]
dint[f2 == 0, f2 := -1]
dint[f3 == 0, f3 := -1]
tidy(lm(Y ~ f1 * f2 * f3, data = dint))[1:3]</code></pre>
<pre><code>## # A tibble: 8 x 3
## term estimate std.error
## <chr> <dbl> <dbl>
## 1 (Intercept) 16.3 0.0364
## 2 f1 3.75 0.0364
## 3 f2 3.80 0.0364
## 4 f3 -0.0361 0.0364
## 5 f1:f2 1.23 0.0364
## 6 f1:f3 0.0610 0.0364
## 7 f2:f3 -0.110 0.0364
## 8 f1:f2:f3 -0.0122 0.0364</code></pre>
</div>
<div id="how-many-people-do-you-need" class="section level3">
<h3>How many people do you need?</h3>
<p>I started looking into these issues when my colleague asked me to estimate how many people he would need to enroll in his study. I won’t go into it here - maybe in a post soon to come - but I was running into a key challenge. The outcome that we are proposing is not continuous, but binary. Did the patient stop smoking or not? And given that it is really hard to get people to stop smoking, we would likely run into ceiling effects. If one intervention increases the proportion of people abstaining from 10% to 15%, two might be able to move that another 2 percentage points. And we might max out with 20% abstention rates for all four interventions applied simultaneously.</p>
<p>The implication of these assumptions (what I would call strong ceiling effects) is that there is pretty severe interaction. Not just two-way interaction, but three- and four-way as well. And logistic regression is notorious for having extremely low power when higher order interactions are involved. I am not sure there is a way around this problem, but I am open to suggestions.</p>
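<p>To see why a cap implies interaction, here is a small illustration with hypothetical numbers (mine, chosen in the spirit of the assumptions described above) for just two interventions:</p>

```r
# Hypothetical abstention probabilities: each intervention alone adds
# 5 percentage points, but the combination is capped at 17%
p00 <- 0.10  # neither intervention
p10 <- 0.15  # intervention 1 alone
p01 <- 0.15  # intervention 2 alone
p11 <- 0.17  # both - capped below the additive log-odds prediction

# the implied interaction on the log-odds scale is negative
qlogis(p11) - qlogis(p10) - qlogis(p01) + qlogis(p00)
```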
</div>
<div id="appendix-genmultifac-code" class="section level3">
<h3>Appendix: genMultiFac code</h3>
<p>I’ll leave you with the code to generate multi-factorial data:</p>
<pre class="r"><code>library(data.table)

genMultiFac <- function(each, nFactors = 2, coding = "dummy", levels = 2,
                        colNames = NULL, idName = "id") {
  
  if (nFactors < 2) stop("Must specify at least 2 factors")
  if (length(levels) > 1 & (length(levels) != nFactors))
    stop("Number of levels does not match factors")
  
  x <- list()
  
  if ( all(levels == 2) ) {
    
    if (coding == "effect") {
      opts <- c(-1, 1)
    } else if (coding == "dummy") {
      opts <- c(0, 1)
    } else {
      stop("Need to specify 'effect' or 'dummy' coding")
    }
    
    for (i in 1:nFactors) {
      x[[i]] <- opts
    }
    
  } else {
    
    if (length(levels) == 1) levels <- rep(levels, nFactors)
    for (i in 1:nFactors) x[[i]] <- c(1 : levels[i])
    
  }
  
  dt <- data.table(as.data.frame(
    lapply(expand.grid(x), function(x) rep(x, each = each)))
  )
  
  if (!is.null(colNames)) setnames(dt, colNames)
  
  origNames <- copy(names(dt))
  dt[, (idName) := 1:.N]
  
  setcolorder(dt, c(idName, origNames))
  setkeyv(dt, idName)
  
  return(dt[])
}</code></pre>
</div>
Exploring the underlying theory of the chi-square test through simulation - part 2
https://www.rdatagen.net/post/a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence-part-2/
Sun, 25 Mar 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence-part-2/<p>In the last <a href="https://www.rdatagen.net/post/a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence/">post</a>, I tried to provide a little insight into the chi-square test. In particular, I used simulation to demonstrate the relationship between the Poisson distribution of counts and the chi-squared distribution. The key point in that post was the role conditioning plays in that relationship by reducing variance.</p>
<p>To motivate some of the key issues, I talked a bit about recycling. I asked you to imagine a set of bins placed in different locations to collect glass bottles. I will stick with this scenario, but instead of just glass bottle bins, we now also have cardboard, plastic, and metal bins at each location. In this expanded scenario, we are interested in understanding the relationship between location and material. A key question that we might ask: is the distribution of materials the same across the sites? (Assume we are still just counting items and not considering volume or weight.)</p>
<div id="independence" class="section level3">
<h3>Independence</h3>
<p>If we tracked the number of items for a particular day, we could record the data in a contingency table, which in this case would be a <span class="math inline">\(4 \times 3\)</span> array. If we included the row and column totals, it might look like this:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-chisquare/ContingencyInd.png" />
</div>
<p>One way to inspect the data would be to calculate row- and column-specific proportions. From this (albeit stylized) example, it is apparent that the proportion of each material is constant across locations - 10% of the items are glass, roughly 30% are cardboard, 40% are plastic, and 20% are metal. Likewise, for each material, about 17% are in location 1, 33% in location 2, and 50% in location 3:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-chisquare/PropInd.png" />
</div>
<p>This consistency in proportions across rows and columns is the hallmark of independence. In more formal terms, <span class="math inline">\(P(M = m | L = l) = P(M = m)\)</span> and <span class="math inline">\(P(L = l|M = m) = P(L = l)\)</span>. The conditional probability (what we see in a particular row or column) is equal to the overall probability (what we see in the marginal, or total, row or column).</p>
<p>The actual definition of statistical independence with respect to materials and location is</p>
<p><span class="math display">\[ P(M=m \ \& \ L= l) = P(M=m) \times P(L=l) \]</span></p>
<p>The probability on the left is the cell-specific proportion (the count of the number of items with <span class="math inline">\(M=m\)</span> and <span class="math inline">\(L=l\)</span> divided by <span class="math inline">\(N\)</span>, the total number of items in the entire table). The two terms on the right side of the equation are the marginal row and column probabilities respectively. The table of overall proportions gives us an example of data generated from two characteristics that are independent:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-chisquare/PropNInd.png" />
</div>
<p>There are 116 plastic items in location 3, 19% of the overall items (<span class="math inline">\(116 \div 600\)</span>). The overall proportion of plastic items is 40%, the overall proportion of items in location 3 is 50%, and <span class="math inline">\(0.19 \approx 0.4 \times 0.5\)</span>. If we inspect all of the cells, the same approximation will hold.</p>
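<p>The same check can be run for every cell at once by taking the outer product of the marginal proportions (the margins here are transcribed from the table above):</p>

```r
# marginal proportions for material (rows) and location (columns)
pM <- c(glass = 0.1, cardboard = 0.3, plastic = 0.4, metal = 0.2)
pL <- c(loc1 = 1/6, loc2 = 1/3, loc3 = 1/2)

# under independence, each cell proportion is the product of its margins
pJoint <- outer(pM, pL)
round(pJoint["plastic", "loc3"], 2)   # 0.4 * 0.5 = 0.2
```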
</div>
<div id="dependence" class="section level3">
<h3>Dependence</h3>
<p>In the case where the distributions of materials differ across locations, we no longer have independence. Here is an example, though note that the marginal totals are unchanged:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-chisquare/ContingencyDep.png" />
</div>
<p>Looking across the row- and column-specific proportions, it is clear that something unique might be going on at each location:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-chisquare/PropDep.png" />
</div>
<p>It is apparent that the formal definition of independence might be violated: <span class="math inline">\(P(M=m \ \& \ L=l) \ne P(M=m)P(L=l)\)</span>. Look again at plastics in location 3: <span class="math inline">\(0.30 \ne 0.4 \times 0.5\)</span>.</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-chisquare/PropNDep.png" />
</div>
</div>
<div id="the-chi-square-test-of-independence" class="section level3">
<h3>The chi-square test of independence</h3>
<p>I have been making declarations about independence with my made up contingency tables, just because I was the all-knowing creator who made them up. Of course, when we collect actual data, we don’t have that luxury. That is where the chi-square test of independence helps us.</p>
<p>Here’s the general idea. We start off by making the initial assumption that the rows and columns are indeed independent (this is actually our null hypothesis). We then define a test statistic <span class="math inline">\(X^2\)</span> as</p>
<p><span class="math display">\[ X^2 = \sum_{m,l} \frac{(O_{ml} - E_{ml})^2}{E_{ml}}.\]</span> This is just a slight modification of the test statistic we saw in <a href="https://www.rdatagen.net/post/a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence/">part 1</a>, which was presented as a summary of a <span class="math inline">\(k \times 1\)</span> array. In this context, <span class="math inline">\(X^2\)</span> is just a summary of the <span class="math inline">\(M \times L\)</span> table. As previously discussed, <span class="math inline">\(X^2\)</span> has a <span class="math inline">\(\chi^2\)</span> distribution with a single parameter <span class="math inline">\(k\)</span> specifying the degrees of freedom.</p>
<p>The question is, how can we calculate <span class="math inline">\(X^2\)</span>? The observed counts <span class="math inline">\(O_{ml}\)</span> come straight from the data. But we don’t necessarily know <span class="math inline">\(E_{ml}\)</span>, the expected value of each cell in the contingency table. These expected values can be defined as <span class="math inline">\(E_{ml} = P(M=m \ \& \ L=l) \times N\)</span>. If we assume that <span class="math inline">\(N\)</span> is fixed, then we are halfway there. All that remains is the joint probability of <span class="math inline">\(M\)</span> and <span class="math inline">\(L\)</span>, <span class="math inline">\(P(M=m \ \& \ L=l)\)</span>. Under independence (which is our starting, or null, assumption) <span class="math inline">\(P(M=m \ \& \ L=l) = P(M=m)P(L=l)\)</span>. If we make the additional assumption that the row and column totals (margins) <span class="math inline">\(R_m\)</span> and <span class="math inline">\(C_l\)</span> are fixed, we can calculate <span class="math inline">\(P(M=m) = R_m/N\)</span> and <span class="math inline">\(P(L=l) = C_l/N\)</span>. So now,</p>
<p><span class="math display">\[E_{ml} = \frac{R_m \times C_l}{N}.\]</span> Where does that leave us? We calculate the test statistic <span class="math inline">\(X^2\)</span> and evaluate that statistic in the context of the theoretical sampling distribution suggested by the assumptions of independence <strong>and</strong> fixed marginal totals. That theoretical sampling distribution is <span class="math inline">\(\chi^2\)</span> with some degrees of freedom. If the observed <span class="math inline">\(X^2\)</span> is very large and defies the theoretical distribution (i.e. seems like an outlier), we will reject the notion of independence. (This is just null hypothesis testing using the <span class="math inline">\(X^2\)</span> statistic.)</p>
</div>
<div id="chi-square-tests-of-our-two-tables" class="section level3">
<h3>Chi-square tests of our two tables</h3>
<p>The test statistic from the first table (which I suggest is from a scenario where material and location are independent) is relatively small. We would <em>not</em> conclude that material and location are associated:</p>
<pre><code>## Sum
## 8 23 29 60
## 28 61 91 180
## 39 85 116 240
## 25 31 64 120
## Sum 100 200 300 600</code></pre>
<pre class="r"><code>chisq.test(im)</code></pre>
<pre><code>##
## Pearson's Chi-squared test
##
## data: im
## X-squared = 5.0569, df = 6, p-value = 0.5365</code></pre>
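<p>As a sanity check, the reported statistic can be reproduced directly from the printed table and the formula for <span class="math inline">\(E_{ml}\)</span> (the counts below are transcribed from the output above):</p>

```r
# observed counts from the "independent" table
O <- matrix(c( 8, 23,  29,
              28, 61,  91,
              39, 85, 116,
              25, 31,  64), nrow = 4, byrow = TRUE)

# expected counts under independence: E = (row total * col total) / N
E <- outer(rowSums(O), colSums(O)) / sum(O)

sum((O - E)^2 / E)   # matches X-squared = 5.0569
```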
<p>In the second case, the test statistic <span class="math inline">\(X^2\)</span> is quite large, leading us to conclude that material and location are indeed related, which is as we suspected:</p>
<pre><code>## Sum
## 51 5 4 60
## 22 99 59 180
## 21 40 179 240
## 6 56 58 120
## Sum 100 200 300 600</code></pre>
<pre class="r"><code>chisq.test(dm)</code></pre>
<pre><code>##
## Pearson's Chi-squared test
##
## data: dm
## X-squared = 314.34, df = 6, p-value < 2.2e-16</code></pre>
</div>
<div id="degrees-of-freedom" class="section level2">
<h2>Degrees of freedom</h2>
<p>The paramount question is what <span class="math inline">\(\chi^2\)</span> distribution does <span class="math inline">\(X^2\)</span> have under the independence assumption? If you look at the results of the chi-square tests above, you can see that, under the null hypothesis of independence and fixed margins, these tables have six degrees of freedom, so <span class="math inline">\(X^2 \sim \chi^2_6\)</span>. But, how do we get there? What follows is a series of simulations that starts with an unconditional data generation process and ends with the final set of marginal conditions. The idea is to show that by progressively adding stricter conditions to the assumptions, we continuously reduce variability and lower the degrees of freedom.</p>
<div id="unconditional-contingency-tables" class="section level3">
<h3>Unconditional contingency tables</h3>
<p>If we start with a data generation process based on the <span class="math inline">\(4 \times 3\)</span> table that has no conditions on the margins or total number of items, the statistic <span class="math inline">\(X^2\)</span> is a function of 12 independent Poisson variables. Each cell has an expected value determined by row and column independence. It should follow that <span class="math inline">\(X^2\)</span> will have 12 degrees of freedom. Simulating a large number of tables under these conditions and evaluating the distribution of the calculated <span class="math inline">\(X^2\)</span> statistics will likely support this.</p>
<p>The initial (independent) table specified above is our starting point:</p>
<pre class="r"><code>addmargins(im)</code></pre>
<pre><code>## Sum
## 8 23 29 60
## 28 61 91 180
## 39 85 116 240
## 25 31 64 120
## Sum 100 200 300 600</code></pre>
<pre class="r"><code>row <- margin.table(im, 1)
col <- margin.table(im, 2)
N <- sum(row)</code></pre>
<p>These are the expected values based on the observed row and column totals:</p>
<pre class="r"><code>(expected <- (row/N) %*% t(col/N) * N)</code></pre>
<pre><code>## [,1] [,2] [,3]
## [1,] 10 20 30
## [2,] 30 60 90
## [3,] 40 80 120
## [4,] 20 40 60</code></pre>
<p>Each randomly generated table is a collection of 12 independent Poisson random variables with <span class="math inline">\(\lambda_{ml}\)</span> defined by the “expected” table. The tables are first generated as a collection of columns and stored in a matrix. Here, I am creating 10,000 tables - and printing out the first two in column form:</p>
<pre class="r"><code>set.seed(2021)
(lambdas <- as.vector(t(expected)))</code></pre>
<pre><code>## [1] 10 20 30 30 60 90 40 80 120 20 40 60</code></pre>
<pre class="r"><code>condU <- matrix(rpois(n = 10000 * length(lambdas), lambda = lambdas),
                nrow = length(lambdas))
condU[, 1:2]</code></pre>
<pre><code>## [,1] [,2]
## [1,] 9 15
## [2,] 22 11
## [3,] 31 37
## [4,] 31 25
## [5,] 66 71
## [6,] 71 81
## [7,] 41 50
## [8,] 74 87
## [9,] 138 96
## [10,] 15 20
## [11,] 36 30
## [12,] 71 53</code></pre>
<p>Now, I convert each column to a table and create a “list” of tables. Here are the first two tables with the row and column margins; you can see that even the totals change from table to table:</p>
<pre class="r"><code>condUm <- lapply(seq_len(ncol(condU)),
  function(i) matrix(condU[, i], length(row), length(col), byrow = T))
addmargins(condUm[[1]])</code></pre>
<pre><code>## Sum
## 9 22 31 62
## 31 66 71 168
## 41 74 138 253
## 15 36 71 122
## Sum 96 198 311 605</code></pre>
<pre class="r"><code>addmargins(condUm[[2]])</code></pre>
<pre><code>## Sum
## 15 11 37 63
## 25 71 81 177
## 50 87 96 233
## 20 30 53 103
## Sum 110 199 267 576</code></pre>
<p>A function <code>avgMatrix</code> estimates the average and variance of each of the cells (code can be made available if there is interest). The average of the 10,000 tables mirrors the “expected” table. And since all cells (including the totals) are Poisson distributed, the variance should be quite close to the mean table:</p>
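<p>Since the <code>avgMatrix</code> code isn’t shown, here is a minimal sketch of what such a function might look like - my reconstruction based on how it is used below, not the author’s actual implementation:</p>

```r
# Sketch of avgMatrix: element-wise mean and variance across a list of
# equally sized matrices, optionally adding row/column margins first
avgMatrix <- function(matList, addMarg = FALSE, sLabel = "") {
  if (addMarg) matList <- lapply(matList, addmargins)
  arr <- simplify2array(matList)   # stack matrices into a 3-d array
  list(sampAvg = apply(arr, c(1, 2), mean),
       sampVar = apply(arr, c(1, 2), var),
       label   = sLabel)
}
```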
<pre class="r"><code>sumU <- avgMatrix(condUm, addMarg = T, sLabel = "U")
round(sumU$sampAvg, 0)</code></pre>
<pre><code>## [,1] [,2] [,3] [,4]
## [1,] 10 20 30 60
## [2,] 30 60 90 180
## [3,] 40 80 120 240
## [4,] 20 40 60 120
## [5,] 100 200 300 600</code></pre>
<pre class="r"><code>round(sumU$sampVar, 0)</code></pre>
<pre><code>## [,1] [,2] [,3] [,4]
## [1,] 10 19 30 60
## [2,] 30 61 90 180
## [3,] 40 80 124 244
## [4,] 20 40 60 122
## [5,] 100 199 308 613</code></pre>
<p>Function <code>estX2</code> calculates the <span class="math inline">\(X^2\)</span> statistic for each contingency table:</p>
<pre class="r"><code>estX2 <- function(contMat, expMat) {
  X2 <- sum( (contMat - expMat)^2 / expMat )
  return(X2)
}
X2 <- sapply(condUm, function(x) estX2(x, expected))
head(X2)</code></pre>
<pre><code>## [1] 11.819444 23.162500 17.681944 3.569444 31.123611 14.836111</code></pre>
<p>Comparing the mean and variance of the 10,000 simulated <span class="math inline">\(X^2\)</span> statistics with the mean and variance of data generated from a <span class="math inline">\(\chi^2_{12}\)</span> distribution indicates that the two are quite close:</p>
<pre class="r"><code>trueChisq <- rchisq(10000, 12)
# Comparing means
round(c( mean(X2), mean(trueChisq)), 1)</code></pre>
<pre><code>## [1] 12 12</code></pre>
<pre class="r"><code># Comparing variance
round(c( var(X2), var(trueChisq)), 1)</code></pre>
<pre><code>## [1] 24.5 24.3</code></pre>
</div>
<div id="conditioning-on-n" class="section level3">
<h3>Conditioning on N</h3>
<p>If we assume that the total number of items remains the same from day to day (or sample to sample), but we allow the totals to vary by location and material, we have a constrained contingency table that looks like this:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-chisquare/ConditionN.png" />
</div>
<p>The table total is highlighted in yellow to indicate that <span class="math inline">\(N\)</span> is fixed. The “metal/location 3” cell is also highlighted because once <span class="math inline">\(N\)</span> is fixed and all the other cells are allowed to be randomly generated, that last cell is automatically determined as <span class="math display">\[C_{metal, 3} = N - \sum_{ml \ne (metal \ \& \ 3)} C_{ml}.\]</span> The data generation process that reflects this constraint is the multinomial distribution, which is the multivariate analogue of the binomial distribution. The cell probabilities are set based on the proportions of the independence table:</p>
<pre class="r"><code>round(probs <- expected/N, 2)</code></pre>
<pre><code>## [,1] [,2] [,3]
## [1,] 0.02 0.03 0.05
## [2,] 0.05 0.10 0.15
## [3,] 0.07 0.13 0.20
## [4,] 0.03 0.07 0.10</code></pre>
<p>As before with the unconditional scenario, let’s generate a large number of tables, each conditional on N. I’ll show two tables so you can see that N is indeed constrained:</p>
<pre class="r"><code>condN <- rmultinom(n = 10000, size = N, prob = as.vector(t(probs)))
condNm <- lapply(seq_len(ncol(condN)),
  function(i) matrix(condN[, i], length(row), length(col), byrow = T))
addmargins(condNm[[1]])</code></pre>
<pre><code>## Sum
## 12 16 30 58
## 26 67 83 176
## 36 91 119 246
## 21 40 59 120
## Sum 95 214 291 600</code></pre>
<pre class="r"><code>addmargins(condNm[[2]])</code></pre>
<pre><code>## Sum
## 8 20 19 47
## 30 64 97 191
## 36 84 112 232
## 21 52 57 130
## Sum 95 220 285 600</code></pre>
<p><em>And here is the key point</em>: if we look at the mean of the cell counts across the samples, they mirror the expected values. But, the variances are slightly reduced. We are essentially looking at a subset of the samples generated above that were completely unconstrained, and in this subset the total across all cells equals <span class="math inline">\(N\)</span>. As I <a href="https://www.rdatagen.net/post/a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence/">demonstrated</a> in the last post, this constraint effectively removes samples with more extreme values in some of the cells - which reduces the variance of each cell:</p>
<pre class="r"><code>sumN <- avgMatrix(condNm, sLabel = "N")
round(sumN$sampAvg, 0)</code></pre>
<pre><code>## [,1] [,2] [,3]
## [1,] 10 20 30
## [2,] 30 60 90
## [3,] 40 80 120
## [4,] 20 40 60</code></pre>
<pre class="r"><code>round(sumN$sampVar, 0)</code></pre>
<pre><code>## [,1] [,2] [,3]
## [1,] 10 19 29
## [2,] 28 53 76
## [3,] 37 70 95
## [4,] 19 39 54</code></pre>
<p>We lost one degree of freedom (the one cell highlighted in grey in the table above), so it makes sense to compare the distribution of <span class="math inline">\(X^2\)</span> to a <span class="math inline">\(\chi^2_{11}\)</span>:</p>
<pre class="r"><code>X2 <- sapply(condNm, function(x) estX2(x, expected))
trueChisq <- rchisq(10000, 11)
# Comparing means
round(c( mean(X2), mean(trueChisq)), 1)</code></pre>
<pre><code>## [1] 11 11</code></pre>
<pre class="r"><code># Comparing variance
round(c( var(X2), var(trueChisq)), 1)</code></pre>
<pre><code>## [1] 21.7 22.4</code></pre>
</div>
<div id="conditioning-on-row-totals" class="section level3">
<h3>Conditioning on row totals</h3>
<p>We go one step further - and condition on the row totals (I am going to skip conditioning on the column totals, because conceptually it is the same thing). Now, the row totals in the table are highlighted, and all of the cells in “location 3” are grayed out. Once the row total is set, and the first two elements in each row are generated, the last cell in the row is determined. We are losing four degrees of freedom.</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-chisquare/ConditionRow.png" />
</div>
<p>These tables can be generated again using the multinomial distribution, but each row of the table is generated individually. The cell probabilities are all based on the overall column proportions:</p>
<pre class="r"><code>round(prob <- col/N, 2)</code></pre>
<pre><code>## [1] 0.17 0.33 0.50</code></pre>
<p>The rows are generated individually based on the fixed total number of items in each row. Two of the tables are shown to confirm that the generated tables have the same row totals:</p>
<pre class="r"><code>condRow <- lapply(seq_len(length(row)),
  function(i) t(rmultinom(10000, size = row[i], prob = prob)))

condRm <- lapply(seq_len(10000),
  function(i) do.call(rbind, lapply(condRow, function(x) x[i, ])))
addmargins(condRm[[1]])</code></pre>
<pre><code>## Sum
## 5 19 36 60
## 32 55 93 180
## 39 76 125 240
## 19 42 59 120
## Sum 95 192 313 600</code></pre>
<pre class="r"><code>addmargins(condRm[[2]])</code></pre>
<pre><code>## Sum
## 11 19 30 60
## 36 52 92 180
## 44 74 122 240
## 16 41 63 120
## Sum 107 186 307 600</code></pre>
<p>This time around, the variance of the cells is reduced even further:</p>
<pre class="r"><code>sumR <- avgMatrix(condRm, sLabel = "R")
round(sumR$sampAvg, 0)</code></pre>
<pre><code>## [,1] [,2] [,3]
## [1,] 10 20 30
## [2,] 30 60 90
## [3,] 40 80 120
## [4,] 20 40 60</code></pre>
<pre class="r"><code>round(sumR$sampVar, 0)</code></pre>
<pre><code>## [,1] [,2] [,3]
## [1,] 8 14 15
## [2,] 26 41 46
## [3,] 34 53 59
## [4,] 17 27 31</code></pre>
<p>And let’s compare the distribution of the sample <span class="math inline">\(X^2\)</span> statistics with the <span class="math inline">\(\chi^2_8\)</span> distribution (since we now have <span class="math inline">\(12 - 4 = 8\)</span> degrees of freedom):</p>
<pre class="r"><code>X2 <- sapply(condRm, function(x) estX2(x, expected))
trueChisq <- rchisq(10000, 8)
# Comparing means
round(c( mean(X2), mean(trueChisq)), 1)</code></pre>
<pre><code>## [1] 8.1 8.0</code></pre>
<pre class="r"><code># Comparing variance
round(c( var(X2), var(trueChisq)), 1)</code></pre>
<pre><code>## [1] 16.4 16.4</code></pre>
</div>
<div id="conditioning-on-both-row-and-column-totals" class="section level3">
<h3>Conditioning on both row and column totals</h3>
<p>Here we are at the grand finale, the actual chi-square test of independence, where we condition on both the row and column totals. The whole point of this is to show that once we set this condition, the variance of the cells is reduced far below the Poisson variance. As a result, we must use a <span class="math inline">\(\chi^2\)</span> distribution with fewer degrees of freedom when evaluating the <span class="math inline">\(X^2\)</span> test statistic.</p>
<p>This final table shows both the constraints on the row and column totals, and the impact on the specific cell. The six grayed out cells are determined by the column totals once the six other cells are generated. That is, we lose six degrees of freedom. (Maybe you can now see where the <span class="math inline">\(degrees \ of \ freedom = (\# \ rows - 1) \times (\# \ cols - 1)\)</span> comes from?)</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-chisquare/ConditionRC.png" />
</div>
<p>The process for generating data for a table where both the row totals and column totals are fixed is interesting, and I actually wrote some pretty inefficient code that was based on a simple algorithm tied to the multivariate hypergeometric distribution, which was described <a href="https://blogs.sas.com/content/iml/2015/10/21/simulate-contingency-tables-fixed-sums-sas.html">here</a>. Luckily, just as I started writing this section, I stumbled upon the R function <code>r2dtable</code>. (Not sure why I didn’t find it right away, but was glad to have found it in any case.) So, with a single line, 10,000 tables can be very quickly generated.</p>
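<p>For the curious, one simple (if less efficient) way to generate a single table with both margins fixed is to permute a vector of column labels and partition it by the row totals. This is a sketch of the idea, not the author’s original code:</p>

```r
# Generate one contingency table with fixed row and column totals by
# randomly assigning column labels to items and splitting them into rows
rtableFixed <- function(rowTot, colTot) {
  labs <- sample(rep(seq_along(colTot), colTot))  # shuffled column labels
  grp <- rep(seq_along(rowTot), rowTot)           # fixed row membership
  table(grp, factor(labs, levels = seq_along(colTot)))
}

tab <- rtableFixed(c(60, 180, 240, 120), c(100, 200, 300))
rowSums(tab)   # 60 180 240 120
colSums(tab)   # 100 200 300
```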
<pre class="r"><code>condRCm <- r2dtable(10000, row, col)</code></pre>
<p>Here are the first two generated tables:</p>
<pre class="r"><code>addmargins(condRCm[[1]])</code></pre>
<pre><code>## Sum
## 14 12 34 60
## 24 64 92 180
## 38 79 123 240
## 24 45 51 120
## Sum 100 200 300 600</code></pre>
<pre class="r"><code>addmargins(condRCm[[2]])</code></pre>
<pre><code>## Sum
## 7 23 30 60
## 30 60 90 180
## 38 78 124 240
## 25 39 56 120
## Sum 100 200 300 600</code></pre>
<p>And with this most restrictive set of conditioning constraints, the variances of the cell counts are considerably lower than when conditioning on row or column totals alone:</p>
<pre class="r"><code>sumRC <- avgMatrix(condRCm, sLabel = "RC")
round(sumRC$sampAvg, 0)</code></pre>
<pre><code>## [,1] [,2] [,3]
## [1,] 10 20 30
## [2,] 30 60 90
## [3,] 40 80 120
## [4,] 20 40 60</code></pre>
<pre class="r"><code>round(sumRC$sampVar, 0)</code></pre>
<pre><code>## [,1] [,2] [,3]
## [1,] 8 12 13
## [2,] 18 28 31
## [3,] 20 32 36
## [4,] 13 22 24</code></pre>
<p>And, take a look at the mean and variance of the <span class="math inline">\(X^2\)</span> statistic as it compares to the mean and variance of the <span class="math inline">\(\chi^2_6\)</span> distribution:</p>
<pre class="r"><code>X2 <- sapply(condRCm, function(x) estX2(x, expected))
trueChisq <- rchisq(10000, 6)
# Comparing means
round(c( mean(X2), mean(trueChisq)), 1)</code></pre>
<pre><code>## [1] 6 6</code></pre>
<pre class="r"><code># Comparing variance
round(c( var(X2), var(trueChisq)), 1)</code></pre>
<pre><code>## [1] 11.8 11.7</code></pre>
<p>I’ll leave you with a plot of the cell counts for each of the 10,000 tables generated in each of the conditioning scenarios: unconditional (U), conditional on N (N), conditional on row totals (R), and conditional on both row and column totals (RC). This plot confirms the point I’ve been trying to make in this post and the last: <em>adding more and more restrictive conditions progressively reduces variability within each cell</em>. The reduction in degrees of freedom in the chi-square test is the direct consequence of this reduction in within-cell variability.</p>
<p><img src="https://www.rdatagen.net/post/2018-03-25-a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence_files/figure-html/unnamed-chunk-25-1.png" width="576" /></p>
</div>
</div>
Exploring the underlying theory of the chi-square test through simulation - part 1
https://www.rdatagen.net/post/a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence/
Sun, 18 Mar 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence/<p>Kids today are so sophisticated (at least they are in New York City, where I live). While I didn’t hear about the chi-square test of independence until my first stint in graduate school, they’re already talking about it in high school. When my kids came home and started talking about it, I did what I usually do when they come home asking about a new statistical concept. I opened up R and started generating some data. Of course, they rolled their eyes, but when the evening was done, I had something that might illuminate some of what underlies the theory of this ubiquitous test.</p>
<p>Actually, I created enough simulations to justify two posts - so this is just part 1, focusing on the <span class="math inline">\(\chi^2\)</span> distribution and its relationship to the Poisson distribution. Part 2 will consider contingency tables, where we are often interested in understanding the nature of the relationship between two categorical variables. More on that the next time.</p>
<div id="the-chi-square-distribution" class="section level3">
<h3>The chi-square distribution</h3>
<p>The chi-square (or <span class="math inline">\(\chi^2\)</span>) distribution can be described in many ways (for example as a special case of the Gamma distribution), but it is most intuitively characterized in relation to the standard normal distribution, <span class="math inline">\(N(0,1)\)</span>. The <span class="math inline">\(\chi^2_k\)</span> distribution has a single parameter <span class="math inline">\(k\)</span> which represents the <em>degrees of freedom</em>. If <span class="math inline">\(U\)</span> is standard normal, (i.e <span class="math inline">\(U \sim N(0,1)\)</span>), then <span class="math inline">\(U^2\)</span> has a <span class="math inline">\(\chi^2_1\)</span> distribution. If <span class="math inline">\(V\)</span> is also standard normal, then <span class="math inline">\((U^2 + V^2) \sim \chi^2_2\)</span>. That is, if we add two squared standard normal random variables, the distribution of the sum is chi-squared with 2 degrees of freedom. More generally, <span class="math display">\[\sum_{j=1}^k X^2_j \sim \chi^2_k,\]</span></p>
<p>where each <span class="math inline">\(X_j \sim N(0,1)\)</span>.</p>
<p>The following code defines a data set with two standard normal random variables, their squares, and the sum of the squares:</p>
<pre class="r"><code>library(simstudy)
def <- defData(varname = "x", formula = 0, variance = 1, dist = "normal")
def <- defData(def, "chisq1df", formula = "x^2", dist = "nonrandom")
def <- defData(def, "y", formula = 0, variance = 1, dist = "normal")
def <- defData(def, "chisq2df",
formula = "(x^2) + (y^2)", dist = "nonrandom")
set.seed(2018)
dt <- genData(10000, def)
dt[1:5,]</code></pre>
<pre><code>## id x chisq1df y chisq2df
## 1: 1 -0.42298398 0.178915450 0.05378131 0.181807879
## 2: 2 -1.54987816 2.402122316 0.70312385 2.896505464
## 3: 3 -0.06442932 0.004151137 -0.07412058 0.009644997
## 4: 4 0.27088135 0.073376707 -1.09181873 1.265444851
## 5: 5 1.73528367 3.011209400 -0.79937643 3.650212075</code></pre>
<p>The standard normal has mean zero and variance one. Approximately 95% of the values are expected to fall within two standard deviations of zero. Here is your classic “bell” curve:</p>
<p><img src="https://www.rdatagen.net/post/2018-03-18-a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence_files/figure-html/unnamed-chunk-2-1.png" width="672" /></p>
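<p>The 95% figure can be verified directly from the normal CDF:</p>

```r
# probability that a standard normal falls within 2 SDs of the mean
round(pnorm(2) - pnorm(-2), 3)   # 0.954
```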
<p>Since the statistic <span class="math inline">\(X^2\)</span> (try not to confuse <span class="math inline">\(X^2\)</span> and <span class="math inline">\(\chi^2\)</span>, unfortunate I know) is a sum of squared continuous random variables and is always greater than or equal to zero, the <span class="math inline">\(\chi^2\)</span> is a distribution of positive, continuous measures. Here is a histogram of <code>chisq1df</code> from the data set <code>dt</code>, which has a <span class="math inline">\(\chi^2_1\)</span> distribution:</p>
<p><img src="https://www.rdatagen.net/post/2018-03-18-a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence_files/figure-html/unnamed-chunk-3-1.png" width="672" /></p>
<p>And here is a plot of <code>chisq2df</code>, which has two degrees of freedom, and has a <span class="math inline">\(\chi^2_2\)</span> distribution. Unsurprisingly, since we are adding positive numbers, we start to see values further away from zero:</p>
<p><img src="https://www.rdatagen.net/post/2018-03-18-a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence_files/figure-html/unnamed-chunk-4-1.png" width="672" /></p>
<p>Just to show that the data we generated by adding two squared standard normal random variables is actually distributed as a <span class="math inline">\(\chi^2_2\)</span>, we can generate data from this distribution directly, and overlay the plots:</p>
<pre class="r"><code>actual_chisq2 <- rchisq(10000, 2)</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-03-18-a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence_files/figure-html/unnamed-chunk-6-1.png" width="672" /></p>
</div>
<div id="recycling-and-the-poisson-distribution" class="section level3">
<h3>Recycling and the Poisson distribution</h3>
<p>When we talk about counts, we are often dealing with a Poisson distribution. An example I use below is the number of glass bottles that end up in an apartment building’s recycling bin every day (as I mentioned, I do live in New York City). The Poisson distribution is a non-negative, discrete distribution that is characterized by a single parameter <span class="math inline">\(\lambda\)</span>. If <span class="math inline">\(H \sim Poisson(\lambda)\)</span>, then <span class="math inline">\(E(H) = Var(H) = \lambda\)</span>.</p>
<pre class="r"><code>def <- defData(varname = "h", formula = 40, dist = "poisson")
dh <- genData(10000, def)
round(dh[, .(avg = mean(h), var = var(h))], 1)</code></pre>
<pre><code>## avg var
## 1: 40.1 40</code></pre>
<p>To standardize a <em>normally</em> distributed variable (such as <span class="math inline">\(W \sim N(\mu,\sigma^2)\)</span>), we subtract the mean and divide by the standard deviation:</p>
<p><span class="math display">\[ W_i^{s} = \frac{W_i - \mu}{\sigma},\]</span></p>
<p>and <span class="math inline">\(W^s \sim N(0,1)\)</span>. Analogously, to standardize a Poisson variable we do the same, since <span class="math inline">\(\lambda\)</span> is both the mean and the variance:</p>
<p><span class="math display">\[ S_{i} = \frac{H_i - \lambda}{\sqrt{\lambda}}\]</span></p>
<p>The distribution of this standardized variable <span class="math inline">\(S\)</span> will be close to a standard normal. We can generate some data and check this out. In this case, the mean and variance of the Poisson variable are both 40:</p>
<pre class="r"><code>defA <- defDataAdd(varname = "s", formula = "(h-40)/sqrt(40)",
dist = "nonrandom")
dh <- addColumns(defA, dh)
dh[1:5, ]</code></pre>
<pre><code>## id h s
## 1: 1 34 -0.9486833
## 2: 2 44 0.6324555
## 3: 3 37 -0.4743416
## 4: 4 46 0.9486833
## 5: 5 42 0.3162278</code></pre>
<p>The mean and variance of the standardized data do suggest a standardized normal distribution:</p>
<pre class="r"><code>round(dh[ , .(mean = mean(s), var = var(s))], 1)</code></pre>
<pre><code>## mean var
## 1: 0 1</code></pre>
<p>Overlaying the plots of the standardized Poisson distribution with the standard normal distribution, we can see that they <em>are</em> quite similar:</p>
<p><img src="https://www.rdatagen.net/post/2018-03-18-a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence_files/figure-html/unnamed-chunk-10-1.png" width="672" /></p>
<p>Since the standardized Poisson is roughly standard normal, the square of the standardized Poisson should be roughly <span class="math inline">\(\chi^2_1\)</span>. If we square the standardized Poisson variable, this is what we have:</p>
<p><span class="math display">\[ S_i^2 = \frac{(H_i - \lambda)^2}{\lambda}\]</span></p>
<p>Or maybe in a more familiar form (think Pearson):</p>
<p><span class="math display">\[ S_i^2 = \frac{(O_i - E_i)^2}{E_i},\]</span></p>
<p>where <span class="math inline">\(O_i\)</span> is the observed value and <span class="math inline">\(E_i\)</span> is the expected value. Since <span class="math inline">\(\lambda\)</span> is the expected value (and variance) of the Poisson random variable, the two formulations are equivalent.</p>
<p>Adding the transformed data to the data set, and calculating the mean and variance, it is apparent that these observations are close to a <span class="math inline">\(\chi^2_1\)</span> distribution:</p>
<pre class="r"><code>defA <- defDataAdd(varname = "h.chisq", formula = "(h-40)^2/40",
dist = "nonrandom")
dh <- addColumns(defA, dh)
round(dh[, .(avg = mean(h.chisq), var = var(h.chisq))], 2)</code></pre>
<pre><code>## avg var
## 1: 1 1.97</code></pre>
<pre class="r"><code>actual_chisq1 <- rchisq(10000, 1)
round(c(avg = mean(actual_chisq1), var = var(actual_chisq1)), 2)</code></pre>
<pre><code>## avg var
## 0.99 2.04</code></pre>
<p>Once again, an overlay of the two distributions based on the data we generated shows that this is plausible:</p>
<p><img src="https://www.rdatagen.net/post/2018-03-18-a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence_files/figure-html/unnamed-chunk-12-1.png" width="672" /></p>
<p>Just for fun, let’s repeatedly generate 10 Poisson variables each with its own value of <span class="math inline">\(\lambda\)</span> and calculate <span class="math inline">\(X^2\)</span> for each iteration to compare with data generated from a <span class="math inline">\(\chi^2_{10}\)</span> distribution:</p>
<pre class="r"><code>nObs <- 10000
nMeasures <- 10
lambdas <- rpois(nMeasures, 50)
poisMat <- matrix(rpois(n = nMeasures*nObs, lambda = lambdas),
ncol = nMeasures, byrow = T)
poisMat[1:5,]</code></pre>
<pre><code>## [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
## [1,] 48 51 49 61 59 51 67 35 43 39
## [2,] 32 58 49 67 57 35 69 40 57 55
## [3,] 44 50 60 56 57 49 68 49 48 32
## [4,] 44 44 42 49 52 50 63 39 51 38
## [5,] 42 38 62 57 62 40 68 34 41 58</code></pre>
<p>Each column (variable) has its own mean and variance:</p>
<pre class="r"><code>rbind(lambdas,
mean = apply(poisMat, 2, function(x) round(mean(x), 0)),
var = apply(poisMat, 2, function(x) round(var(x), 0))
)</code></pre>
<pre><code>## [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
## lambdas 43 45 51 61 55 46 62 35 48 47
## mean 43 45 51 61 55 46 62 35 48 47
## var 43 46 51 61 55 46 62 35 47 47</code></pre>
<p>Calculate <span class="math inline">\(X^2\)</span> for each iteration (i.e. each row of the matrix <code>poisMat</code>), and estimate the mean and variance across all values of <span class="math inline">\(X^2\)</span>:</p>
<pre class="r"><code>X2 <- sapply(seq_len(nObs),
function(x) sum((poisMat[x,] - lambdas)^2 / lambdas))
round(c(mean(X2), var(X2)), 1)</code></pre>
<pre><code>## [1] 10.0 20.2</code></pre>
<p>The true <span class="math inline">\(\chi^2\)</span> distribution with 10 degrees of freedom:</p>
<pre class="r"><code>chisqkdf <- rchisq(nObs, nMeasures)
round(c(mean(chisqkdf), var(chisqkdf)), 1)</code></pre>
<pre><code>## [1] 10.0 19.8</code></pre>
<p>These simulations strongly suggest that summing the squares of independent standardized Poisson variables generates a statistic that has a <span class="math inline">\(\chi^2\)</span> distribution.</p>
</div>
<div id="the-consequences-of-conditioning" class="section level3">
<h3>The consequences of conditioning</h3>
<p>If we find ourselves in the situation where we have some number of bins or containers or cells into which we are throwing a <em>fixed</em> number of something, we are no longer in the realm of independent, unconditional Poisson random variables. This has implications for our <span class="math inline">\(X^2\)</span> statistic.</p>
<p>As an example, say we have those recycling bins again (this time five) and a total of 100 glass bottles. If each bottle has an equal chance of ending up in any of the five bins, we would expect on average 20 bottles to end up in each. Typically, we highlight the fact that under this constraint (of having 100 bottles) information about four of the bins is the same as having information about all five. If I tell you that the first four bins contain a total of 84 bottles, we know that the last bin must have exactly 16. Actually counting those bottles in the fifth bin provides <em>no</em> additional information. In this case (where we really only have 4 pieces of information, and not the 5 we are looking at), we say we have lost 1 degree of freedom due to the constraint. This loss gets translated into the chi-square test.</p>
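<p>This loss can be seen directly with a quick simulation. Here is a minimal sketch using base R's <code>rmultinom</code>, which fixes the total by construction:</p>
<pre class="r"><code># with a fixed total of 100 bottles and equal probabilities, the count in
# the fifth bin is fully determined by the counts in the first four
set.seed(101)
bins <- rmultinom(1, size = 100, prob = rep(0.2, 5))[, 1]
sum(bins)                         # always exactly 100
bins[5] == 100 - sum(bins[1:4])   # always TRUE</code></pre>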
<p>I want to explore more concretely how the constraint on the total number of bottles affects the distribution of the <span class="math inline">\(X^2\)</span> statistic and ultimately the chi-square test.</p>
<div id="unconditional-counting" class="section level4">
<h4>Unconditional counting</h4>
<p>Consider a simpler example of three glass recycling bins in three different buildings. We know that, on average, the bin in building 1 typically has 20 bottles deposited daily, the bin in building 2 usually has 40, and the bin in building 3 has 80. The number of bottles in each bin is Poisson distributed, with <span class="math inline">\(\lambda_i, \ i \in \{1,2, 3\}\)</span> equal to 20, 40, and 80, respectively. Note, while we would expect on average 140 total bottles across the three buildings, some days we have fewer, some days we have more - all depending on what happens in each individual building. The total is also Poisson distributed with <span class="math inline">\(\lambda_{total} = 140\)</span>.</p>
<p>Let’s generate 10,000 days worth of data (under the assumption that bottle disposal patterns are consistent over a very long time, a dubious assumption).</p>
<pre class="r"><code>library(simstudy)
def <- defData(varname = "bin_1", formula = 20, dist = "poisson")
def <- defData(def, "bin_2", formula = 40, dist = "poisson")
def <- defData(def, "bin_3", formula = 80, dist = "poisson")
def <- defData(def, varname = "N",
formula = "bin_1 + bin_2 + bin_3",
dist = "nonrandom")
set.seed(1234)
dt <- genData(10000, def)
dt[1:5, ]</code></pre>
<pre><code>## id bin_1 bin_2 bin_3 N
## 1: 1 14 44 59 117
## 2: 2 21 36 81 138
## 3: 3 21 34 68 123
## 4: 4 16 43 81 140
## 5: 5 22 44 86 152</code></pre>
<p>The means and variances are as expected:</p>
<pre class="r"><code>round(dt[ ,.(mean(bin_1), mean(bin_2), mean(bin_3))], 1)</code></pre>
<pre><code>## V1 V2 V3
## 1: 20 39.9 80.1</code></pre>
<pre class="r"><code>round(dt[ ,.(var(bin_1), var(bin_2), var(bin_3))], 1)</code></pre>
<pre><code>## V1 V2 V3
## 1: 19.7 39.7 80.6</code></pre>
<p>This plot shows the actual numbers of bottles in each bin in each building over the 10,000 days:</p>
<p><img src="https://www.rdatagen.net/post/2018-03-18-a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence_files/figure-html/unnamed-chunk-19-1.png" width="672" /></p>
<p>There is also quite a lot of variability in the daily totals calculated by adding up the bins across the three buildings. (While the mean and variance make it clear that this total has a <span class="math inline">\(Poisson(140)\)</span> distribution, the plot looks quite symmetrical; as <span class="math inline">\(\lambda\)</span> increases, the Poisson distribution becomes well approximated by the normal distribution.)</p>
<pre class="r"><code>round(dt[, .(avgN = mean(N), varN = var(N))], 1)</code></pre>
<pre><code>## avgN varN
## 1: 140 139.6</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-03-18-a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence_files/figure-html/unnamed-chunk-21-1.png" width="672" /></p>
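<p>We can check the quality of that normal approximation directly by comparing the two cumulative distribution functions (a quick sketch in base R, using a continuity correction):</p>
<pre class="r"><code># compare the Poisson(140) cdf with a continuity-corrected N(140, 140) cdf
lambda <- 140
x <- 90:190
maxdiff <- max(abs(ppois(x, lambda) - pnorm(x + 0.5, lambda, sqrt(lambda))))
maxdiff   # quite small for a lambda this large</code></pre>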
</div>
<div id="conditional-counting" class="section level4">
<h4>Conditional counting</h4>
<p>Now, let’s say that the three bins are actually in the <em>same</em> (very large) building, located in different rooms in the basement, just to make it more convenient for residents (in case you are wondering, my bins are right next to the service elevator). But, let’s also make the assumption (and condition) that there are always between 138 and 142 total bottles on any given day. The expected values for each bin remain 20, 40, and 80, respectively.</p>
<p>We calculate the total number of bottles every day and identify all cases where the sum of the three bins is within the fixed range. For this subset of the sample, we see that the means are unchanged:</p>
<pre class="r"><code>defAdd <- defDataAdd(varname = "condN",
formula = "(N >= 138 & N <= 142)",
dist = "nonrandom")
dt <- addColumns(defAdd, dt)
round(dt[condN == 1,
.(mean(bin_1), mean(bin_2), mean(bin_3))],
1)</code></pre>
<pre><code>## V1 V2 V3
## 1: 20.1 40 79.9</code></pre>
<p>However, <strong>and this is really the key point</strong>, the variance of the sample (which is conditional on the sum being between 138 and 142) is reduced:</p>
<pre class="r"><code>round(dt[condN == 1,
.(var(bin_1), var(bin_2), var(bin_3))],
1)</code></pre>
<pre><code>## V1 V2 V3
## 1: 17.2 28.3 35.4</code></pre>
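<p>This reduction is not an accident. It is a standard result that independent Poisson counts, conditioned on their total, follow a multinomial distribution, so each bin (conditional on a total of exactly 140) is binomial with variance <span class="math inline">\(n p_i(1-p_i)\)</span>. A quick sketch of the implied conditional variances:</p>
<pre class="r"><code># conditional on N = 140, bin i is binomial(140, p_i), with p_i = lambda_i/140
n <- 140
p <- c(20, 40, 80) / 140
round(n * p * (1 - p), 1)   # 17.1 28.6 34.3</code></pre>
<p>These theoretical values line up well with the sample variances estimated above (which condition on a narrow range rather than an exact total).</p>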
<p>The red points in the plot below represent all daily totals <span class="math inline">\(\sum_i bin_i\)</span> that fall between 138 and 142 bottles. Their spread from top to bottom is narrower than that of the rest of the (unconstrained) sample, an indication that the variance for this conditional scenario is smaller:</p>
<p><img src="https://www.rdatagen.net/post/2018-03-18-a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence_files/figure-html/unnamed-chunk-24-1.png" width="672" /></p>
<p>Not surprisingly, the distribution of the totals across the bins is quite narrow. But, this is almost a tautology, since this is how we defined the sample:</p>
<p><img src="https://www.rdatagen.net/post/2018-03-18-a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence_files/figure-html/unnamed-chunk-25-1.png" width="672" /></p>
</div>
</div>
<div id="biased-standardization" class="section level3">
<h3>Biased standardization</h3>
<p>And here is the grand finale of part 1. When we calculate <span class="math inline">\(X^2\)</span> using the standard formula under a constrained data generating process, we are not dividing by the proper variance. We just saw that the conditional variance within each bin is smaller than the variance of the unconstrained Poisson distribution. So, <span class="math inline">\(X^2\)</span>, as defined by</p>
<p><span class="math display">\[ X^2 = \sum_{i=1}^{k \ bins} {\frac{(O_i - E_i)^2}{E_i}}\]</span></p>
<p>is no longer a sum of squares of approximately standard normal variables - the variance used in the formula is too high. <span class="math inline">\(X^2\)</span> will tend to be smaller than a <span class="math inline">\(\chi^2_k\)</span> variable. How much smaller? Well, if the constraint is even tighter, limited to where the total equals exactly 140 bottles every day, <span class="math inline">\(X^2\)</span> has a <span class="math inline">\(\chi^2_{k-1}\)</span> distribution.</p>
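<p>A quick way to see this is to fix the total at exactly 140 every day with a multinomial draw and compute <span class="math inline">\(X^2\)</span> for each day (a sketch using base R):</p>
<pre class="r"><code># with the daily total fixed at exactly 140, the mean and variance of X2
# should be close to those of a chi-square with k - 1 = 2 degrees of freedom
set.seed(101)
lambda <- c(20, 40, 80)
counts <- rmultinom(10000, size = 140, prob = lambda / 140)
X2 <- colSums((counts - lambda)^2 / lambda)
round(c(mean(X2), var(X2)), 1)   # close to 2 and 4</code></pre>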
<p>Even using our slightly looser constraint of fixing the total between 138 and 142, the distribution is quite close to a <span class="math inline">\(\chi^2_2\)</span> distribution:</p>
<pre class="r"><code>defA <- defDataAdd(varname = "X2.1",
formula = "(bin_1-20)^2 / 20", dist = "nonrandom")
defA <- defDataAdd(defA, "X2.2",
formula = "(bin_2-40)^2 / 40", dist = "nonrandom")
defA <- defDataAdd(defA, "X2.3",
formula = "(bin_3-80)^2 / 80", dist = "nonrandom")
defA <- defDataAdd(defA, "X2",
formula = "X2.1 + X2.2 + X2.3", dist = "nonrandom")
dt <- addColumns(defA, dt)</code></pre>
<p>Comparison with <span class="math inline">\(\chi^2_3\)</span> shows clear bias:</p>
<p><img src="https://www.rdatagen.net/post/2018-03-18-a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence_files/figure-html/unnamed-chunk-27-1.png" width="672" /></p>
<p>Here it is with a <span class="math inline">\(\chi^2_2\)</span> distribution:</p>
<p><img src="https://www.rdatagen.net/post/2018-03-18-a-little-intuition-and-simulation-behind-the-chi-square-test-of-independence_files/figure-html/unnamed-chunk-28-1.png" width="672" /></p>
</div>
<div id="recycling-more-than-glass" class="section level3">
<h3>Recycling more than glass</h3>
<p>Part 2 will extend this discussion to the contingency table, which is essentially a 2-dimensional array of bins. If we have different types of materials to recycle - glass bottles, plastic containers, cardboard boxes, and metal cans - we need four bins at each location. We might be interested in knowing if the distribution of these four materials is different across the 3 different locations - this is where the chi-square test for independence can be useful.</p>
<p>As an added bonus, you can expect to see lots of code that allows you to simulate contingency tables under different assumptions of conditioning. I know my kids are psyched.</p>
</div>
Another reason to be careful about what you control for
https://www.rdatagen.net/post/another-reason-to-be-careful-about-what-you-control-for/
Wed, 07 Mar 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/another-reason-to-be-careful-about-what-you-control-for/<p>Modeling data without any underlying causal theory can sometimes lead you down the wrong path, particularly if you are interested in understanding the <em>way</em> things work rather than making <em>predictions.</em> A while back, I <a href="https://www.rdatagen.net/post/be-careful/">described</a> what can go wrong when you control for a mediator when you are interested in an exposure and an outcome. Here, I describe the potential biases that are introduced when you inadvertently control for a variable that turns out to be a <strong><em>collider</em></strong>.</p>
<p>A collider, like a mediator, is a post-exposure/post-intervention outcome. Unlike a mediator, a collider is not necessarily causally related to the outcome of interest. (This is not to say that it cannot be, which is why this concept came up in a talk I gave about marginal structural models, described <a href="https://www.rdatagen.net/post/potential-outcomes-confounding/">here</a>, <a href="https://www.rdatagen.net/post/inverse-probability-weighting-when-the-outcome-is-binary/">here</a>, and <a href="https://www.rdatagen.net/post/when-a-covariate-is-a-confounder-and-a-mediator/">here</a>.) The key distinction of a collider is that it is an outcome that has two causes. In a directed acyclic graph (or <a href="http://www.dagitty.net/learn/index.html">DAG</a>), a collider is a variable with two arrows pointing towards it. This is easier to see visually:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-collider/Collider_names.png" />
</div>
<p>In this (admittedly thoroughly made-up though not entirely implausible) network diagram, the <em>test score</em> outcome is a collider, influenced by a <em>test preparation</em> class and <em>socio-economic status</em> (SES). In particular, both the test prep course and high SES are related to the probability of having a high test score. One might expect an arrow of some sort to connect SES and the test prep class; in this case, participation in test prep is randomized so there is no causal link (and I am assuming that everyone randomized to the class actually takes it, a compliance issue I addressed in a series of posts starting with <a href="https://www.rdatagen.net/post/cace-explored/">this one</a>.)</p>
<p>The researcher who carried out the randomization had a hypothesis that test prep actually is detrimental to college success down the road, because it de-emphasizes deep thinking in favor of rote memorization. In reality, it turns out that the course and subsequent college success are not related, indicated by an <em>absence</em> of a connection between the course and the long term outcome.</p>
<div id="simulate-data" class="section level3">
<h3>Simulate data</h3>
<p>We can simulate data from this hypothetical world (using functions from package <code>simstudy</code>):</p>
<pre class="r"><code># define data
library(simstudy)
defCollide <- defData(varname = "SES",
formula = "0;1",
dist = "uniform")
defCollide <- defData(defCollide, varname = "testPrep",
formula = 0.5,
dist = "binary")
defCollide <- defData(defCollide, varname = "highScore",
formula = "-1.2 + 3*SES + 3*testPrep",
dist = "binary", link="logit")
defCollide <- defData(defCollide, varname = "successMeasure",
formula = "20 + SES*40", variance = 9,
dist = "normal")
defCollide</code></pre>
<pre><code>## varname formula variance dist link
## 1: SES 0;1 0 uniform identity
## 2: testPrep 0.5 0 binary identity
## 3: highScore -1.2 + 3*SES + 3*testPrep 0 binary logit
## 4: successMeasure 20 + SES*40 9 normal identity</code></pre>
<pre class="r"><code># generate data
set.seed(139)
dt <- genData(1500, defCollide)
dt[1:6]</code></pre>
<pre><code>## id SES testPrep highScore successMeasure
## 1: 1 0.52510665 1 1 40.89440
## 2: 2 0.31565690 0 1 34.72037
## 3: 3 0.47978492 1 1 41.79532
## 4: 4 0.19114934 0 0 30.05569
## 5: 5 0.06889896 0 0 21.28575
## 6: 6 0.10139604 0 0 21.30306</code></pre>
<p>We can see that the distribution of the long-term (continuous) success outcome is the same for those who are randomized to test prep compared to those who are not, indicating there is no causal relationship between the test and the college outcome:</p>
<p><img src="https://www.rdatagen.net/post/2018-03-07-another-reason-to-be-careful-about-what-you-control-for_files/figure-html/unnamed-chunk-4-1.png" width="768" /></p>
<p>An unadjusted linear model leads us to the same conclusion, since the parameter estimate representing the treatment effect is quite small (and the hypothesis test is not statistically significant):</p>
<pre class="r"><code>library(broom)
rndtidy( lm(successMeasure ~ testPrep, data = dt))</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 40.112 0.44 91.209 0.000
## 2 testPrep -0.495 0.61 -0.811 0.418</code></pre>
</div>
<div id="but-dont-we-need-to-adjust-for-some-measure-of-intellectual-ability" class="section level3">
<h3>But, don’t we need to adjust for some measure of intellectual ability?</h3>
<p>Or so the researcher might ask after looking at the initial results, questioning the model. He believes that differences in ability could be related to future outcomes. While this may be the case, the question isn’t about ability but the impact of test prep. Based on his faulty logic, the researcher decides to fit a second model and control for the test score that followed the experiment. And this is where things go awry. Take a look at the following model where there appears to be a relationship between test prep and college success after controlling for the test score:</p>
<pre class="r"><code># adjusted model
rndtidy( lm(successMeasure ~ highScore + testPrep, data = dt))</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 35.525 0.619 57.409 0
## 2 highScore 8.027 0.786 10.207 0
## 3 testPrep -3.564 0.662 -5.380 0</code></pre>
<p>It does indeed appear that the test prep course is causing problems for real learning in college later on!</p>
</div>
<div id="what-is-going-on" class="section level3">
<h3>What is going on?</h3>
<p>Because the test score (here I am treating it as binary - either a high score or not) is related to both SES and test prep, the fact that someone does well on the test is due either to the fact that the student took the course, has high SES, or both. But, let’s consider the students who are possibly high SES or maybe took the course, but not both, <strong><em>and</em></strong> who had a high test score. If a student is low SES, she probably took the course, or if she did not take the course, she is probably high SES. So, within the group that scored well, SES and the probability of taking the course are slightly negatively correlated.</p>
<p>If we “control” for test scores in the model, we are essentially comparing students within two distinct groups - those who scored well and those who did not. The updated network diagram shows a relationship between SES and test prep that didn’t exist before. This is the induced relationship we get by controlling for a collider. (Control is shown in the diagram by removing the connection of SES and test prep to the test score.)</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-collider/Collider_names_adjust.png" />
</div>
<p>If we look at the entire sample and compare the SES distribution (which is a continuous measure uniformly distributed between 0 and 1) for each test prep group, we see that both groups have the same distribution (i.e. there is no relationship):</p>
<p><img src="https://www.rdatagen.net/post/2018-03-07-another-reason-to-be-careful-about-what-you-control-for_files/figure-html/unnamed-chunk-7-1.png" width="768" /></p>
<p>But if we look at the relationship between SES and test prep within each test score group, the distributions no longer completely overlap - within each test score group, there is a relationship between SES and test prep.</p>
<p><img src="https://www.rdatagen.net/post/2018-03-07-another-reason-to-be-careful-about-what-you-control-for_files/figure-html/unnamed-chunk-8-1.png" width="768" /></p>
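<p>We can quantify the induced association directly. Here is a stripped-down sketch using base R (a larger sample, but the same data generating assumptions as above):</p>
<pre class="r"><code># SES and test prep are independent overall, but conditioning on the
# collider (a high test score) induces a negative correlation
set.seed(139)
n <- 100000
SES <- runif(n)
testPrep <- rbinom(n, 1, 0.5)
highScore <- rbinom(n, 1, plogis(-1.2 + 3 * SES + 3 * testPrep))
round(cor(SES, testPrep), 2)                                   # essentially 0
round(cor(SES[highScore == 1], testPrep[highScore == 1]), 2)   # negative</code></pre>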
</div>
<div id="why-does-this-matter" class="section level3">
<h3>Why does this matter?</h3>
<p>If the researcher has no good measure for SES or no measure at all, he cannot control for SES in the model. And now, because of the induced relationship between test prep and (unmeasured) SES, there is unmeasured confounding. This confounding leads to the biased estimate that we saw in the second model. And we see this bias in the densities shown for each test score group:</p>
<p><img src="https://www.rdatagen.net/post/2018-03-07-another-reason-to-be-careful-about-what-you-control-for_files/figure-html/unnamed-chunk-9-1.png" width="768" /></p>
<p>If it turns out that we <em>can</em> control for SES as well, because we have an adequate measure for it, then the artificial link between SES and test prep is severed, and so is the relationship between test prep and the long term college outcome.</p>
<pre class="r"><code>rndtidy( lm(successMeasure ~ SES + highScore + testPrep, data = dt))</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 19.922 0.194 102.519 0.000
## 2 SES 40.091 0.279 143.528 0.000
## 3 highScore -0.098 0.212 -0.462 0.644
## 4 testPrep 0.137 0.174 0.788 0.431</code></pre>
<p>The researcher can create problems by controlling for all the variables he has and not controlling for the variables he doesn’t have. Of course, if there are no colliders and mediators, then there is no harm. And unfortunately, without theory, it may be hard to know the structure of the DAG, particularly if there are important unmeasured variables. But, the researcher needs to proceed with a bit of caution.</p>
</div>
<div id="addendum-selection-bias" class="section level2">
<h2>Addendum: selection bias</h2>
<p>“Selection bias” is used in a myriad of ways to characterize the improper assessment of an exposure-outcome relationship. For example, unmeasured confounding (where there is an unmeasured factor that influences both an exposure and an outcome) is often called selection bias, in the sense that the exposure is “selected” based on that particular characteristic.</p>
<p>Epidemiologists talk about selection bias in a very specific way, related to how individuals are selected or self-select into a study. In particular, if selection into a study depends on the exposure of interest and some other factor that is associated with the outcome, we can have selection bias.</p>
<p>How is this relevant to this post? Selection bias results from controlling for a collider. In this case, however, control is done through the study design, rather than through statistical modeling. Let’s say we have the same scenario with a randomized trial of a test prep course and we are primarily interested in the impact on the near-term test score. But, later on, we decide to explore the relationship of the course with the long-term college outcome and we send out a survey to collect the college outcome data. It turns out that those who did well on the near-term test were much more likely to respond to the survey - so those who have been selected (or in this case self-selected) will have an induced relationship between the test prep course and SES, just as before. Here is the new DAG:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-collider/Collider_names_select.png" />
</div>
<div id="simulate-new-study-selection-variable" class="section level3">
<h3>Simulate new study selection variable</h3>
<p>The study response or selection variable is dependent on the near-term test score. The selected group is explicitly defined by the value of <code>inStudy</code>.</p>
<pre class="r"><code># selection bias
defS <- defDataAdd(varname = "inStudy",
formula = "-2.0 + 2.2 * highScore",
dist = "binary", link = "logit")
dt <- addColumns(defS, dt)
dSelect <- dt[inStudy == 1]</code></pre>
<p>We can see that a large proportion of the selected group scored high on the near-term test:</p>
<pre class="r"><code>dSelect[, mean(highScore)]</code></pre>
<pre><code>## [1] 0.9339207</code></pre>
</div>
<div id="selection-bias-is-a-muted-version-of-full-on-collider-bias" class="section level3">
<h3>Selection bias is a muted version of full-on collider bias</h3>
<p>Within this group of selected students, there is an (incorrectly) estimated relationship between the test prep course and subsequent college success. This bias is what epidemiologists are talking about when they talk about selection bias:</p>
<pre class="r"><code>rndtidy( lm(successMeasure ~ testPrep, data = dSelect))</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 41.759 0.718 58.154 0.000
## 2 testPrep -2.164 0.908 -2.383 0.017</code></pre>
</div>
</div>
“I have to randomize by cluster. Is it OK if I only have 6 sites?”
https://www.rdatagen.net/post/i-have-to-randomize-by-site-is-it-ok-if-i-only-have-6/
Wed, 21 Feb 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/i-have-to-randomize-by-site-is-it-ok-if-i-only-have-6/<p>The answer is probably no, because there is a not-so-low chance (perhaps considerably higher than 5%) you will draw the wrong conclusions from the study. I have heard variations on this question not so infrequently, so I thought it would be useful (of course) to do a few quick simulations to see what happens when we try to conduct a study under these conditions. (Another question I get every so often, after a study has failed to find an effect: “can we get a post-hoc estimate of the power?” I was all set to post on the issue, but then I found <a href="http://daniellakens.blogspot.com/2014/12/observed-power-and-what-to-do-if-your.html">this</a>, which does a really good job of explaining why this is not a very useful exercise.) But, back to the question at hand.</p>
<p>Here is the bottom line: if there are differences between clusters that relate to the outcome, there is a good chance that we might confuse those inherent differences for treatment effects. These inherent differences could be the characteristics of people in the different clusters; for example, a health care clinic might attract healthier people than others. Or the differences could be characteristics of the clusters themselves; for example, we could imagine that some health care clinics are better at managing high blood pressure than others. In both scenarios, individuals in a particular cluster are likely to have good outcomes regardless of the intervention. And if these clusters happen to get assigned to the intervention, we could easily confuse the underlying structure or characteristics as an intervention effect.</p>
<p>This problem is easiest to observe if we generate data with the underlying assumption that there is no treatment effect. Actually, I will generate lots of these data sets, and for each one I am going to test for statistical significance. (I am comfortable doing that in this situation, since I literally can repeat the identical experiment over and over again, a key pre-requisite for properly interpreting a p-value.) I am going to estimate the proportion of cases where the test statistic would lead me to incorrectly reject the null hypothesis, or make a Type I error. (I am not getting into the case where there is actually a treatment effect.)</p>
<div id="a-single-cluster-randomized-trial-with-6-sites" class="section level3">
<h3>A single cluster randomized trial with 6 sites</h3>
<p>First, I define the cluster level data. Each cluster or site will have a “fixed” effect that will apply to all individuals within that site. I will generate the fixed effect so that on average (across all sites) it is 0 with a variance of 0.053. (I will explain that arbitrary number in a second.) Each site will have exactly 50 individuals.</p>
<pre class="r"><code>library(simstudy)
defC <- defData(varname = "siteFix", formula = 0,
variance = .053, dist = "normal", id = "cID")
defC <- defData(defC, varname = "nsite", formula = 50,
dist = "nonrandom")
defC</code></pre>
<pre><code>## varname formula variance dist link
## 1: siteFix 0 0.053 normal identity
## 2: nsite 50 0.000 nonrandom identity</code></pre>
<p>Now, I generate the cluster-level data and assign treatment:</p>
<pre class="r"><code>set.seed(7)
dtC <- genData(6, defC)
dtC <- trtAssign(dtC)
dtC</code></pre>
<p>Once the cluster-level data are ready, I can define and generate the individual-level data. Each cluster will have 50 records, for a total of 300 individuals.</p>
<pre class="r"><code>defI <- defDataAdd(varname = "y", formula = "siteFix", variance = 1 )
dtI <- genCluster(dtClust = dtC, cLevelVar = "cID", numIndsVar = "nsite",
level1ID = "id")
dtI <- addColumns(defI, dtI)
dtI</code></pre>
<pre><code>## cID trtGrp siteFix nsite id y
## 1: 1 0 0.5265638 50 1 2.7165419
## 2: 1 0 0.5265638 50 2 0.8835501
## 3: 1 0 0.5265638 50 3 3.2433156
## 4: 1 0 0.5265638 50 4 2.8080158
## 5: 1 0 0.5265638 50 5 0.8505844
## ---
## 296: 6 1 -0.2180802 50 296 -0.6351033
## 297: 6 1 -0.2180802 50 297 -1.3822554
## 298: 6 1 -0.2180802 50 298 1.5197839
## 299: 6 1 -0.2180802 50 299 -0.4721576
## 300: 6 1 -0.2180802 50 300 -1.1917988</code></pre>
<p>I promised a little explanation of why the variance of the sites was specified as 0.053. The statistic that characterizes the extent of clustering is the intra-class correlation coefficient, or ICC. This is calculated by</p>
<p><span class="math display">\[ICC = \frac{\sigma^2_{clust}}{\sigma^2_{clust}+\sigma^2_{ind}}\]</span> where <span class="math inline">\(\sigma^2_{clust}\)</span> is the variance of the cluster means, and <span class="math inline">\(\sigma^2_{ind}\)</span> is the variance of the individuals within the clusters. (We are assuming that the within-cluster variance is constant across all clusters.) The denominator represents the total variation across all individuals. The ICC ranges from 0 (no clustering) to 1 (maximal clustering). When <span class="math inline">\(\sigma^2_{clust} = 0\)</span> then the <span class="math inline">\(ICC=0\)</span>. This means that all variation is due to individual variation. And when <span class="math inline">\(\sigma^2_{ind}=0\)</span>, <span class="math inline">\(ICC=1\)</span>. In this case, there is no variation across individuals within a cluster (i.e. they are all the same with respect to this measure) and any variation across individuals more generally is due entirely to the cluster variation. I used a cluster-level variance of 0.053 so that the ICC is 0.05:</p>
<p><span class="math display">\[ICC = \frac{0.053}{0.053+1.00} \approx 0.05\]</span></p>
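<p>As a quick sanity check on that arithmetic, the implied ICC can be computed directly (a small illustration of my own, not code from the original post):</p>
<p></p>

```r
# variance components: cluster-level 0.053, individual-level 1.00
s2_clust <- 0.053
s2_ind <- 1.00

# ICC = between-cluster variance over total variance
s2_clust / (s2_clust + s2_ind)  # approximately 0.05
```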
<p>OK - back to the data. Let’s take a quick look at it:</p>
<pre class="r"><code>library(ggplot2)
ggplot(data = dtI, aes(x = factor(cID), y = y)) +
  geom_jitter(aes(color = factor(trtGrp)), width = .1) +
  scale_color_manual(labels = c("control", "rx"),
                     values = c("#ffc734", "#346cff")) +
  theme(panel.grid.minor = element_blank(),
        legend.title = element_blank())</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-02-21-i-have-to-randomize-by-site-is-it-ok-if-i-only-have-6_files/figure-html/unnamed-chunk-4-1.png" width="672" /></p>
<p>Remember, there is no treatment effect (either positive or negative). But, due to cluster variation, Site 1 (randomized to control) has higher than average outcomes. We estimate the treatment effect using a fixed effects model, which seems reasonable since we don’t have enough sites to estimate the variability of a random effects model. We (incorrectly) conclude that the treatment has a deleterious effect (assuming higher <span class="math inline">\(y\)</span> is a good thing), based on a p-value for the treatment effect estimate that is considerably less than 0.05.</p>
<pre class="r"><code>library(broom)
library(lme4)
lmfit <- lm(y ~ trtGrp + factor(cID), data = dtI)
tidy(lmfit)</code></pre>
<pre><code>##            term   estimate std.error  statistic      p.value
## 1   (Intercept)  0.8267802 0.1394788  5.9276404 8.597761e-09
## 2        trtGrp -0.9576641 0.1972528 -4.8550088 1.958238e-06
## 3  factor(cID)2 -0.1162042 0.1972528 -0.5891129 5.562379e-01
## 4  factor(cID)3  0.1344241 0.1972528  0.6814812 4.961035e-01
## 5  factor(cID)4 -0.8148341 0.1972528 -4.1309123 4.714672e-05
## 6  factor(cID)5 -1.2684515 0.1972528 -6.4305878 5.132896e-10</code></pre>
<p> </p>
</div>
<div id="a-more-systematic-evaluation" class="section level3">
<h3>A more systematic evaluation</h3>
<p>OK, so I was able to pick a seed that generated outcomes illustrating my point in that single instance. But what happens if we look at this a bit more systematically? The series of plots that follow tell a story. Each one represents a series of simulations, similar to the one above. (I am not including the code because it is a bit convoluted, but I would be happy to share it if anyone wants it.)</p>
<p>The first scenario shown below is based on six sites using ICCs that range from 0 to 0.10. For each level of ICC, I generated 100 different samples of six sites. For each of those 100 samples, I generated 100 different randomization schemes (which I know is overkill in the case of 6 sites since there are only 20 possible randomization schemes) and generated a new set of individuals. For each of those 100 randomization schemes, I estimated a fixed effects model and recorded the proportion of the 100 where the p-values were below the 0.05 threshold.</p>
<p><img src="https://www.rdatagen.net/img/post-smallcluster/Fixed6.png" /> How do we interpret this plot? When there is no clustering (<span class="math inline">\(ICC=0\)</span>), the probability of a Type I error is close to 5%, which is what we would expect based on theory. But once we have any kind of clustering, things start to go a little haywire. Even when <span class="math inline">\(ICC=0.025\)</span>, we would make a lot of mistakes. The error rate only increases as the extent of clustering increases. There is quite a lot of variability in the error rate, which is a function of the variability of the site-specific effects.</p>
<p>If we use 24 sites, and continue to fit a fixed effect model, we see largely the same thing. Here, we have a much bigger sample size, so a smaller treatment effect is more likely to be statistically significant:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-smallcluster/Fixed24.png" />
</div>
<p>One could make the case that instead of fitting a fixed effects model, we should be using a random effects model (particularly if the sites themselves are randomly pulled from a population of sites, though this is hardly likely to be the case when you are having a hard time recruiting sites to participate in your study). The next plot shows that the error rate goes down for 6 sites, but still not enough for my comfort:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-smallcluster/Random6.png" />
</div>
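<p>For concreteness, here is how such a random effects (random intercept) model could be fit. This sketch is my own (not code from the original post): it re-creates a comparable six-site data set with no true treatment effect, using the variance components from earlier, and fits the model with <code>lme4</code>:</p>
<p></p>

```r
library(lme4)

set.seed(2018)

nsites <- 6
n <- 50                                        # individuals per site

site <- rep(1:nsites, each = n)
trt <- rep(c(0, 1), times = nsites / 2)[site]  # 3 control and 3 rx sites
siteEff <- rnorm(nsites, 0, sqrt(0.053))[site] # cluster-level variance 0.053

y <- siteEff + rnorm(nsites * n, 0, 1)         # no true treatment effect

remfit <- lmer(y ~ trt + (1 | site))
fixef(remfit)["trt"]                           # estimate of the (null) effect
```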
<p>With 24 sites, the random effects model seems much safer to use:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-smallcluster/Random24.png" />
</div>
<p>But, in reality, if we only have 6 sites, the best that we could do is randomize within site and use a fixed effect model to draw our conclusions. Even at high levels of clustering, this approach will generally lead us towards a valid conclusion (assuming, of course, the study itself is well designed and implemented):</p>
<p><img src="https://www.rdatagen.net/img/post-smallcluster/RwithinC6.png" /> But, I assume the researcher couldn’t randomize at the individual level, otherwise they wouldn’t have asked that question. In which case I would say, “It might not be the best use of resources.”</p>
</div>
Have you ever asked yourself, "how should I approach the classic pre-post analysis?"
https://www.rdatagen.net/post/thinking-about-the-run-of-the-mill-pre-post-analysis/
Sun, 28 Jan 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/thinking-about-the-run-of-the-mill-pre-post-analysis/<p>Well, maybe you haven’t, but this seems to come up all the time. An investigator wants to assess the effect of an intervention on an outcome. Study participants are randomized either to receive the intervention (could be a new drug, new protocol, behavioral intervention, whatever) or treatment as usual. For each participant, the outcome measure is recorded at baseline - this is the <em>pre</em> in pre/post analysis. The intervention is delivered (or not, in the case of the control group), some time passes, and the outcome is measured a second time. This is our <em>post</em>. The question is, how should we analyze this study to draw conclusions about the intervention’s effect on the outcome?</p>
<p>There are at least three possible ways to approach this. (1) Ignore the <em>pre</em> outcome measure and just compare the average <em>post</em> scores of the two groups. (2) Calculate a <em>change</em> score for each individual (<span class="math inline">\(\Delta_i = post_i - pre_i\)</span>), and compare the average <span class="math inline">\(\Delta\)</span>’s for each group. Or (3), use a more sophisticated regression model to estimate the intervention effect while <em>controlling</em> for the <em>pre</em> or baseline measure of the outcome. Here are three models associated with each approach (<span class="math inline">\(T_i\)</span> is 1 if the individual <span class="math inline">\(i\)</span> received the treatment, 0 if not, and <span class="math inline">\(\epsilon_i\)</span> is an error term):</p>
<span class="math display">\[\begin{aligned}
&(1) \ \ post_i = \beta_0 + \beta_1T_i + \epsilon_i \\
\\
&(2) \ \ \Delta_i = \alpha_0 + \alpha_1T_i + \epsilon_i \\
\\
&(3) \ \ post_i = \gamma_0 + \gamma_1 pre_i+ \gamma_2 T_i + \epsilon_i
\end{aligned}\]</span>
<p>I’ve explored various scenarios (i.e. different data generating assumptions) to see if it matters which approach we use. (Of course it does.)</p>
<div id="scenario-1-pre-and-post-not-correlated" class="section level3">
<h3>Scenario 1: pre and post not correlated</h3>
<p>In the simulations that follow, I am generating potential outcomes for each individual. So, the variable <code>post0</code> represents the follow-up outcome for the individual under the control condition, and <code>post1</code> is the outcome in the intervention condition. <code>pre0</code> and <code>pre1</code> are the same, because the intervention does not affect the baseline measurement. The effect of the intervention is specified by <code>eff</code>. In the first scenario, the baseline and follow-up measures are not related to each other, and the effect size is 1. All of the data definitions and data generation are done using package <code>simstudy</code>.</p>
<pre class="r"><code>library(simstudy)

# generate potential outcomes
defPO <- defData(varname = "pre0", formula = 8.5,
                 variance = 4, dist = "normal")
defPO <- defData(defPO, varname = "post0", formula = 7.5,
                 variance = 4, dist = "normal")
defPO <- defData(defPO, varname = "pre1", formula = "pre0",
                 dist = "nonrandom")
defPO <- defData(defPO, varname = "eff", formula = 1,
                 variance = 0.2, dist = "normal")
defPO <- defData(defPO, varname = "post1", formula = "post0 + eff",
                 dist = "nonrandom")</code></pre>
<p>The baseline, follow-up, and change that are actually <em>observed</em> are merely a function of the group assignment.</p>
<pre class="r"><code># generate observed data
defObs <- defDataAdd(varname = "pre",
            formula = "pre0 * (trtGrp == 0) + pre1 * (trtGrp == 1)",
            dist = "nonrandom")
defObs <- defDataAdd(defObs, varname = "post",
            formula = "post0 * (trtGrp == 0) + post1 * (trtGrp == 1)",
            dist = "nonrandom")
defObs <- defDataAdd(defObs, varname = "delta",
            formula = "post - pre",
            dist = "nonrandom")</code></pre>
<p>Now we generate the potential outcomes, the group assignment, and observed data for 1000 individuals. (I’m using package <code>stargazer</code>, definitely worth checking out, to print out the first five rows of the dataset.)</p>
<pre class="r"><code>set.seed(123)
dt <- genData(1000, defPO)
dt <- trtAssign(dt)
dt <- addColumns(defObs, dt)
stargazer::stargazer(dt[1:5,], type = "text", summary=FALSE, digits = 2)</code></pre>
<pre><code>## 
## =========================================================
##   id trtGrp  pre0 post0  pre1  eff post1   pre  post delta
## ---------------------------------------------------------
## 1  1      1  7.38  5.51  7.38 0.77  6.28  7.38  6.28 -1.10
## 2  2      1  8.04  5.42  8.04 1.11  6.53  8.04  6.53 -1.51
## 3  3      1 11.62  7.46 11.62 0.76  8.22 11.62  8.22 -3.40
## 4  4      0  8.64  7.24  8.64 1.55  8.78  8.64  7.24 -1.41
## 5  5      1  8.76  2.40  8.76 1.08  3.48  8.76  3.48 -5.28
## ---------------------------------------------------------</code></pre>
<p>The plots show the three different types of analysis - follow-up measurement alone, change, or follow-up controlling for baseline:</p>
<p><img src="https://www.rdatagen.net/post/2018-01-28-thinking-about-the-run-of-the-mill-pre-post-analysis_files/figure-html/unnamed-chunk-4-1.png" width="1152" /></p>
<p>I compare the different modeling approaches by using simulation to estimate statistical power for each. That is, given that there is some effect, how often is the p-value of the test less than 0.05. I’ve written a function to handle the data generation and power estimation. The function generates 1000 data sets of a specified sample size, each time fitting the three models, and keeping track of the relevant p-values for each iteration.</p>
<pre class="r"><code>powerFunc <- function(def, addDef, ss, rct = TRUE) {
  presults <- data.table()
  iter <- 1000

  for (i in 1:iter) {
    dt <- genData(ss, def)

    if (rct) {
      dt <- trtAssign(dt)
    } else {
      dt <- trtObserve(dt, "-4.5 + .5*pre0", logit.link = TRUE)
    }

    dt <- addColumns(addDef, dt)

    lmfit1 <- lm(post ~ trtGrp, data = dt)
    lmfit2 <- lm(delta ~ trtGrp, data = dt)
    lmfit3 <- lm(post ~ pre + trtGrp, data = dt)
    lmfit3x <- lm(post ~ pre + trtGrp + pre*trtGrp, data = dt)

    p1 <- coef(summary(lmfit1))["trtGrp", "Pr(>|t|)"]
    p2 <- coef(summary(lmfit2))["trtGrp", "Pr(>|t|)"]
    p3 <- coef(summary(lmfit3))["trtGrp", "Pr(>|t|)"]
    p3x <- coef(summary(lmfit3x))["pre:trtGrp", "Pr(>|t|)"]

    presults <- rbind(presults, data.table(p1, p2, p3, p3x))
  }

  return(presults)
}</code></pre>
<p>The results for the first data set are based on a sample size of 150 individuals (75 in each group). The <em>post-only</em> model does just as well as the <em>post adjusted for baseline</em> model. The model evaluating change in this scenario is way underpowered.</p>
<pre class="r"><code>presults <- powerFunc(defPO, defObs, 150)

presults[, .(postonly = mean(p1 <= 0.05),
             change = mean(p2 <= 0.05),
             adjusted = mean(p3 <= 0.05))]</code></pre>
<pre><code>## postonly change adjusted
## 1: 0.85 0.543 0.845</code></pre>
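<p>The power gap has a simple analytic explanation: when pre and post are uncorrelated, the change score has variance Var(delta) = Var(post) + Var(pre) = 4 + 4 = 8, double that of the follow-up score alone. A rough check with base R’s <code>power.t.test</code> (my own addition, ignoring the small extra variance contributed by the random effect size) gives power estimates in the same ballpark as the simulation:</p>
<p></p>

```r
# post-only model: effect of 1, sd = 2 (variance 4), 75 per arm
p_post <- power.t.test(n = 75, delta = 1, sd = 2)$power

# change-score model: same effect, but sd = sqrt(4 + 4)
p_change <- power.t.test(n = 75, delta = 1, sd = sqrt(8))$power

round(c(p_post, p_change), 2)
```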
<p> </p>
</div>
<div id="scenario-2-pre-and-post-are-moderately-correlated" class="section level3">
<h3>Scenario 2: pre and post are moderately correlated</h3>
<p>Now, we update the definition of <code>post0</code> so that it is now a function of <code>pre0</code>, so that the correlation is around 0.45.</p>
<pre class="r"><code>defPO <- updateDef(defPO, changevar = "post0",
                   newformula = "3.5 + 0.47 * pre0",
                   newvariance = 3)</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-01-28-thinking-about-the-run-of-the-mill-pre-post-analysis_files/figure-html/unnamed-chunk-8-1.png" width="1152" /></p>
<p>The correlation actually increases power, so we use a reduced sample size of 120 for the power estimation. In this case, all three models do pretty well, but the <em>adjusted</em> model is slightly superior.</p>
<pre><code>## postonly change adjusted
## 1: 0.776 0.771 0.869</code></pre>
<p> </p>
</div>
<div id="scenario-3-pre-and-post-are-almost-perfectly-correlated" class="section level3">
<h3>Scenario 3: pre and post are almost perfectly correlated</h3>
<p>When baseline and follow-up measurements are almost perfectly correlated (in this case about 0.85), we would be indifferent between the <em>change</em> and <em>adjusted</em> analyses - the power of the tests is virtually identical. However, the analysis that considers the follow-up measure alone is much less adequate, due primarily to the measure’s relatively high variability.</p>
<pre class="r"><code>defPO <- updateDef(defPO, changevar = "post0",
                   newformula = "0.9 * pre0",
                   newvariance = 1)</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-01-28-thinking-about-the-run-of-the-mill-pre-post-analysis_files/figure-html/unnamed-chunk-11-1.png" width="1152" /></p>
<pre><code>## postonly change adjusted
## 1: 0.358 0.898 0.894</code></pre>
<p> </p>
</div>
<div id="when-the-effect-differs-by-baseline-measurement" class="section level3">
<h3>When the effect differs by baseline measurement</h3>
<p>In a slight variation of the previous scenario, the <em>effect</em> of the intervention itself is now a function of the baseline score. Those who score higher will benefit less from the intervention - they simply have less room to improve. In this case, the adjusted model appears slightly inferior to the change model, while the unadjusted <em>post-only</em> model is still relatively low powered.</p>
<pre class="r"><code>defPO <- updateDef(defPO, changevar = "eff",
                   newformula = "1.9 - 1.9 * pre0/15")</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-01-28-thinking-about-the-run-of-the-mill-pre-post-analysis_files/figure-html/unnamed-chunk-13-1.png" width="1152" /></p>
<pre class="r"><code>presults[, .(postonly = mean(p1 <= 0.05),
             change = mean(p2 <= 0.05),
             adjusted = mean(p3 <= 0.025 | p3x <= 0.025))]</code></pre>
<pre><code>## postonly change adjusted
## 1: 0.425 0.878 0.863</code></pre>
<p>The <em>adjusted</em> model has less power than the <em>change</em> model here because I used a reduced <span class="math inline">\(\alpha\)</span>-level for the hypothesis tests of the <em>adjusted</em> models. I am testing for interaction first, then, if that fails, for main effects, so I need to adjust for multiple comparisons. (I have another <a href="https://www.rdatagen.net/post/sub-group-analysis-in-rct/">post</a> that shows why this might be a good thing to do.) I have used a Bonferroni adjustment, which is a relatively conservative test. I still prefer the <em>adjusted</em> model, because it provides more insight into the underlying process than the <em>change</em> model.</p>
</div>
<div id="treatment-assignment-depends-on-baseline-measurement" class="section level3">
<h3>Treatment assignment depends on baseline measurement</h3>
<p>Now, slightly off-topic. So far, we’ve been talking about situations where treatment assignment is randomized. What happens in a scenario where those with higher baseline scores are more likely to receive the intervention? Well, if we don’t adjust for the baseline score, we will have unmeasured confounding. A comparison of follow-up scores in the two groups will be biased towards the intervention group if the baseline scores are correlated with follow-up scores - as we see visually with a scenario in which the effect size is set to 0. Also notice that the p-values for the unadjusted model are consistently below 0.05 - we are almost always drawing the wrong conclusion if we use this model. On the other hand, the error rate for the adjusted model is close to 0.05, what we would expect.</p>
<pre class="r"><code>defPO <- updateDef(defPO, changevar = "eff",
                   newformula = 0)

dt <- genData(1000, defPO)
dt <- trtObserve(dt, "-4.5 + 0.5 * pre0", logit.link = TRUE)
dt <- addColumns(defObs, dt)</code></pre>
<p><img src="https://www.rdatagen.net/post/2018-01-28-thinking-about-the-run-of-the-mill-pre-post-analysis_files/figure-html/unnamed-chunk-16-1.png" width="1152" /></p>
<pre><code>## postonly change adjusted
## 1: 0.872 0.095 0.046</code></pre>
<p>I haven’t proved anything here, but these simulations suggest that we should certainly think twice about using an unadjusted model if we happen to have baseline measurements. And it seems like you are likely to maximize power (and maybe minimize bias) if you compare follow-up scores while adjusting for baseline scores rather than analyzing change in scores by group.</p>
</div>
Importance sampling adds an interesting twist to Monte Carlo simulation
https://www.rdatagen.net/post/importance-sampling-adds-a-little-excitement-to-monte-carlo-simulation/
Thu, 18 Jan 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/importance-sampling-adds-a-little-excitement-to-monte-carlo-simulation/<p>I’m contemplating the idea of teaching a course on simulation next fall, so I have been exploring various topics that I might include. (If anyone has great ideas either because you have taught such a course or taken one, definitely drop me a note.) Monte Carlo (MC) simulation is an obvious one. I like the idea of talking about <em>importance sampling</em>, because it sheds light on the idea that not all MC simulations are created equally. I thought I’d do a brief blog to share some code I put together that demonstrates MC simulation generally, and shows how importance sampling can be an improvement.</p>
<p>Like many of the topics I’ve written about, this is a vast one that certainly warrants much, much more than a blog entry. MC simulation in particular, since it is so fundamental to the practice of statistics. MC methods are an essential tool to understand the behavior of statistical models. In fact, I’ve probably used MC simulations in just about all of my posts - to generate repeated samples from a model to explore bias, variance, and other distributional characteristics of a particular method.</p>
<p>For example, if we want to assess the variability of a regression parameter estimate, we can repeatedly generate data from a particular “hidden” model, and for each data set fit a regression model to estimate the parameter in question. For each iteration, we will arrive at a different estimate; the variation of all those estimates might be of great interest. In particular, the standard deviation of those estimates is the standard error of the estimate. (Of course, with certain problems, there are ways to analytically derive the standard errors. In these cases, MC simulation can be used to verify an analysis was correct. That’s the beauty of statistics - you can actually show yourself you’ve gotten the right answer.)</p>
<div id="a-simple-problem" class="section level3">
<h3>A simple problem</h3>
<p>In this post, I am considering a simple problem. We are interested in estimating the probability of drawing a value between 2 and 2.5 from a standard normal distribution. That is, we want to use MC simulation to estimate</p>
<p><span class="math display">\[p = P(2.0 \le X \le 2.5), \ \ \ X \sim N(0,1)\]</span></p>
<p>Of course, we can use <code>R</code> to get us the true <span class="math inline">\(p\)</span> directly without any simulation at all, but that is no fun:</p>
<pre class="r"><code>pnorm(2.5, 0, 1) - pnorm(2, 0, 1)</code></pre>
<pre><code>## [1] 0.01654047</code></pre>
<p>To do this using simulation, I wrote a simple function that checks to see if a value falls between two numbers.</p>
<pre class="r"><code>inSet <- function(x, minx, maxx) {
  result <- (x >= minx & x <= maxx)
  return(as.integer(result))
}</code></pre>
<p>To estimate the desired probability, we just repeatedly draw from the standard normal distribution. After each draw, we check to see if the value falls between 2.0 and 2.5, and store that information in a vector. The vector will have a value of 1 each time a value falls into the range, and 0 otherwise. The proportion of 1’s is the desired probability. Or in other words, <span class="math inline">\(\hat{p} = \bar{z}\)</span>, where <span class="math inline">\(\bar{z} = \frac{1}{1000} \sum{z}\)</span>.</p>
<pre class="r"><code>set.seed(1234)

z <- vector("numeric", 1000)

for (i in 1:1000) {
  y <- rnorm(1, 0, 1)
  z[i] <- inSet(y, 2, 2.5)
}

mean(z)</code></pre>
<pre><code>## [1] 0.018</code></pre>
<p>The estimate is close to the true value, but there is no reason it would be exact. In fact, I would be suspicious if it were. Now, we can also use the variance of <span class="math inline">\(z\)</span> to estimate the standard error of <span class="math inline">\(\hat{p}\)</span>:</p>
<pre class="r"><code>sd(z)/sqrt(1000)</code></pre>
<pre><code>## [1] 0.004206387</code></pre>
</div>
<div id="faster-code" class="section level3">
<h3>Faster code?</h3>
<p>If you’ve read any of my other posts, you know I am often interested in trying to make <code>R</code> run a little faster. This can be particularly important if we need to repeat tasks over and over, as we will be doing here. The <code>for</code> loop I used above is not ideal. Maybe <code>simstudy</code> can do better (though you could certainly do this without simstudy). Let’s first see if it provides the same estimates:</p>
<pre class="r"><code>library(data.table)
library(simstudy)

# define the data
defMC <- defData(varname = "y", formula = 0,
                 variance = 1, dist = "normal")
defMC <- defData(defMC, varname = "z", formula = "inSet(y, 2, 2.5)",
                 dist = "nonrandom")

# generate the data - the MC simulation without a loop
set.seed(1234)
dMC <- genData(1000, defMC)

# evaluate mean and standard error
dMC[, .(mean(z), sd(z)/sqrt(1000))]</code></pre>
<pre><code>## V1 V2
## 1: 0.018 0.004206387</code></pre>
<p>So, the results are identical - no surprise there. But which approach uses fewer computing resources? To find out, we turn to the <code>microbenchmark</code> package. (I created a function, <code>tradMCsim</code>, out of the loop above that returns a vector of 1’s and 0’s.)</p>
<pre class="r"><code>library(microbenchmark)
mb <- microbenchmark(tradMCsim(1000), genData(1000, defMC))
summary(mb)[, c("expr", "lq", "mean", "uq", "neval")]</code></pre>
<pre><code>##                   expr       lq     mean       uq neval
## 1      tradMCsim(1000) 1.428656 2.186979 2.573003   100
## 2 genData(1000, defMC) 1.376450 1.668248 1.674146   100</code></pre>
<p>With 1000 draws, there is actually very little difference between the two approaches. But if we start to increase the number of simulations, the differences become apparent. With 10000 draws, the simstudy approach is more than 7 times faster. The relative improvement continues to increase as the number of draws increases.</p>
<pre class="r"><code>mb <- microbenchmark(tradMCsim(10000), genData(10000, defMC))
summary(mb)[, c("expr", "lq", "mean", "uq", "neval")]</code></pre>
<pre><code>##                    expr        lq      mean        uq neval
## 1      tradMCsim(10000) 18.453128 21.619022 22.226165   100
## 2 genData(10000, defMC)  2.006622  2.432078  2.508662   100</code></pre>
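<p>As an aside (my own addition, not in the original post): the loop can also be eliminated entirely in base R. With the same seed, a single vectorized draw consumes the random number stream identically, so it reproduces the same estimates:</p>
<p></p>

```r
set.seed(1234)

y <- rnorm(1000, 0, 1)
z <- as.integer(y >= 2 & y <= 2.5)

c(mean(z), sd(z) / sqrt(1000))  # 0.018 and 0.0042, matching the loop above
```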
</div>
<div id="estimating-variation-of-hatp" class="section level3">
<h3>Estimating variation of <span class="math inline">\(\hat{p}\)</span></h3>
<p>Now, we can stop using the loop, at least to generate a single set of draws. But, in order to use MC simulation to estimate the variance of <span class="math inline">\(\hat{p}\)</span>, we still need to use a loop. In this case, we will generate 1500 data sets of 1000 draws each, so we will have 1500 estimates of <span class="math inline">\(\hat{p}\)</span>. (It would probably be best to do all of this using Rcpp, where we can loop with impunity.)</p>
<pre class="r"><code>iter <- 1500
estMC <- vector("numeric", iter)

for (i in 1:iter) {
  dtMC <- genData(1000, defMC)
  estMC[i] <- dtMC[, mean(z)]
}

head(estMC)</code></pre>
<pre><code>## [1] 0.020 0.013 0.023 0.017 0.016 0.019</code></pre>
<p>We can estimate the average of the <span class="math inline">\(\hat{p}\)</span>’s, which should be close to the true value of <span class="math inline">\(p \approx 0.0165\)</span>. And we can check to see if the standard error of <span class="math inline">\(\hat{p}\)</span> is close to our earlier estimate of 0.004.</p>
<pre class="r"><code>c(mean(estMC), sd(estMC))</code></pre>
<pre><code>## [1] 0.016820000 0.004113094</code></pre>
</div>
<div id="importance-sampling" class="section level3">
<h3>Importance sampling</h3>
<p>As we were trying to find an estimate for <span class="math inline">\(p\)</span> using the simulations above, we spent a lot of time drawing values far outside the range of 2 to 2.5. In fact, almost all of the draws were outside that range. You could almost say that most of those draws were providing little if any information. What if we could focus our attention on the area we are interested in - in this case the range from 2 to 2.5 - without sacrificing our ability to make an unbiased estimate? That would be great, wouldn’t it? That is the idea behind importance sampling.</p>
<p>The idea is to draw from a distribution that is (a) easy to draw from and (b) close to the region of interest. Obviously, if 100% of our draws are from the set/range in question, then we’ve way overestimated the proportion. So, we need to reweight the draws in such a way that we get an unbiased estimate.</p>
</div>
<div id="a-very-small-amount-of-theory" class="section level3">
<h3>A very small amount of theory</h3>
<p>A teeny bit of stats theory here (hope you don’t jump ship). The expected value of a draw falling between 2 and 2.5 is</p>
<p><span class="math display">\[E_x(I_R) = \int_{-\infty}^{\infty}{I_{R}(x)f(x)dx} \ ,\]</span></p>
<p>where <span class="math inline">\(I_R(x)=1\)</span> when <span class="math inline">\(2.0 \le x \le 2.5\)</span>, and is 0 otherwise, and <span class="math inline">\(f(x)\)</span> is the standard normal density. This is the quantity that we were estimating above. Now, let’s say we want to draw closer to the range in question - say using <span class="math inline">\(Y\sim N(2.25, 1)\)</span>. We will certainly get more values around 2 and 2.5. If <span class="math inline">\(g(y)\)</span> represents this new density, we can write <span class="math inline">\(E(I_R)\)</span> another way:</p>
<p><span class="math display">\[E_y(I_R) = \int_{-\infty}^{\infty}{I_{R}(y)\frac{f(y)}{g(y)}g(y)dy} \ .\]</span> Notice that the <span class="math inline">\(g(y)\)</span>’s cancel out and we end up with the same expectation as above, except it is with respect to <span class="math inline">\(y\)</span>. Also, notice that the second equation is also a representation of <span class="math inline">\(E_y \left( I_{R}(y)\frac{f(y)}{g(y)} \right)\)</span>.</p>
<p>I know I am doing a lot of hand waving here, but the point is that</p>
<p><span class="math display">\[E_x(I_R) = E_y \left( I_{R}\frac{f}{g} \right)\]</span></p>
<p>Again, <span class="math inline">\(f\)</span> and <span class="math inline">\(g\)</span> are just the original density of interest - <span class="math inline">\(N(0,1)\)</span> - and the “important” density - <span class="math inline">\(N(2.25, 1)\)</span> - respectively. In our modified MC simulation, we draw a <span class="math inline">\(y_i\)</span> from the <span class="math inline">\(N(2.25, 1)\)</span>, and then we calculate <span class="math inline">\(f(y_i)\)</span>, <span class="math inline">\(g(y_i)\)</span>, and <span class="math inline">\(I_R(y_i)\)</span>, or more precisely, <span class="math inline">\(z_i = I_{R}(y_i)\frac{f(y_i)}{g(y_i)}\)</span>. To get <span class="math inline">\(\hat{p}\)</span>, we average the <span class="math inline">\(z_i\)</span>’s, as we did before.</p>
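<p>Before handing the sampling over to <code>simstudy</code>, the estimator just described can be sketched in a few lines of base R (my own illustration), drawing from the <span class="math inline">\(N(2.25, 1)\)</span> importance density:</p>
<p></p>

```r
set.seed(1234)

y <- rnorm(1000, 2.25, 1)                  # draw from g, the importance density
w <- dnorm(y, 0, 1) / dnorm(y, 2.25, 1)    # weights f(y)/g(y)
z <- (y >= 2 & y <= 2.5) * w               # z_i = I_R(y_i) * f(y_i)/g(y_i)

mean(z)             # close to pnorm(2.5) - pnorm(2) = 0.0165
sd(z) / sqrt(1000)  # considerably smaller than the standard MC standard error
```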
</div>
<div id="beyond-theory" class="section level3">
<h3>Beyond theory</h3>
<p>Why go to all of this trouble? Well, it turns out that the <span class="math inline">\(z_i\)</span>’s will be much less variable if we use importance sampling. And, as a result, the standard error of our estimate can be reduced. This is always a good thing, because it means a reduction in uncertainty.</p>
<p>Maybe a pretty plot will provide a little intuition? Our goal is to estimate the area under the black curve between 2 and 2.5. An importance sample from a <span class="math inline">\(N(2.25, 1)\)</span> distribution is represented by the green curve. I think, however, it might be easiest to understand the adjustment mechanism by looking at the orange curve, which represents the uniform distribution between 2 and 2.5. The density is <span class="math inline">\(g(y) = 2\)</span> for all values within the range, and <span class="math inline">\(g(y) = 0\)</span> outside the range. Each time we generate a <span class="math inline">\(y_i\)</span> from the <span class="math inline">\(U(2,2.5)\)</span>, the value is guaranteed to be in the target range. As calculated, the average of all the <span class="math inline">\(z_i\)</span>’s is the ratio of the area below the black line relative to the area below the orange line, but only in the range between 2 and 2.5. (This may not be obvious, but perhaps staring at the plot for a couple of minutes will help.)</p>
<p><img src="https://www.rdatagen.net/post/2018-01-18-importance-sampling-adds-a-little-excitement-to-monte-carlo-simulation_files/figure-html/unnamed-chunk-11-1.png" width="672" /></p>
</div>
<div id="reducing-standard-errors-by-improving-focus" class="section level3">
<h3>Reducing standard errors by improving focus</h3>
<p>Now we can generate data and estimate <span class="math inline">\(\hat{p}\)</span> and <span class="math inline">\(se(\hat{p})\)</span>. First, here is a simple function to calculate <span class="math inline">\(z\)</span>.</p>
<pre class="r"><code>h <- function(I, f, g) {
  dx <- data.table(I, f, g)

  dx[I != 0, result := I * f / g]
  dx[I == 0, result := 0]

  return(dx$result)
}</code></pre>
<p>We can define the three Monte Carlo simulations based on the three different distributions using <code>simstudy</code>. The elements that differ across the three MC simulations are the distribution we are drawing from and the density <span class="math inline">\(g\)</span> of that function.</p>
<pre class="r"><code># Normal(2.25, 1)
def1 <- defData(varname = "y", formula = 2.25,
                variance = 1, dist = "normal")
def1 <- defData(def1, varname = "f", formula = "dnorm(y, 0, 1)",
                dist = "nonrandom")
def1 <- defData(def1, varname = "g", formula = "dnorm(y, 2.25, 1)",
                dist = "nonrandom")
def1 <- defData(def1, varname = "I", formula = "inSet(y, 2, 2.5)",
                dist = "nonrandom")
def1 <- defData(def1, varname = "z", formula = "h(I, f, g)",
                dist = "nonrandom")

# Normal(2.25, 0.16)
def2 <- updateDef(def1, "y", newvariance = 0.4^2)
def2 <- updateDef(def2, "g", newformula = "dnorm(y, 2.25, 0.4)")

# Uniform(2, 2.5)
def3 <- updateDef(def1, "y", newformula = "2;2.5",
                  newvariance = 0, newdist = "uniform")
def3 <- updateDef(def3, "g", newformula = "dunif(y, 2, 2.5)")</code></pre>
<p>Here is a peek at one data set using the uniform sampling approach:</p>
<pre class="r"><code>genData(1000, def3)</code></pre>
<pre><code>##         id        y          f g I           z
##    1:    1 2.181324 0.03695603 2 1 0.018478013
##    2:    2 2.381306 0.02341805 2 1 0.011709023
##    3:    3 2.338066 0.02593364 2 1 0.012966819
##    4:    4 2.200399 0.03544350 2 1 0.017721749
##    5:    5 2.461919 0.01926509 2 1 0.009632543
##   ---
##  996:  996 2.118506 0.04229983 2 1 0.021149914
##  997:  997 2.433722 0.02064175 2 1 0.010320876
##  998:  998 2.265325 0.03066025 2 1 0.015330127
##  999:  999 2.107219 0.04332075 2 1 0.021660374
## 1000: 1000 2.444599 0.02010130 2 1 0.010050651</code></pre>
<p>And here are the estimates based on the three different importance samples. Again, each iteration consists of 1000 draws from the sampling distribution, and we use 1500 iterations:</p>
<pre class="r"><code>iter <- 1500
N <- 1000

est1 <- vector("numeric", iter)
est2 <- vector("numeric", iter)
est3 <- vector("numeric", iter)

for (i in 1:iter) {
  dt1 <- genData(N, def1)
  est1[i] <- dt1[, mean(z)]

  dt2 <- genData(N, def2)
  est2[i] <- dt2[, mean(z)]

  dt3 <- genData(N, def3)
  est3[i] <- dt3[, mean(z)]
}

# N(2.25, 1)
c(mean(est1), sd(est1))</code></pre>
<pre><code>## [1] 0.016525503 0.001128918</code></pre>
<pre class="r"><code># N(2.25, .16)
c(mean(est2), sd(est2))</code></pre>
<pre><code>## [1] 0.0165230677 0.0005924007</code></pre>
<pre class="r"><code># Uniform(2, 2.5)
c(mean(est3), sd(est3))</code></pre>
<pre><code>## [1] 0.0165394920 0.0001643243</code></pre>
<p>In each case, the average <span class="math inline">\(\hat{p}\)</span> is 0.0165, and the standard errors are all below the standard MC standard error of 0.0040. The estimates based on draws from the uniform distribution are the most efficient, with a standard error below 0.0002.</p>
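<p>The benchmark figure of 0.0040 quoted above is just the binomial standard error of the naive indicator-based estimate, which is easy to verify directly (my check, not code from the post):</p>
<pre class="r"><code># Naive MC: z_i is a 0/1 indicator, so its standard error is sqrt(p*(1-p)/n)
p <- pnorm(2.5) - pnorm(2) # true probability, about 0.0165
sqrt(p * (1 - p) / 1000)   # about 0.0040 for n = 1000 draws</code></pre>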
</div>
Simulating a cost-effectiveness analysis to highlight new functions for generating correlated data
https://www.rdatagen.net/post/generating-correlated-data-for-a-simulated-cost-effectiveness-analysis/
Mon, 08 Jan 2018 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/generating-correlated-data-for-a-simulated-cost-effectiveness-analysis/<p>My dissertation work (which I only recently completed - in 2012 - even though I am not exactly young, a whole story on its own) focused on inverse probability weighting methods to estimate a causal cost-effectiveness model. I don’t really do any cost-effectiveness analysis (CEA) anymore, but it came up very recently when some folks in the Netherlands contacted me about using <code>simstudy</code> to generate correlated (and clustered) data to compare different approaches to estimating cost-effectiveness. As part of this effort, I developed two more functions in simstudy that allow users to generate correlated data drawn from different types of distributions. Earlier I had created the <code>CorGen</code> functions to generate multivariate data from a single distribution – e.g. multivariate gamma. Now, with the new <code>CorFlex</code> functions (<code>genCorFlex</code> and <code>addCorFlex</code>), users can mix and match distributions. The new version of simstudy is not yet up on CRAN, but is available for download from my <a href="https://github.com/kgoldfeld/simstudy">github</a> site. If you use RStudio, you can install using <code>devtools::install_github("kgoldfeld/simstudy")</code>. [Update: <code>simstudy</code> version 0.1.8 is now available on <a href="https://cran.rstudio.com/web/packages/simstudy/">CRAN</a>.]</p>
<p>I thought I’d introduce this new functionality by generating some correlated cost and outcome data, and show how to estimate a cost-effectiveness analysis curve (CEAC). The CEAC is based on a measure called the incremental net benefit (INB). It is far more common in cost-effectiveness analysis to measure the incremental cost-effectiveness ratio (ICER). I was never enamored of ICERs, because ratios can behave poorly when denominators (in this case the changes in outcomes) get very small. Since it is a difference, the INB behaves much better. Furthermore, it seems relatively intuitive that a negative INB is not a good thing (i.e., it is not good if costs are greater than benefits), but a negative ICER has an unclear interpretation. My goal isn’t to give you a full explanation of CEA, but to provide an application to demonstrate the new simstudy functions. If you really want to learn more about this topic, you can find a paper <a href="http://onlinelibrary.wiley.com/doi/10.1002/sim.6017/full">here</a> that described my dissertation work. Of course, this is a well-established field of study, so naturally there is much more out there…</p>
<div id="simulation-scenario" class="section level3">
<h3>Simulation scenario</h3>
<p>In the simulation scenario I’ve concocted, the goal is to increase the number of patients that come in for an important test. A group of public health professionals have developed a new outreach program that they think will be able to draw in more patients. The study is conducted at the site level - some sites will implement the new approach, and the others, serving as controls, will continue with the existing approach. The cost for the new approach is expected to be higher, and will vary by site. In the first scenario, we assume that costs and recruitment are correlated with each other. That is, sites that tend to spend more generally have higher recruitment levels, even before introducing the new recruitment method.</p>
<p>The data are simulated using the assumption that costs have a gamma distribution (since costs are positive, continuous and skewed to the right) and that recruitment numbers are Poisson distributed (since they are non-negative counts). The intervention sites will have costs that are on average $1000 greater than the control sites. Recruitment will be 10 patients higher for the intervention sites. This is an average expenditure of $100 per additional patient recruited:</p>
<pre class="r"><code>library(simstudy)

# Total of 500 sites, 250 control/250 intervention
set.seed(2018)
dt <- genData(500)
dt <- trtAssign(dtName = dt, nTrt = 2,
                balanced = TRUE, grpName = "trtSite")

# Define data - intervention costs $1000 higher on average
def <- defDataAdd(varname = "cost", formula = "1000 + 1000*trtSite",
                  variance = 0.2, dist = "gamma")
def <- defDataAdd(def, varname = "nRecruits",
                  formula = "100 + 10*trtSite",
                  dist = "poisson")

# Set correlation parameter (based on Kendall's tau)
tau <- 0.2

# Generate correlated data using new function addCorFlex
dOutcomes <- addCorFlex(dt, defs = def, tau = tau)

dOutcomes</code></pre>
<pre><code>##       id trtSite      cost nRecruits
##   1:   1       1 1553.7862        99
##   2:   2       1  913.2466        90
##   3:   3       1 1314.5522        91
##   4:   4       1 1610.5535       112
##   5:   5       1 3254.1100        99
##  ---
## 496: 496       1 1452.5903        99
## 497: 497       1  292.8769       109
## 498: 498       0  835.3930        85
## 499: 499       1 1618.0447        92
## 500: 500       0  363.2429       101</code></pre>
<p>The data have been generated, so now we can examine the means and standard deviations of costs and recruitment:</p>
<pre class="r"><code>dOutcomes[, .(meanCost = mean(cost), sdCost = sd(cost)),
          keyby = trtSite]</code></pre>
<pre><code>##    trtSite  meanCost   sdCost
## 1:       0  992.2823 449.8359
## 2:       1 1969.2057 877.1947</code></pre>
<pre class="r"><code>dOutcomes[, .(meanRecruit = mean(nRecruits), sdRecruit = sd(nRecruits)),
          keyby = trtSite]</code></pre>
<pre><code>##    trtSite meanRecruit sdRecruit
## 1:       0      99.708  10.23100
## 2:       1     108.600  10.10308</code></pre>
<p>And here is the estimate of Kendall’s tau within each intervention arm:</p>
<pre class="r"><code>dOutcomes[, .(tau = cor(cost, nRecruits, method = "kendall")),
          keyby = trtSite]</code></pre>
<pre><code>##    trtSite       tau
## 1:       0 0.2018365
## 2:       1 0.1903694</code></pre>
</div>
<div id="cost-effectiveness-icer" class="section level3">
<h3>Cost-effectiveness: ICER</h3>
<p>The question is, are the added expenses of the program worth it when we look at the difference in recruitment? In the traditional approach, the incremental cost-effectiveness ratio is defined as</p>
<p><span class="math display">\[ICER = \frac{ \bar{C}_{intervention} - \bar{C}_{control} }{ \bar{R}_{intervention} - \bar{R}_{control}}\]</span></p>
<p>where <span class="math inline">\(\bar{C}\)</span> and <span class="math inline">\(\bar{R}\)</span> represent the average costs and recruitment levels, respectively.</p>
<p>We can calculate the ICER in this simulated study:</p>
<pre class="r"><code>(costDif <- dOutcomes[trtSite == 1, mean(cost)] -
   dOutcomes[trtSite == 0, mean(cost)])</code></pre>
<pre><code>## [1] 976.9235</code></pre>
<pre class="r"><code>(nDif <- dOutcomes[trtSite == 1, mean(nRecruits)] -
   dOutcomes[trtSite == 0, mean(nRecruits)])</code></pre>
<pre><code>## [1] 8.892</code></pre>
<pre class="r"><code># ICER
costDif/nDif</code></pre>
<pre><code>## [1] 109.8654</code></pre>
<p>In this case the average cost for the intervention group is $976 higher than the control group, and recruitment goes up by about 9 people. Based on this, the ICER is $110 per additional recruited individual. We would deem the initiative cost-effective if we are willing to pay at least $110 to recruit a single person. If, for example, we save $150 in future health care costs for every additional person we recruit, we should be willing to invest $110 for a new recruit. Under this scenario, we would deem the program cost effective (assuming, of course, we have some measure of uncertainty for our estimate).</p>
</div>
<div id="cost-effectiveness-inb-the-ceac" class="section level3">
<h3>Cost-effectiveness: INB & the CEAC</h3>
<p>I alluded to the fact that I believe the incremental net benefit (INB) might be a preferable way to measure cost-effectiveness, because the measure is more stable and easier to interpret. It is defined as:</p>
<p><span class="math display">\[INB = \lambda (\bar{R}_{intervention} - \bar{R}_{control}) - (\bar{C}_{intervention} - \bar{C}_{control})\]</span></p>
<p>where <span class="math inline">\(\lambda\)</span> is the willingness-to-pay I mentioned above. One of the advantages to using the INB is that we don’t need to specify <span class="math inline">\(\lambda\)</span>, but can estimate a range of INBs based on a range of willingness-to-pay values. For all values of <span class="math inline">\(\lambda\)</span> where the INB exceeds $0, the intervention is cost-effective.</p>
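<p>With the point estimates from this simulated data set, the INB as a function of <span class="math inline">\(\lambda\)</span> can be sketched directly. (This is my illustration, with the cost and recruitment differences hard-coded from the estimates above.)</p>
<pre class="r"><code># Point-estimate INB across willingness-to-pay values (no uncertainty yet)
lambda <- seq(75, 150, 15)
inb <- lambda * 8.892 - 976.9235 # INB = lambda * nDif - costDif
data.frame(lambda, inb)          # crosses zero near the ICER of ~$110</code></pre>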
<p>The CEAC is a graphical approach to cost-effectiveness analysis that takes into consideration uncertainty. We estimate uncertainty using a bootstrap approach, which entails sampling repeatedly from the original “observed” data set with replacement. Each time we draw a sample, we estimate the mean differences in cost and recruitment for the two treatment arms. A plot of these estimated means gives a sense of the variability of our estimates (and we can see how strongly these means are correlated). Once we have all these bootstrapped means, we can calculate a range of INB’s for each pair of means and a range of <span class="math inline">\(\lambda\)</span>’s. The CEAC represents <em>the proportion of bootstrapped estimates with a positive INB at a particular level of <span class="math inline">\(\lambda\)</span>.</em></p>
<p>This is much easier to see in action. To implement this, I wrote a little function that randomly samples the original data set and estimates the means:</p>
<pre class="r"><code>estMeans <- function(dt, grp, boot = FALSE) {
  dGrp <- dt[trtSite == grp]
  if (boot) {
    size <- nrow(dGrp)
    bootIds <- dGrp[, sample(id, size = size, replace = TRUE)]
    dGrp <- dt[bootIds]
  }
  dGrp[, .(mC = mean(cost), mN = mean(nRecruits))]
}</code></pre>
<p>First, we calculate the differences in means of the observed data:</p>
<pre class="r"><code>(estResult <- estMeans(dOutcomes, 1) - estMeans(dOutcomes, 0))</code></pre>
<pre><code>##          mC    mN
## 1: 976.9235 8.892</code></pre>
<p>Next, we draw 1000 bootstrap samples:</p>
<pre class="r"><code>bootResults <- data.table()

for (i in 1:1000) {
  changes <- estMeans(dOutcomes, 1, boot = TRUE) -
    estMeans(dOutcomes, 0, boot = TRUE)
  bootResults <- rbind(bootResults, changes)
}

bootResults</code></pre>
<pre><code>##              mC    mN
##    1:  971.3087 9.784
##    2:  953.2996 8.504
##    3: 1053.0340 9.152
##    4:  849.5292 8.992
##    5: 1008.9378 8.452
##   ---
##  996:  894.0251 8.116
##  997: 1002.0393 7.948
##  998:  981.6729 8.784
##  999: 1109.8255 9.596
## 1000:  995.6786 8.736</code></pre>
<p>Finally, we calculate the proportion of INBs that exceed zero for a range of <span class="math inline">\(\lambda\)</span>’s from $75 to $150. We can see that at willingness-to-pay levels higher than $125, there is a very high probability (~90%) of the intervention being cost-effective. (At the ICER level of $110, the probability of cost-effectiveness is only around 50%.)</p>
<pre class="r"><code>CEAC <- data.table()

for (wtp in seq(75, 150, 5)) {
  propPos <- bootResults[, mean((wtp * mN - mC) > 0)]
  CEAC <- rbind(CEAC, data.table(wtp, propPos))
}

CEAC</code></pre>
<pre><code>##     wtp propPos
##  1:  75   0.000
##  2:  80   0.000
##  3:  85   0.002
##  4:  90   0.018
##  5:  95   0.075
##  6: 100   0.183
##  7: 105   0.339
##  8: 110   0.505
##  9: 115   0.659
## 10: 120   0.776
## 11: 125   0.871
## 12: 130   0.941
## 13: 135   0.965
## 14: 140   0.984
## 15: 145   0.992
## 16: 150   0.998</code></pre>
</div>
<div id="a-visual-cea" class="section level3">
<h3>A visual CEA</h3>
<p>Here are three series of plots, shown for different levels of correlation between cost and recruitment. Each series includes a plot of the original cost and recruitment data, where each point represents a site. The second plot shows the average difference in means between the intervention and control sites in purple and the bootstrapped differences in grey. The third plot is the CEAC with a horizontal line drawn at 90%. The first series is the data set we generated with tau = 0.2:</p>
<p><img src="https://www.rdatagen.net/post/2018-01-08-generating-correlated-data-for-a-simulated-cost-effectiveness-analysis_files/figure-html/plot1-1.png" width="1056" /></p>
<p>When there is no correlation between costs and recruitment across sites (tau = 0):</p>
<p><img src="https://www.rdatagen.net/post/2018-01-08-generating-correlated-data-for-a-simulated-cost-effectiveness-analysis_files/figure-html/tau2-1.png" width="1056" /></p>
<p>And finally - when there is a higher degree of correlation, tau = 0.4:</p>
<p><img src="https://www.rdatagen.net/post/2018-01-08-generating-correlated-data-for-a-simulated-cost-effectiveness-analysis_files/figure-html/tau3-1.png" width="1056" /></p>
</div>
<div id="effect-of-correlation" class="section level3">
<h3>Effect of correlation?</h3>
<p>In all three scenarios (with different levels of tau), the ICER is approximately $110. Of course, this is directly related to the fact that the estimated differences in means between the two arms are the same across the scenarios. But, when we look at the three site-level and bootstrap plots, we can see the varying levels of correlation.</p>
<p>And while there also appears to be a subtle visual difference between the CEACs for different levels of correlation, it is not clear if this is a real difference or random variation. To explore this a bit further, I generated 250 data sets and their associated CEACs (each in turn generated by 1000 bootstrap steps) under a range of tau’s, starting with no correlation (tau = 0) up to a considerable level of correlation (tau = 0.4). In these simulations, I used a larger sample size of 2000 sites to reduce the variation a bit. Here are the results:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-cea/tauplots.png" />
</div>
<p>It appears that the variability of the CEAC curves decreases as the correlation between cost and recruitment (determined by tau) increases; the range of the curves is smallest when tau is 0.4. In addition, it looks like the “median” CEAC moves slightly rightward as tau increases, which suggests that the probability of cost-effectiveness will vary across different levels of tau. All this is to say that correlation appears to matter, so it might be an important factor to consider both when simulating these sorts of data and when actually conducting a CEA.</p>
</div>
<div id="next-steps" class="section level3">
<h3>Next steps?</h3>
<p>In this example, I based the entire analysis on a simple non-parametric estimate of the means. In the future, I might explore copula-based methods to fit joint models of costs and outcomes. In simstudy, a Gaussian copula generates the correlated data. However, there is a much larger world of copulas out there that can be used to model correlation between measures regardless of their marginal distributions. And some of these methods have been applied in the context of CEA. Stay tuned on this front (though it might be a while).</p>
</div>
When there's a fork in the road, take it. Or, taking a look at marginal structural models.
https://www.rdatagen.net/post/when-a-covariate-is-a-confounder-and-a-mediator/
Mon, 11 Dec 2017 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/when-a-covariate-is-a-confounder-and-a-mediator/<p>I am going to cut right to the chase, since this is the third of three posts related to confounding and weighting, and it’s kind of a long one. (If you want to catch up, the first two are <a href="https://www.rdatagen.net/post/potential-outcomes-confounding/">here</a> and <a href="https://www.rdatagen.net/post/inverse-probability-weighting-when-the-outcome-is-binary/">here</a>.) My aim with these three posts is to provide a basic explanation of the <em>marginal structural model</em> (MSM) and how we should interpret the estimates. This is obviously a very rich topic with a vast literature, so if you remain interested in the topic, I recommend checking out this (as of yet unpublished) <a href="https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/">text book</a> by Hernán & Robins for starters.</p>
<p>The DAG below is a simple version of how things can get complicated very fast if we have sequential treatments or exposures that both affect and are affected by intermediate factors or conditions.</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-msm/MSM_DAG_observed.png" />
</div>
<p><span class="math inline">\(A_0\)</span> and <span class="math inline">\(A_1\)</span> represent two treatment points and <span class="math inline">\(L_0\)</span> and <span class="math inline">\(L_1\)</span> represent measurements taken before and after treatments, respectively. Both treatments and at least <span class="math inline">\(L_1\)</span> affect outcome <span class="math inline">\(Y\)</span>. (I am assuming that the <span class="math inline">\(A\)</span>’s and <span class="math inline">\(L\)</span>’s are binary and that <span class="math inline">\(Y\)</span> is continuous. <span class="math inline">\(\epsilon\)</span> is <span class="math inline">\(N(0, \sigma_\epsilon^2)\)</span>.)</p>
<p>An example of this might be a situation where we are interested in the effect of a drug treatment on mental health well-being for patients with prehypertension or hypertension. A physician’s decision to administer the drug at each visit is influenced by the patient’s level of hypertension. In turn, the treatment <span class="math inline">\(A_0\)</span> potentially reduces the probability of hypertension - <span class="math inline">\(P(L_1=1)\)</span>. And finally, <span class="math inline">\(L_1\)</span> influences the next treatment decision and ultimately the mental health outcome <span class="math inline">\(Y\)</span>.</p>
<p>The complicating factor is that the hypertension level following the first treatment (<span class="math inline">\(L_1\)</span>) is both a mediator of the effect of treatment <span class="math inline">\(A_0\)</span> and a confounder of the effect of treatment <span class="math inline">\(A_1\)</span> on <span class="math inline">\(Y\)</span>. To get an unbiased estimate of the effect of the combined treatment regime (<span class="math inline">\(A_0\)</span> and <span class="math inline">\(A_1\)</span>), we need to both control for <span class="math inline">\(L_1\)</span> and not control for <span class="math inline">\(L_1\)</span>. This is where MSMs and inverse probability weighting (IPW) come into play.</p>
<p>The MSM is marginal in the sense that we’ve been talking about in this series - the estimate will be a population-wide estimate that reflects the mixture of the covariates that influence the treatments and outcomes (in this case, the <span class="math inline">\(L\)</span>’s). It is structural in the sense that we are modeling <em>potential outcomes</em>. Nothing has changed from the last <a href="https://www.rdatagen.net/post/inverse-probability-weighting-when-the-outcome-is-binary/">post</a> except for the fact that we are now defining the exposures as a sequence of different treatments (here <span class="math inline">\(A_0\)</span> and <span class="math inline">\(A_1\)</span>, but could easily extend to <span class="math inline">\(n\)</span> treatments - up to <span class="math inline">\(A_n\)</span>.)</p>
<div id="imagine-an-experiment" class="section level3">
<h3>Imagine an experiment …</h3>
<p>To understand the MSM, it is actually helpful to think about how a single individual fits into the picture. The tree diagram below literally shows that. The MSM posits a weird experiment where measurements (of <span class="math inline">\(L\)</span>) are collected and treatments (<span class="math inline">\(A\)</span>) are assigned repeatedly until a final outcome is measured. In this experiment, the patient is not just assigned to one treatment arm, but to both! Impossible of course, but that is the world of potential outcomes.</p>
<p>At the start of the experiment, a measurement of <span class="math inline">\(L_0\)</span> is collected. This sends the patient down one of the branches of the tree. Since the patient is assigned to both <span class="math inline">\(A_0=0\)</span> and <span class="math inline">\(A_0=1\)</span>, she actually heads down two <em>different</em> branches simultaneously. Following the completion of the first treatment period <span class="math inline">\(A_0\)</span>, the second measurement (<span class="math inline">\(L_1\)</span>) is collected. But, two measurements are taken for the patient - one for each branch. The results need not be the same. In fact, if the treatment has an individual-level effect on <span class="math inline">\(L_1\)</span>, then the results will be different for this patient. In the example below, this is indeed the case. Following each of those measurements (in parallel universes), the patient is sent down the next treatment branches (<span class="math inline">\(A_1\)</span>). At this point, the patient finds herself in four branches. At the end of each, the measurement of <span class="math inline">\(Y\)</span> is taken, and we have four potential outcomes for individual <span class="math inline">\(i\)</span>: <span class="math inline">\(Y^i_{00}\)</span>, <span class="math inline">\(Y^i_{10}\)</span>, <span class="math inline">\(Y^i_{01}\)</span>, and <span class="math inline">\(Y^i_{11}\)</span>.</p>
<p>A different patient will head down different branches based on his own values of <span class="math inline">\(L_0\)</span> and <span class="math inline">\(L_1\)</span>, and will thus end up with different potential outcomes. (Note: the values represented in the figure are intended to be average values for that particular path.)</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-msm/IPW_MSM_Ind.png" />
</div>
</div>
<div id="how-do-we-define-the-causal-effect" class="section level3">
<h3>How do we define the causal effect?</h3>
<p>With four potential outcomes rather than two, it is less obvious how to define the causal effect. We could, for example, consider three separate causal effects by comparing each of the treatment “regimes” that include at least one exposure to the intervention to the single regime that leaves the patient entirely unexposed. That is, we could be interested in (at the individual <span class="math inline">\(i\)</span> level) <span class="math inline">\(E^i_1 = Y^i_{10}-Y^i_{00}\)</span>, <span class="math inline">\(E^i_2 = Y^i_{01}-Y^i_{00}\)</span>, and <span class="math inline">\(E^i_3 = Y^i_{11}-Y^i_{00}\)</span>. This is just one possibility; the effects of interest are driven entirely by the research question.</p>
<p>When we have three or four or more intervention periods, the potential outcomes can start to pile up rapidly (we will have <span class="math inline">\(2^n\)</span> potential outcomes for a sequence of <span class="math inline">\(n\)</span> treatments.) So, the researcher might want to be judicious in deciding which contrasts to make. Maybe something like <span class="math inline">\(Y_{1111} - Y_{0000}\)</span>, <span class="math inline">\(Y_{0111} - Y_{0000}\)</span>, <span class="math inline">\(Y_{0011} - Y_{0000}\)</span>, and <span class="math inline">\(Y_{0001} - Y_{0000}\)</span> for a four-period intervention. This would allow us to consider the effect of starting (and never stopping) the intervention in each period compared to never starting the intervention at all. By doing this, though, we would be using only 5 out of the 16 potential outcomes. If the remaining 11 paths are not so rare, we might be ignoring a lot of data.</p>
</div>
<div id="the-marginal-effect" class="section level3">
<h3>The marginal effect</h3>
<p>The tree below represents an aggregate set of branches for a sample of 5000 individuals. The sample is initially characterized only by the distribution of <span class="math inline">\(L_0\)</span>. Each individual will go down her own set of four paths, which depend on the starting value of <span class="math inline">\(L_0\)</span> and how each value of <span class="math inline">\(L_1\)</span> responds in the context of each treatment arm.</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-msm/IPW_MSM_PO.png" />
</div>
<p>Each individual <span class="math inline">\(i\)</span> (at least in theory) has four potential outcomes: <span class="math inline">\(Y^i_{00}\)</span>, <span class="math inline">\(Y^i_{10}\)</span>, <span class="math inline">\(Y^i_{01}\)</span>, and <span class="math inline">\(Y^i_{11}\)</span>. Averaging across the sample provides a marginal estimate of each of these potential outcomes. For example, <span class="math inline">\(E(Y_{00})=\sum_i{Y^i_{00}}/5000\)</span>. This can be calculated from the tree as <span class="math display">\[(1742*53 + 1908*61 + 392*61 + 958*69)/5000 = 59.7\]</span> Similarly, <span class="math inline">\(E(Y_{11}) = 40.1\)</span>. The sample average causal effects are estimated using the respective averages of the potential outcomes. For example, <span class="math inline">\(E_3\)</span> at the sample level would be defined as <span class="math inline">\(E(Y_{11}) - E(Y_{00}) = 40.1 - 59.7 = -19.6\)</span>.</p>
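<p>The tree arithmetic is easy to reproduce in R; the counts and path means below are read off the figure above:</p>
<pre class="r"><code># Marginal mean of Y00: weighted average over the four observed paths
n   <- c(1742, 1908, 392, 958) # number of individuals following each path
y00 <- c(53, 61, 61, 69)       # average Y00 along each path
sum(n * y00) / sum(n)          # 59.7 (sum(n) is the full sample of 5000)</code></pre>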
</div>
<div id="back-in-the-real-world" class="section level3">
<h3>Back in the real world</h3>
<p>In reality, there are no parallel universes. Maybe we could come up with an actual randomized experiment to mimic this, but it may be difficult. More likely, we’ll have observed data that looks like this:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-msm/IPW_MSM_obs_noIPW.png" />
</div>
<p>Each individual heads down his or her own path, receiving a single treatment at each time point. Since this is not a randomized trial, the probability of treatment differs across levels of <span class="math inline">\(L_0\)</span> and <span class="math inline">\(L_1\)</span>, and <span class="math inline">\(L_0\)</span> and <span class="math inline">\(L_1\)</span> are associated with the outcome (i.e., there is confounding).</p>
</div>
<div id="estimating-the-marginal-effects" class="section level3">
<h3>Estimating the marginal effects</h3>
<p>In the previous posts in this series, I provided some insight as to how we might justify using observed data only to estimate these sample-wide average potential outcomes. The most important assumption is that when we have measured all confounders, we may be able to say, for example, <span class="math inline">\(E(Y_{01}) = E(Y | A_0 = 0 \ \& \ A_1 = 1 )\)</span>. The <em>potential outcome</em> for everyone in the sample is equal to the <em>observed</em> outcome for the subgroup who actually followed the particular path that represents that potential outcome. We will make the same assumption here.</p>
<p>At the start of this post, I argued that given the complex nature of the data generating process (in particular given that <span class="math inline">\(L_1\)</span> is both a mediator and confounder), it is challenging to get unbiased estimates of the intervention effects. One way to do this is with marginal structural models (another way is using <a href="https://academic.oup.com/aje/article/173/7/731/104142"><em>g-computation</em></a>, but I won’t talk about that here). Inverse probability weighting converts the observed tree graph from the real world to the marginal tree graph so that we can estimate sample-wide average (marginal) potential outcomes as an estimate for some population causal effects.</p>
<p>In this case, the inverse probability weight is calculated as <span class="math display">\[IPW = \frac{1}{P(A_0=a_0 | L_0=l_0) \times P(A_1=a_1 | L_0=l_0, A_0=a_0, L_1=l_1)}\]</span> In practice, we estimate both probabilities using logistic regression or some other modeling technique. But here, we can read the probabilities off the tree graph. For example, if we are interested in the weight associated with individuals observed with <span class="math inline">\(L_0=1, A_0=0, L_1=0, \textbf{and } A_1=1\)</span>, the probabilities are <span class="math display">\[P(A_0 = 0 | L_0=1) = \frac{676}{1350}=0.50\]</span> and <span class="math display">\[P(A_1=1 | L_0=1, A_0=0, L_1=0) = \frac{59}{196} = 0.30\]</span></p>
<p>So, the inverse probability weight for these individuals is <span class="math display">\[IPW = \frac{1}{0.50 \times 0.30} = 6.67\]</span> For the 59 individuals that followed this pathway, the weighted number is <span class="math inline">\(59 \times 6.67 = 393\)</span>. In the marginal world of parallel universes, there were 394 individuals.</p>
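<p>The weight calculation can be mirrored in code, using the exact path counts from the observed tree. (Note that the text rounds the two probabilities to 0.50 and 0.30 before inverting, so the exact computation below differs slightly.)</p>
<pre class="r"><code># IPW for the path L0 = 1, A0 = 0, L1 = 0, A1 = 1
pA0 <- 676 / 1350      # P(A0 = 0 | L0 = 1), about 0.50
pA1 <- 59 / 196        # P(A1 = 1 | L0 = 1, A0 = 0, L1 = 0), about 0.30
ipw <- 1 / (pA0 * pA1) # about 6.6
59 * ipw               # about 391; the rounded weight of 6.67 gives the 393 in the text</code></pre>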
</div>
<div id="simulating-data-from-an-msm" class="section level3">
<h3>Simulating data from an MSM</h3>
<p>Before I jump into the simulation, I do want to reference a paper by <a href="http://onlinelibrary.wiley.com/doi/10.1002/sim.5472/full">Havercroft and Didelez</a> that describes in great detail how to generate data from an MSM with time-dependent confounding. It turns out that the data can’t be generated exactly using the initial DAG (presented above), but rather needs to come from something like this:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-msm/MSM_DAG_dataGen.png" />
</div>
<p>where <span class="math inline">\(U\)</span> is an unmeasured, maybe latent, covariate. The observed data (that ignores <span class="math inline">\(U\)</span>) will indeed have a DAG that looks like the one that we started with.</p>
<p>When doing simulations with potential outcomes, we can generate all the potential outcomes for each individual using a parallel universe approach. The observed data (treatment choices and observed outcomes) are generated separately. The advantage of this is that we can confirm the <em>true</em> causal effects because we have actually generated potential outcomes. The disadvantage is that the code is considerably more complicated and the quantity of data generated grows. The situation is not so bad with just two treatment periods, but the size of the data increases exponentially with the number of treatments: as I mentioned earlier, there will be <span class="math inline">\(2^n\)</span> potential outcomes for each individual.</p>
<p>Alternatively, we can just generate the observed data directly. Since we know the true causal parameters, we actually “know” the causal effects and can compare our estimates.</p>
<p>I will go through the convoluted approach because I think it clarifies (at least a bit) what is going on. As an addendum, I will show how all of this could be done in a few lines of code if we take the second approach …</p>
<pre class="r"><code>library(broom)
library(simstudy)

# define U, e and L0

defA0 <- defData(varname = "U", formula = "0;1", dist = "uniform")
defA0 <- defData(defA0, varname = "e", formula = 0, 
                 variance = 4, dist = "normal")
defA0 <- defData(defA0, varname = "L0", formula = "-2.66+ 3*U", 
                 dist = "binary", link = "logit")

# generate the data

set.seed(1234)
dtA0 <- genData(n = 50000, defA0)
dtA0[1:6]</code></pre>
<pre><code>## id U e L0
## 1: 1 0.1137034 -3.5951796 0
## 2: 2 0.6222994 -0.5389197 0
## 3: 3 0.6092747 1.0675660 0
## 4: 4 0.6233794 -0.7226909 1
## 5: 5 0.8609154 0.8280401 0
## 6: 6 0.6403106 3.3532399 0</code></pre>
<p>Now we need to create the two parallel universes by assigning each individual to both treatments. <code>simstudy</code> has a function <code>addPeriods</code> that generates longitudinal data. I am not actually generating longitudinal data here, but I can use the function to create 2-period data and rename the “period” field “A0”.</p>
<pre class="r"><code>dtA0 <- addPeriods(dtA0, 2)
setnames(dtA0, "period", "A0")
dtA0[1:6]</code></pre>
<pre><code>## id A0 U e L0 timeID
## 1: 1 0 0.1137034 -3.5951796 0 1
## 2: 1 1 0.1137034 -3.5951796 0 2
## 3: 2 0 0.6222994 -0.5389197 0 3
## 4: 2 1 0.6222994 -0.5389197 0 4
## 5: 3 0 0.6092747 1.0675660 0 5
## 6: 3 1 0.6092747 1.0675660 0 6</code></pre>
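<p>The duplication that <code>addPeriods</code> performs here amounts to a simple cross join of each individual with the two treatment levels. A minimal Python sketch of the same idea, using toy records:</p>

```python
# Each individual's baseline record is duplicated, once per value of A0 --
# the two "parallel universes". The records below are illustrative only.
baseline = [
    {"id": 1, "U": 0.114, "L0": 0},
    {"id": 2, "U": 0.622, "L0": 0},
]

# cross join: one copy of each record for A0 = 0 and one for A0 = 1
long = [dict(rec, A0=a0) for rec in baseline for a0 in (0, 1)]
```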
<p>Now we are ready to generate <span class="math inline">\(L_1\)</span>. The probability that <span class="math inline">\(L_1 = 1\)</span> is lower when <span class="math inline">\(A_0 = 1\)</span>, so an individual may have different values of <span class="math inline">\(L_1\)</span> along the two alternative paths.</p>
<pre class="r"><code># generate L1 as a function of U, L0, and A0

addA0 <- defDataAdd(varname = "L1", 
                    formula = "-1.2 + 3*U + 0.2*L0 - 2.5*A0", 
                    dist = "binary", link = "logit")

dtA0 <- addColumns(addA0, dtOld = dtA0)
dtA0[1:6]</code></pre>
<pre><code>## id A0 U e L0 timeID L1
## 1: 1 0 0.1137034 -3.5951796 0 1 0
## 2: 1 1 0.1137034 -3.5951796 0 2 0
## 3: 2 0 0.6222994 -0.5389197 0 3 1
## 4: 2 1 0.6222994 -0.5389197 0 4 0
## 5: 3 0 0.6092747 1.0675660 0 5 0
## 6: 3 1 0.6092747 1.0675660 0 6 0</code></pre>
<pre class="r"><code># L1 is clearly a function of A0
dtA0[, .(prob_L1 = mean(L1)), keyby = .(L0,A0)]</code></pre>
<pre><code>## L0 A0 prob_L1
## 1: 0 0 0.5238369
## 2: 0 1 0.1080039
## 3: 1 0 0.7053957
## 4: 1 1 0.2078551</code></pre>
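<p>The same qualitative pattern - a lower probability of <span class="math inline">\(L_1 = 1\)</span> when <span class="math inline">\(A_0 = 1\)</span> - can be reproduced with a quick Monte Carlo sketch outside of <code>simstudy</code> (Python, same model coefficients as above):</p>

```python
import math
import random

def expit(x):
    return 1 / (1 + math.exp(-x))

random.seed(1)
n = 50_000
l1_count = {0: 0, 1: 0}  # running totals of L1 = 1, by potential treatment A0

for _ in range(n):
    U = random.random()                               # U ~ Uniform(0, 1)
    L0 = int(random.random() < expit(-2.66 + 3 * U))  # baseline confounder
    for A0 in (0, 1):                                 # both parallel universes
        p = expit(-1.2 + 3 * U + 0.2 * L0 - 2.5 * A0)
        l1_count[A0] += int(random.random() < p)

p_l1_a0 = l1_count[0] / n   # P(L1 = 1) when A0 = 0
p_l1_a1 = l1_count[1] / n   # P(L1 = 1) when A0 = 1 -- substantially lower
```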
<p>Now we create two additional parallel universes for treatment <span class="math inline">\(A_1\)</span> and the potential outcomes. This will result in four records per individual:</p>
<pre class="r"><code>dtA1 <- addPeriods(dtA0, 2)
setnames(dtA1, "period", "A1")

addA1 <- defDataAdd(varname = "Y_PO", 
                    formula = "39.95 + U*40 - A0 * 8 - A1 * 12 + e", 
                    dist = "nonrandom")

dtA1 <- addColumns(addA1, dtA1)
dtA1[1:8]</code></pre>
<pre><code>## id A1 A0 U e L0 timeID L1 Y_PO
## 1: 1 0 0 0.1137034 -3.5951796 0 1 0 40.90296
## 2: 1 0 1 0.1137034 -3.5951796 0 2 0 32.90296
## 3: 1 1 0 0.1137034 -3.5951796 0 3 0 28.90296
## 4: 1 1 1 0.1137034 -3.5951796 0 4 0 20.90296
## 5: 2 0 0 0.6222994 -0.5389197 0 5 1 64.30306
## 6: 2 0 1 0.6222994 -0.5389197 0 6 0 56.30306
## 7: 2 1 0 0.6222994 -0.5389197 0 7 1 52.30306
## 8: 2 1 1 0.6222994 -0.5389197 0 8 0 44.30306</code></pre>
<p>Not surprisingly, the estimates for the causal effects mirror the parameters we used to generate the <span class="math inline">\(Y\)</span>’s above …</p>
<pre class="r"><code># estimate for Y_00 is close to what we estimated from the tree
Y_00 <- dtA1[A0 == 0 & A1 == 0, mean(Y_PO)]
Y_00</code></pre>
<pre><code>## [1] 59.96619</code></pre>
<pre class="r"><code>Y_10 <- dtA1[A0 == 1 & A1 == 0, mean(Y_PO)]
Y_01 <- dtA1[A0 == 0 & A1 == 1, mean(Y_PO)]
Y_11 <- dtA1[A0 == 1 & A1 == 1, mean(Y_PO)]
# estimate 3 causal effects
c(Y_10 - Y_00, Y_01 - Y_00, Y_11 - Y_00)</code></pre>
<pre><code>## [1] -8 -12 -20</code></pre>
<p>Now that we’ve generated the four parallel universes with four potential outcomes per individual, we will generate an observed treatment sequence and measurements of the <span class="math inline">\(L\)</span>’s and <span class="math inline">\(Y\)</span> for each individual. The observed data set will have a single record for each individual:</p>
<pre class="r"><code>dt <- dtA1[A0 == 0 & A1 == 0, .(id, L0)]
dt</code></pre>
<pre><code>## id L0
## 1: 1 0
## 2: 2 0
## 3: 3 0
## 4: 4 1
## 5: 5 0
## ---
## 49996: 49996 1
## 49997: 49997 0
## 49998: 49998 1
## 49999: 49999 0
## 50000: 50000 1</code></pre>
<p><span class="math inline">\(A_0\)</span> is a function of <span class="math inline">\(L_0\)</span>:</p>
<pre class="r"><code>dtAdd <- defDataAdd(varname = "A0", 
                    formula = "0.3 + L0 * 0.2", dist = "binary")

dt <- addColumns(dtAdd, dt)
dt[, mean(A0), keyby = L0]</code></pre>
<pre><code>## L0 V1
## 1: 0 0.3015964
## 2: 1 0.4994783</code></pre>
<p>Now, we need to pull the appropriate value of <span class="math inline">\(L_1\)</span> from the original data set that includes both possible values for each individual. The value that gets pulled will be based on the observed value of <span class="math inline">\(A_0\)</span>:</p>
<pre class="r"><code>setkeyv(dt, c("id", "A0"))
setkeyv(dtA1, c("id", "A0"))
dt <- merge(dt, dtA1[, .(id, A0, L1, A1) ], by = c("id", "A0"))
dt <- dt[A1 == 0, .(id, L0, A0, L1)]
dt</code></pre>
<pre><code>## id L0 A0 L1
## 1: 1 0 1 0
## 2: 2 0 1 0
## 3: 3 0 0 0
## 4: 4 1 1 1
## 5: 5 0 0 1
## ---
## 49996: 49996 1 1 0
## 49997: 49997 0 1 0
## 49998: 49998 1 1 0
## 49999: 49999 0 0 1
## 50000: 50000 1 0 0</code></pre>
<p>Finally, we generate <span class="math inline">\(A_1\)</span> based on the observed values of <span class="math inline">\(A_0\)</span> and <span class="math inline">\(L_1\)</span>, and select the appropriate value of <span class="math inline">\(Y\)</span>:</p>
<pre class="r"><code>dtAdd <- defDataAdd(varname = "A1", 
                    formula = "0.3 + L1 * 0.2 + A0 * .2", dist = "binary")

dt <- addColumns(dtAdd, dt)

# merge to get potential outcome that matches actual path

setkey(dt, id, L0, A0, L1, A1)
setkey(dtA1, id, L0, A0, L1, A1)

dtObs <- merge(dt, dtA1[, .(id, L0, A0, L1, A1, Y = Y_PO)])
dtObs</code></pre>
<pre><code>## id L0 A0 L1 A1 Y
## 1: 1 0 1 0 0 32.90296
## 2: 2 0 1 0 1 44.30306
## 3: 3 0 0 0 1 53.38856
## 4: 4 1 1 1 1 44.16249
## 5: 5 0 0 1 0 75.21466
## ---
## 49996: 49996 1 1 0 0 74.09161
## 49997: 49997 0 1 0 0 50.26162
## 49998: 49998 1 1 0 0 73.29376
## 49999: 49999 0 0 1 0 52.96703
## 50000: 50000 1 0 0 0 57.13109</code></pre>
<p>If we do a crude estimate of the causal effects using the unadjusted observed data, we know we are going to get biased estimates (remember the true causal effects are -8, -12, and -20):</p>
<pre class="r"><code>Y_00 <- dtObs[A0 == 0 & A1 == 0, mean(Y)]
Y_10 <- dtObs[A0 == 1 & A1 == 0, mean(Y)]
Y_01 <- dtObs[A0 == 0 & A1 == 1, mean(Y)]
Y_11 <- dtObs[A0 == 1 & A1 == 1, mean(Y)]
c(Y_10 - Y_00, Y_01 - Y_00, Y_11 - Y_00)</code></pre>
<pre><code>## [1] -6.272132 -10.091513 -17.208856</code></pre>
<p>This biased result is confirmed using an unadjusted regression model:</p>
<pre class="r"><code>lmfit <- lm(Y ~ A0 + A1, data = dtObs)
tidy(lmfit)</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 58.774695 0.07805319 753.00828 0
## 2 A0 -6.681213 0.10968055 -60.91520 0
## 3 A1 -10.397080 0.10544448 -98.60241 0</code></pre>
<p>Now, shouldn’t we do better if we adjust for the confounders? I don’t think so - the parameter estimate for <span class="math inline">\(A_0\)</span> should be close to <span class="math inline">\(-8\)</span> and the estimate for <span class="math inline">\(A_1\)</span> should be approximately <span class="math inline">\(-12\)</span>, but this is not the case, at least not for both of the estimates:</p>
<pre class="r"><code>lmfit <- lm(Y ~ L0 + L1 + A0 + A1, data = dtObs)
tidy(lmfit)</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 53.250244 0.08782653 606.31157 0
## 2 L0 7.659460 0.10798594 70.93016 0
## 3 L1 8.203983 0.10644683 77.07119 0
## 4 A0 -4.369547 0.11096204 -39.37875 0
## 5 A1 -12.037274 0.09592735 -125.48323 0</code></pre>
<p>Maybe if we just adjust for <span class="math inline">\(L_0\)</span> or <span class="math inline">\(L_1\)</span>?</p>
<pre class="r"><code>lmfit <- lm(Y ~ L1 + A0 + A1, data = dtObs)
tidy(lmfit)</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 54.247394 0.09095074 596.44808 0.000000e+00
## 2 L1 9.252919 0.11059038 83.66839 0.000000e+00
## 3 A0 -2.633981 0.11354466 -23.19775 2.031018e-118
## 4 A1 -12.016545 0.10063687 -119.40499 0.000000e+00</code></pre>
<pre class="r"><code>lmfit <- lm(Y ~ L0 + A0 + A1, data = dtObs)
tidy(lmfit)</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 57.036320 0.07700591 740.67459 0
## 2 L0 8.815691 0.11311215 77.93761 0
## 3 A0 -8.150706 0.10527255 -77.42480 0
## 4 A1 -10.632238 0.09961593 -106.73231 0</code></pre>
<p>So, none of these approaches works. This is where IPW can provide a solution. First we fit the treatment/exposure models, then we compute the inverse probability weights, and finally we use weighted regression or just estimate weighted average outcomes directly (we’d have to bootstrap here if we want standard errors for the simple average approach):</p>
<pre class="r"><code># estimate P(A0|L0) and P(A1|L0, A0, L1)

fitA0 <- glm(A0 ~ L0, data = dtObs, family = binomial)
fitA1 <- glm(A1 ~ L0 + A0 + L1, data = dtObs, family = binomial)

dtObs[, predA0 := predict(fitA0, type = "response")]
dtObs[, predA1 := predict(fitA1, type = "response")]

# function to convert propensity scores to IPW

getWeight <- function(predA0, actA0, predA1, actA1) {
  predActA0 <- actA0 * predA0 + (1 - actA0) * (1 - predA0)
  predActA1 <- actA1 * predA1 + (1 - actA1) * (1 - predA1)
  
  p <- predActA0 * predActA1
  return(1/p)
}

dtObs[, wgt := getWeight(predA0, A0, predA1, A1)]

# fit weighted model

lmfit <- lm(Y ~ A0 + A1, weights = wgt, data = dtObs)
tidy(lmfit)</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 59.982379 0.09059652 662.08257 0
## 2 A0 -7.986486 0.10464257 -76.32157 0
## 3 A1 -12.051805 0.10464258 -115.17114 0</code></pre>
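<p>For clarity, here is the weight function rewritten in Python and applied to a single hypothetical individual (the fitted probabilities 0.30 and 0.40 are made up for illustration):</p>

```python
def get_weight(pred_a0, act_a0, pred_a1, act_a1):
    # probability of the exposure actually received at each stage
    p_act_a0 = act_a0 * pred_a0 + (1 - act_a0) * (1 - pred_a0)
    p_act_a1 = act_a1 * pred_a1 + (1 - act_a1) * (1 - pred_a1)
    return 1 / (p_act_a0 * p_act_a1)

# someone treated in period 0 (fitted P(A0=1) = 0.30) and untreated in
# period 1 (fitted P(A1=1) = 0.40) gets weight 1 / (0.30 * 0.60)
w = get_weight(0.30, 1, 0.40, 0)
```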
<pre class="r"><code># non-parametric estimation
Y_00 <- dtObs[A0 == 0 & A1 == 0, weighted.mean(Y, wgt)]
Y_10 <- dtObs[A0 == 1 & A1 == 0, weighted.mean(Y, wgt)]
Y_01 <- dtObs[A0 == 0 & A1 == 1, weighted.mean(Y, wgt)]
Y_11 <- dtObs[A0 == 1 & A1 == 1, weighted.mean(Y, wgt)]
round(c(Y_10 - Y_00, Y_01 - Y_00, Y_11 - Y_00), 2)</code></pre>
<pre><code>## [1] -8.04 -12.10 -20.04</code></pre>
</div>
<div id="addendum" class="section level2">
<h2>Addendum</h2>
<p>This post has been quite long, so I probably shouldn’t go on. But I wanted to show that we can generate the data in a much less convoluted way, one that avoids generating every possible forking path for each individual. As always in <code>simstudy</code>, the data generation process starts with a data definition table; in this example, I’ve created that table in an external file named <code>msmDef.csv</code>. In the end, this simpler approach reduces the necessary code by about 95%.</p>
<pre class="r"><code>defMSM <- defRead("msmDef.csv")
defMSM</code></pre>
<pre><code>## varname formula variance dist link
## 1: U 0;1 0 uniform identity
## 2: e 0 9 normal identity
## 3: L0 -2.66+ 3*U 0 binary logit
## 4: A0 0.3 + L0 * 0.2 0 binary identity
## 5: L1 -1.2 + 3*U + 0.2*L0 - 2.5*A0 0 binary logit
## 6: A1 0.3 + L1*0.2 + A0*0.2 0 binary identity
## 7: Y 39.95 + U*40 - A0*8 - A1*12 + e 0 nonrandom identity</code></pre>
<pre class="r"><code>dt <- genData(50000, defMSM)
fitA0 <- glm(A0 ~ L0, data = dt, family=binomial)
fitA1 <- glm(A1 ~ L0 + A0 + L1, data = dt, family=binomial)
dt[, predA0 := predict(fitA0, type = "response")]
dt[, predA1 := predict(fitA1, type = "response")]
dt[, wgt := getWeight(predA0, A0, predA1, A1)]
tidy(lm(Y ~ A0 + A1, weights = wgt, data = dt))</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 60.061609 0.09284532 646.89967 0
## 2 A0 -7.931715 0.10715916 -74.01808 0
## 3 A1 -12.131829 0.10715900 -113.21335 0</code></pre>
<div id="does-the-msm-still-work-with-more-complicated-effects" class="section level3">
<h3>Does the MSM still work with more complicated effects?</h3>
<p>To wrap up, I wanted to show that MSMs still function well even when the causal effects do not follow a simple additive pattern. (And I wanted to be able to end with a figure.) I generated 10,000 datasets of 900 observations each, and calculated the crude and IPW-based marginal causal effect estimates for each one. The true treatment effects include an “interaction” between <span class="math inline">\(A_0\)</span> and <span class="math inline">\(A_1\)</span>: if treatment is received in <em>both</em> periods (i.e. <span class="math inline">\(A_0=1\)</span> and <span class="math inline">\(A_1=1\)</span>), there is an extra additive effect:</p>
<p><span class="math display">\[ Y = 39.95 + 40U - 8A_0 - 12A_1 - 3A_0 A_1 + e\]</span></p>
<p>The purple densities show the (biased) crude estimates and the green densities the (unbiased) IPW-based estimates. With the interaction, the true causal effects are -8, -12, and -23:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-msm/densities.png" />
</div>
</div>
</div>
When you use inverse probability weighting for estimation, what are the weights actually doing?
https://www.rdatagen.net/post/inverse-probability-weighting-when-the-outcome-is-binary/
Mon, 04 Dec 2017 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/inverse-probability-weighting-when-the-outcome-is-binary/<p>Towards the end of <a href="https://www.rdatagen.net/post/potential-outcomes-confounding/">Part 1</a> of this short series on confounding, IPW, and (hopefully) marginal structural models, I talked a little bit about the fact that inverse probability weighting (IPW) can provide unbiased estimates of marginal causal effects in the context of confounding just as more traditional regression models like OLS can. I used an example based on a normally distributed outcome. Now, that example wasn’t super interesting, because in the case of a linear model with homogeneous treatment effects (i.e. no interaction), the marginal causal effect is the same as the conditional effect (that is, conditional on the confounders.) There was no real reason to use IPW in that example - I just wanted to illustrate that the estimates looked reasonable.</p>
<p>But in many cases, the conditional effect <em>is</em> different from the marginal effect. (And in other cases, there may not even be an obvious way to estimate the conditional effect - that will be the topic of the last post in this series.) When the outcome is binary, conditional effects are no longer equal to marginal effects. (I’ve touched on this <a href="https://www.rdatagen.net/post/marginal-v-conditional/">before</a>.) What this means is that we can recover the true conditional effects using logistic regression, but we cannot estimate the marginal effect this way. This is directly related to the fact that logistic regression is linear on the logit (or log-odds) scale, not on the probability scale. The issue here is collapsibility - or rather, non-collapsibility.</p>
<div id="a-simulation" class="section level3">
<h3>A simulation</h3>
<p>Because binary outcomes are less amenable to visual illustration, I am going to stick with model estimation to see how this plays out:</p>
<pre class="r"><code>library(simstudy)

# define the data

defB <- defData(varname = "L", formula = 0.27, 
                dist = "binary")
defB <- defData(defB, varname = "Y0", formula = "-2.5 + 1.75*L", 
                dist = "binary", link = "logit")
defB <- defData(defB, varname = "Y1", formula = "-1.5 + 1.75*L", 
                dist = "binary", link = "logit")
defB <- defData(defB, varname = "A", formula = "0.315 + 0.352 * L", 
                dist = "binary")
defB <- defData(defB, varname = "Y", formula = "Y0 + A * (Y1 - Y0)", 
                dist = "nonrandom")

# generate the data

set.seed(2002)
dtB <- genData(200000, defB)
dtB[1:6]</code></pre>
<pre><code>## id L Y0 Y1 A Y
## 1: 1 0 0 0 0 0
## 2: 2 0 0 0 0 0
## 3: 3 1 0 1 1 1
## 4: 4 0 1 1 1 1
## 5: 5 1 0 0 1 0
## 6: 6 1 0 0 0 0</code></pre>
<p>We can look directly at the potential outcomes to see the true causal effect, measured as a log odds ratio (LOR):</p>
<pre class="r"><code>odds <- function(p) {
  return(p / (1 - p))
}

# log odds ratio for entire sample (marginal LOR)
dtB[, log( odds( mean(Y1) ) / odds( mean(Y0) ) )]</code></pre>
<pre><code>## [1] 0.8651611</code></pre>
<p>In the linear regression context, the conditional effect measured using observed data from the exposed and unexposed subgroups was in fact a good estimate of the marginal effect in the population. That is not the case here: the conditional causal effect (LOR) of <span class="math inline">\(A\)</span> is close to 1.0, which is greater than the true marginal effect of 0.86:</p>
<pre class="r"><code>library(broom)
tidy(glm(Y ~ A + L , data = dtB, family="binomial")) </code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) -2.4895846 0.01053398 -236.33836 0
## 2 A 0.9947154 0.01268904 78.39167 0
## 3 L 1.7411358 0.01249180 139.38225 0</code></pre>
<p>This regression estimate for the coefficient of <span class="math inline">\(A\)</span> <em>is</em> a good estimate of the conditional effect in the population (based on the potential outcomes at each level of <span class="math inline">\(L\)</span>):</p>
<pre class="r"><code>dtB[, .(LOR = log( odds( mean(Y1) ) / odds( mean(Y0) ) ) ), keyby = L]</code></pre>
<pre><code>## L LOR
## 1: 0 0.9842565
## 2: 1 0.9865561</code></pre>
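<p>The exact values behind both estimates can be worked out in closed form. Since <span class="math inline">\(odds(expit(x)) = e^x\)</span>, the true conditional LOR is exactly 1.0 at each level of <span class="math inline">\(L\)</span> (the difference of the two intercepts), while averaging probabilities over <span class="math inline">\(L\)</span> <em>before</em> taking the odds gives a smaller marginal LOR. A quick Python check (illustrative, outside the R workflow):</p>

```python
import math

def expit(x):
    return 1 / (1 + math.exp(-x))

def odds(p):
    return p / (1 - p)

# Within each level of L, odds(expit(x)) = exp(x), so the conditional log
# odds ratio is (-1.5 + 1.75*L) - (-2.5 + 1.75*L) = 1.0 exactly ...
lor_cond = math.log(odds(expit(-1.5)) / odds(expit(-2.5)))

# ... while the marginal LOR averages the potential-outcome probabilities
# over P(L=1) = 0.27 before forming the odds, and lands below 1.0:
p0 = 0.73 * expit(-2.5) + 0.27 * expit(-0.75)  # P(Y0 = 1)
p1 = 0.73 * expit(-1.5) + 0.27 * expit(0.25)   # P(Y1 = 1)
lor_marg = math.log(odds(p1) / odds(p0))       # roughly 0.88
```

The gap between the exact 1.0 and exact 0.88 is non-collapsibility at work, not confounding; the simulated estimates (0.98 and 0.865) differ from these only by sampling error.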
<p>Of course, ignoring the confounder <span class="math inline">\(L\)</span> is not very useful if we are interested in recovering the marginal effect. The estimate of 1.4 is biased for <em>both</em> the conditional effect <em>and</em> the marginal effect - it is not really useful for anything:</p>
<pre class="r"><code>tidy(glm(Y ~ A , data = dtB, family="binomial"))</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) -2.049994 0.009164085 -223.6987 0
## 2 A 1.433094 0.011723767 122.2384 0</code></pre>
</div>
<div id="how-weighting-reshapes-the-data" class="section level3">
<h3>How weighting reshapes the data …</h3>
<p>Here is a simple tree graph that shows the potential outcomes for 1000 individuals (based on the same distributions we’ve been using in our simulation). For 27% of the individuals, <span class="math inline">\(L=1\)</span>; for the other 73%, <span class="math inline">\(L=0\)</span>. Each individual has a potential outcome under each level of treatment <span class="math inline">\(A\)</span>. That is why the 730 individuals with <span class="math inline">\(L=0\)</span> appear in both the treated and untreated arms. Likewise, each treatment arm for those with <span class="math inline">\(L=1\)</span> contains 270 individuals. We are not double counting.</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-ipw/PO_flow_large.png" />
</div>
<p>Both the marginal and conditional estimates that we estimated before using the simulated data can be calculated directly using information from this tree. The conditional effects on the log-odds scale can be calculated as …</p>
<p><span class="math display">\[LOR_{A=1 \textbf{ vs } A=0|L = 0} = log \left (\frac{0.182/0.818}{0.076/0.924} \right)=log(2.705) = 0.995\]</span></p>
<p>and</p>
<p><span class="math display">\[LOR_{A=1 \textbf{ vs } A=0|L = 1} = log \left (\frac{0.562/0.438}{0.324/0.676} \right)=log(2.677) = 0.984\]</span></p>
<p>The marginal effect on the log odds scale is estimated from the marginal probabilities <span class="math inline">\(P(Y=1|A=0)\)</span> and <span class="math inline">\(P(Y=1|A=1)\)</span>. Again, we can take these right from the tree …</p>
<p><span class="math display">\[P(Y=1|A=0) = 0.73\times0.076 + 0.27\times0.324 = 0.143\]</span> and</p>
<p><span class="math display">\[P(Y=1|A=1) = 0.73\times0.182 + 0.27\times0.562 = 0.285\]</span></p>
<p>Based on these average outcomes (probabilities) by exposure, the marginal log-odds for the sample is:</p>
<p><span class="math display">\[LOR_{A=1 \textbf{ vs } A=0} = log \left (\frac{0.285/0.715}{0.143/0.857} \right)=log(2.389) = 0.871\]</span></p>
<p>Back in the real world of observed data, this is what the tree diagram looks like:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-ipw/Obs_flow_large.png" />
</div>
<p>This tree tells us that the probability of exposure <span class="math inline">\(A=1\)</span> differs depending on the value of <span class="math inline">\(L\)</span>. For <span class="math inline">\(L=0\)</span>, <span class="math inline">\(P(A=1) = 230/730 = 0.315\)</span> and for <span class="math inline">\(L=1\)</span>, <span class="math inline">\(P(A=1) = 180/270 = 0.667\)</span>. Because of this disparity, the crude estimate of the effect (ignoring <span class="math inline">\(L\)</span>) is biased for the marginal causal effect:</p>
<p><span class="math display">\[P(Y=1|A=0) = \frac{500\times0.076 + 90\times0.324}{500+90}=0.114\]</span></p>
<p>and</p>
<p><span class="math display">\[P(Y=1|A=1) = \frac{230\times0.182 + 180\times0.562}{230+180}=0.349\]</span></p>
<p>The crude log odds ratio is</p>
<p><span class="math display">\[LOR_{A=1 \textbf{ vs } A=0} = log \left (\frac{0.349/0.651}{0.114/0.886} \right)=log(4.170) = 1.428\]</span></p>
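<p>This crude contrast can be recomputed directly from the tree counts (a quick Python check, with the probabilities read off the figure):</p>

```python
import math

# crude contrast from the observed-world tree: outcomes pooled over L, using
# the unbalanced arm sizes (590 untreated, 410 treated)
p0 = (500 * 0.076 + 90 * 0.324) / (500 + 90)    # about 0.114
p1 = (230 * 0.182 + 180 * 0.562) / (230 + 180)  # about 0.349

# about 1.43, in line with the unadjusted regression estimate of 1.433
lor_crude = math.log((p1 / (1 - p1)) / (p0 / (1 - p0)))
```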
<p>And now we finally get to the weights. As mentioned in the prior post, the IPW is based on the probability of the actual exposure at each level of <span class="math inline">\(L\)</span>: <span class="math inline">\(P(A=a | L)\)</span>, where <span class="math inline">\(a\in(0,1)\)</span> (and not on <span class="math inline">\(P(A=1|L)\)</span>, the propensity score). Here are the simple weights for each group:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-ipw/Weights.png" />
</div>
<p>If we apply the weights to each of the respective groups, you can see that the number of individuals in each treatment arm is adjusted to the total number of individuals in the sub-group defined by the level of <span class="math inline">\(L\)</span>. For example, if we apply the weight of 3.17 (730/230) to each person observed with <span class="math inline">\(L=0\)</span> and <span class="math inline">\(A=1\)</span>, we end up with <span class="math inline">\(230\times3.17=730\)</span>. Applying each of the respective weights to the subgroups defined by <span class="math inline">\(L\)</span> and <span class="math inline">\(A\)</span> results in a new sample of individuals that looks exactly like the one we started out with in the potential outcomes world:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-ipw/PO_flow_large.png" />
</div>
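<p>The bookkeeping in the two trees can be verified in a few lines. This Python sketch applies the cell-level weights (sub-group total divided by cell count) to the observed counts and recovers the potential-outcomes tree:</p>

```python
# observed cell counts from the tree, keyed by (L, A)
observed = {(0, 0): 500, (0, 1): 230, (1, 0): 90, (1, 1): 180}
totals = {0: 730, 1: 270}  # sub-group size at each level of L

# the IPW for each cell is total_L / n_cell; reweighting restores each
# treatment arm to the full sub-group size, as in the potential-outcomes tree
weights = {cell: totals[cell[0]] / n for cell, n in observed.items()}
reweighted = {cell: n * weights[cell] for cell, n in observed.items()}
```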
<p>This all works only if we make these two assumptions: <span class="math display">\[P(Y=1|A=0, L=l) = P(Y_0=1 | A=1, L=l)\]</span> and <span class="math display">\[P(Y=1|A=1, L=l) = P(Y_1=1 | A=0, L=l)\]</span></p>
<p>That is, we can make this claim <em>only under the assumption of no unmeasured confounding</em>. (This was discussed in the <a href="https://www.rdatagen.net/post/potential-outcomes-confounding/">Part 1</a> post.)</p>
</div>
<div id="applying-ipw-to-our-data" class="section level3">
<h3>Applying IPW to our data</h3>
<p>We need to estimate the weights using logistic regression (though other, more flexible methods can also be used). First, we estimate <span class="math inline">\(P(A=1|L)\)</span> …</p>
<pre class="r"><code>exposureModel <- glm(A ~ L, data = dtB, family = "binomial")
dtB[, pA := predict(exposureModel, type = "response")]</code></pre>
<p>Now we can derive an estimate for <span class="math inline">\(P(A=a|L=l)\)</span> and get the weight itself…</p>
<pre class="r"><code># Define two new columns

defB2 <- defDataAdd(varname = "pA_actual", 
                    formula = "(A * pA) + ((1 - A) * (1 - pA))", 
                    dist = "nonrandom")
defB2 <- defDataAdd(defB2, varname = "IPW", 
                    formula = "1/pA_actual", 
                    dist = "nonrandom")

# Add weights

dtB <- addColumns(defB2, dtB)
dtB[1:6]</code></pre>
<pre><code>## id L Y0 Y1 A Y pA pA_actual IPW
## 1: 1 0 0 0 0 0 0.3146009 0.6853991 1.459004
## 2: 2 0 0 0 0 0 0.3146009 0.6853991 1.459004
## 3: 3 1 0 1 1 1 0.6682351 0.6682351 1.496479
## 4: 4 0 1 1 1 1 0.3146009 0.3146009 3.178631
## 5: 5 1 0 0 1 0 0.6682351 0.6682351 1.496479
## 6: 6 1 0 0 0 0 0.6682351 0.3317649 3.014183</code></pre>
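<p>As a check on the table above, the <code>pA_actual</code> and <code>IPW</code> columns can be reproduced directly from <code>A</code> and <code>pA</code> (Python, purely for verification):</p>

```python
# (A, pA) pairs copied from rows 1, 3, 4, and 6 of the printed data
rows = [(0, 0.3146009), (1, 0.6682351), (1, 0.3146009), (0, 0.6682351)]

results = []
for A, pA in rows:
    pA_actual = A * pA + (1 - A) * (1 - pA)  # P(exposure actually received)
    results.append((pA_actual, 1 / pA_actual))
```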
<p>To estimate the marginal effect on the log-odds scale, we use the function <code>glm</code> with weights specified by IPW. The true value of marginal effect (based on the population-wide potential outcomes) was 0.87 (as we estimated from the potential outcomes directly and from the tree graph). Our estimate here is spot on (but with such a large sample size, this is not so surprising):</p>
<pre class="r"><code>tidy(glm(Y ~ A , data = dtB, family="binomial", weights = IPW)) </code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) -1.7879512 0.006381189 -280.1909 0
## 2 A 0.8743154 0.008074115 108.2862 0</code></pre>
<p>It may not seem like a big deal to be able to estimate the marginal effect - we may actually not be interested in it. However, in the next post, I will touch on the issue of estimating causal effects when there are repeated exposures (for example, administering a drug over time) and time-dependent confounders that are affected by prior exposures and, in turn, affect both future exposures <em>and</em> the outcome. Under this scenario, it is very difficult, if not impossible, to control for these confounders - the best we might be able to do is estimate a marginal, population-wide causal effect. That is where weighting will be really useful.</p>
</div>
Characterizing the variance for clustered data that are Gamma distributed
https://www.rdatagen.net/post/icc-for-gamma-distribution/
Mon, 27 Nov 2017 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/icc-for-gamma-distribution/<p>Way back when I was studying algebra and wrestling with one word problem after another (I think now they call them story problems), I complained to my father. He laughed and told me to get used to it. “Life is one big word problem,” is how he put it. Well, maybe one could say any statistical analysis is really just some form of multilevel data analysis, whether we treat it that way or not.</p>
<p>A key feature of the multilevel model is the ability to explicitly untangle the variation that occurs at different levels: variation of individuals within a sub-group, variation across sub-groups, variation across groups of sub-groups, and so on. The intra-class correlation coefficient (ICC) is one summary statistic that attempts to characterize the <em>relative</em> variability across the different levels.</p>
<p>The amount of clustering as measured by the ICC has implications for study design, because it communicates how much information is available at different levels of the hierarchy. We may have thousands of individuals that fall into ten or twenty clusters, and think we have a lot of information. But if most of the variation is at the cluster/group level (and not across individuals within a cluster), we don’t have thousands of observations, but more like ten or twenty. This has important implications for our measures of uncertainty.</p>
<p>Recently, a researcher was trying to use <code>simstudy</code> to generate cost and quality-of-life measurements to simulate clustered data for a cost-effectiveness analysis. (They wanted the cost and quality measurements to correlate within individuals, but I am going to ignore that aspect here.) Cost data are typically <em>right skewed</em> with most values falling on the lower end, but with some extremely high values on the upper end. (These dollar values cannot be negative.)</p>
<p>Because of this characteristic shape, cost data are often modeled using a Gamma distribution. The challenge here was that in simulating the data, the researcher wanted to control the group level variation relative to the individual-level variation. If the data were normally distributed, it would be natural to talk about that control in terms of the ICC. But, with the Gamma distribution, it is not as obvious how to partition the variation.</p>
<p>As most of my posts do, this one provides simulation and plots to illuminate some of these issues.</p>
<div id="gamma-distribtution" class="section level3">
<h3>Gamma distribution</h3>
<p>The Gamma distribution is a continuous probability distribution that includes all non-negative numbers. The probability density function is typically written as a function of two parameters - the shape <span class="math inline">\(\alpha\)</span> and the rate <span class="math inline">\(\beta\)</span>:</p>
<p><span class="math display">\[f(x) = \frac{\beta ^ \alpha}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\beta x},\]</span></p>
<p>with <span class="math inline">\(\text{E}(x) = \alpha / \beta\)</span>, and <span class="math inline">\(\text{Var}(x)=\alpha / \beta^2\)</span>. <span class="math inline">\(\Gamma(.)\)</span> is the continuous Gamma function, which lends its name to the distribution. (When <span class="math inline">\(\alpha\)</span> is a positive integer, <span class="math inline">\(\Gamma(\alpha)=(\alpha - 1)!\)</span>) In <code>simstudy</code>, I decided to parameterize the pdf using <span class="math inline">\(\mu\)</span> to represent the mean and a dispersion parameter <span class="math inline">\(\nu\)</span>, where <span class="math inline">\(\text{Var}(x) = \nu\mu^2\)</span>. In this parameterization, shape <span class="math inline">\(\alpha = \frac{1}{\nu}\)</span> and rate <span class="math inline">\(\beta = \frac{1}{\nu\mu}\)</span>. (There is a <code>simstudy</code> function <code>gammaGetShapeRate</code> that maps <span class="math inline">\(\mu\)</span> and <span class="math inline">\(\nu\)</span> to <span class="math inline">\(\alpha\)</span> and <span class="math inline">\(\beta\)</span>.) With this parameterization, it is clear that the variance of a Gamma distributed random variable is a function of the square of the mean.</p>
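<p>The mapping between the two parameterizations is easy to verify numerically. A Python sketch of the same conversion that <code>gammaGetShapeRate</code> performs, checked with a Monte Carlo draw (the standard library’s <code>gammavariate</code> takes shape and <em>scale</em>, i.e. <span class="math inline">\(1/\beta\)</span>):</p>

```python
import random

# map (mu, nu) to the usual (shape, rate) parameterization
mu, nu = 20, 1.2
shape = 1 / nu          # alpha
rate = 1 / (nu * mu)    # beta

# theoretical moments: mean = alpha/beta = mu, variance = alpha/beta^2 = nu*mu^2
mean_theory = shape / rate
var_theory = shape / rate ** 2

# Monte Carlo check; gammavariate takes shape and scale (= 1/rate)
random.seed(1)
draws = [random.gammavariate(shape, 1 / rate) for _ in range(100_000)]
m = sum(draws) / len(draws)
v = sum((x - m) ** 2 for x in draws) / (len(draws) - 1)
```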
<p>Simulating data gives a sense of the shape of the distribution and also makes clear that the variance depends on the mean (which is not the case for the normal distribution):</p>
<pre class="r"><code>mu <- 20
nu <- 1.2
# theoretical mean and variance
c(mean = mu, variance = mu^2 * nu) </code></pre>
<pre><code>## mean variance
## 20 480</code></pre>
<pre class="r"><code>library(simstudy)
(ab <- gammaGetShapeRate(mu, nu))</code></pre>
<pre><code>## $shape
## [1] 0.8333333
##
## $rate
## [1] 0.04166667</code></pre>
<pre class="r"><code># simulate data using R function
set.seed(1)
g.rfunc <- rgamma(100000, ab$shape, ab$rate)
round(c(mean(g.rfunc), var(g.rfunc)), 2)</code></pre>
<pre><code>## [1] 19.97 479.52</code></pre>
<pre class="r"><code># simulate data using simstudy function - no difference
set.seed(1)
defg <- defData(varname = "g.sim", formula = mu, variance = nu, 
                dist = "gamma")
dt.g1 <- genData(100000, defg)
dt.g1[, .(round(mean(g.sim),2), round(var(g.sim),2))]</code></pre>
<pre><code>## V1 V2
## 1: 19.97 479.52</code></pre>
<pre class="r"><code># doubling dispersion factor
defg <- updateDef(defg, changevar = "g.sim", newvariance = nu * 2)
dt.g0 <- genData(100000, defg)
dt.g0[, .(round(mean(g.sim),2), round(var(g.sim),2))]</code></pre>
<pre><code>## V1 V2
## 1: 20.09 983.01</code></pre>
<pre class="r"><code># halving dispersion factor
defg <- updateDef(defg, changevar = "g.sim", newvariance = nu * 0.5)
dt.g2 <- genData(100000, defg)
dt.g2[, .(round(mean(g.sim),2), round(var(g.sim),2))]</code></pre>
<pre><code>## V1 V2
## 1: 19.98 240.16</code></pre>
<p>Generating data sets with the same mean but decreasing levels of dispersion makes it appear as if the distribution is “moving” to the right: the peak shifts to the right and variance decreases …</p>
<pre class="r"><code>library(ggplot2)
dt.g0[, nugrp := 0]
dt.g1[, nugrp := 1]
dt.g2[, nugrp := 2]
dt.g <- rbind(dt.g0, dt.g1, dt.g2)
ggplot(data = dt.g, aes(x = g.sim, group = nugrp)) +
  geom_density(aes(fill = factor(nugrp)), alpha = .5) +
  scale_fill_manual(values = c("#226ab2", "#b22222", "#22b26a"),
                    labels = c(nu*2, nu, nu*0.5),
                    name = bquote(nu)) +
  scale_y_continuous(limits = c(0, 0.10)) +
  scale_x_continuous(limits = c(0, 100)) +
  theme(panel.grid.minor = element_blank()) +
  ggtitle(paste0("Varying dispersion with mean = ", mu))</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-11-27-icc-for-clustered-data-that-happen-to-have-a-gamma-distribution_files/figure-html/unnamed-chunk-2-1.png" width="672" /></p>
<p>Conversely, generating data with constant dispersion but increasing the mean does not shift the location but makes the distribution appear less “peaked”. In this case, variance increases with higher means (we can see that longer tails are associated with higher means) …</p>
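<p>The code chunk behind this figure isn’t shown in the post; as a quick numerical check (a sketch in base R, not the original plotting code), holding <span class="math inline">\(\nu\)</span> fixed while increasing <span class="math inline">\(\mu\)</span> confirms that the variance grows with the square of the mean:</p>

```r
# Sketch: fixed dispersion nu, increasing mean mu. With shape = 1/nu and
# rate = 1/(nu * mu), the variance nu * mu^2 grows with the square of the mean.
set.seed(10)
nu <- 1.2
sapply(c(10, 20, 40), function(mu) {
  x <- rgamma(1e5, shape = 1 / nu, rate = 1 / (nu * mu))
  c(mu = mu, emp.var = var(x), theo.var = nu * mu^2)
})
```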
<p><img src="https://www.rdatagen.net/post/2017-11-27-icc-for-clustered-data-that-happen-to-have-a-gamma-distribution_files/figure-html/unnamed-chunk-3-1.png" width="672" /></p>
</div>
<div id="icc-for-clustered-data-where-within-group-observations-have-a-gaussian-normal-distribution" class="section level3">
<h3>ICC for clustered data where within-group observations have a Gaussian (normal) distribution</h3>
<p>In a 2-level world, with multiple groups each containing individuals, a normally distributed continuous outcome can be described by this simple model:
<span class="math display">\[Y_{ij} = \mu + a_j + e_{ij},\]</span>
where <span class="math inline">\(Y_{ij}\)</span> is the outcome for individual <span class="math inline">\(i\)</span> who is a member of group <span class="math inline">\(j\)</span>. <span class="math inline">\(\mu\)</span> is the average across all groups and individuals. <span class="math inline">\(a_j\)</span> is the group level effect and is typically assumed to be normally distributed as <span class="math inline">\(N(0, \sigma^2_a)\)</span>, and <span class="math inline">\(e_{ij}\)</span> is the individual level effect that is <span class="math inline">\(N(0, \sigma^2_e)\)</span>. The variance of <span class="math inline">\(Y_{ij}\)</span> is <span class="math inline">\(\text{Var}(a_j + e_{ij}) = \text{Var}(a_j) + \text{Var}(e_{ij}) = \sigma^2_a + \sigma^2_e\)</span>. The ICC is the proportion of total variation of <span class="math inline">\(Y\)</span> explained by the group variation:
<span class="math display">\[ICC = \frac{\sigma^2_a}{\sigma^2_a+\sigma^2_e}\]</span>
If individual level variation is relatively low or variation across groups is relatively high, then the ICC will be higher. Conversely, higher individual variation or lower variation between groups implies a smaller ICC.</p>
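<p>Plugging the variance components used in the simulation below into this formula gives the target ICC directly:</p>

```r
# ICC from the variance components used in the simulation that follows:
# sigma^2_a = 2.8 (between groups), sigma^2_e = 25.2 (within groups)
sigma2a <- 2.8
sigma2e <- 25.2
sigma2a / (sigma2a + sigma2e)  # 0.1
```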
<p>Here is a simulation of data for 50 groups, where each group has 250 individuals. The ICC is 0.10:</p>
<pre class="r"><code># define the group level data
defgrp <- defData(varname = "a", formula = 0,
                  variance = 2.8, dist = "normal", id = "cid")
defgrp <- defData(defgrp, varname = "n", formula = 250,
                  dist = "nonrandom")
# define the individual level data
defind <- defDataAdd(varname = "ynorm", formula = "30 + a",
                     variance = 25.2, dist = "normal")
# generate the group and individual level data
set.seed(3017)
dt <- genData(50, defgrp)
dc <- genCluster(dt, "cid", "n", "id")
dc <- addColumns(defind, dc)
dc</code></pre>
<pre><code>## cid a n id ynorm
## 1: 1 -2.133488 250 1 30.78689
## 2: 1 -2.133488 250 2 25.48245
## 3: 1 -2.133488 250 3 22.48975
## 4: 1 -2.133488 250 4 30.61370
## 5: 1 -2.133488 250 5 22.51571
## ---
## 12496: 50 -1.294690 250 12496 25.26879
## 12497: 50 -1.294690 250 12497 27.12190
## 12498: 50 -1.294690 250 12498 34.82744
## 12499: 50 -1.294690 250 12499 27.93607
## 12500: 50 -1.294690 250 12500 32.33438</code></pre>
<pre class="r"><code># mean Y by group
davg <- dc[, .(avgy = mean(ynorm)), keyby = cid]
# variance of group means
(between.var <- davg[, var(avgy)])</code></pre>
<pre><code>## [1] 2.70381</code></pre>
<pre class="r"><code># overall (marginal) mean and var of Y
gavg <- dc[, mean(ynorm)]
gvar <- dc[, var(ynorm)]
# individual variance within each group
dvar <- dc[, .(vary = var(ynorm)), keyby = cid]
(within.var <- dvar[, mean(vary)])</code></pre>
<pre><code>## [1] 25.08481</code></pre>
<pre class="r"><code># estimate of ICC
(ICCest <- between.var/(between.var + within.var))</code></pre>
<pre><code>## [1] 0.09729918</code></pre>
<pre class="r"><code>ggplot(data = dc, aes(y = ynorm, x = factor(cid))) +
  geom_jitter(size = .5, color = "grey50", width = 0.2) +
  geom_point(data = davg, aes(y = avgy, x = factor(cid)),
             shape = 21, fill = "firebrick3", size = 3) +
  theme(panel.grid.major.y = element_blank(),
        panel.grid.minor.y = element_blank(),
        axis.ticks.x = element_blank(),
        axis.text.x = element_blank(),
        axis.text.y = element_text(size = 12),
        axis.title = element_text(size = 14)
  ) +
  xlab("Group") +
  scale_y_continuous(limits = c(0, 60), name = "Measure") +
  ggtitle(bquote("ICC:" ~ .(round(ICCest, 2)) ~
                   (sigma[a]^2 == .(round(between.var, 1)) ~ "," ~
                      sigma[e]^2 == .(round(within.var, 1)))
  )) </code></pre>
<p><img src="https://www.rdatagen.net/post/2017-11-27-icc-for-clustered-data-that-happen-to-have-a-gamma-distribution_files/figure-html/unnamed-chunk-4-1.png" width="960" /></p>
<p>Here is a plot of data generated using the same overall variance of 28, but based on a much higher ICC of 0.80. Almost all of the variation in the data is driven by the clusters rather than the individuals. This has implications for a study, because (in contrast to the first data set generated above) the individual-level data is not providing as much information or insight into the variation of <span class="math inline">\(Y\)</span>. The most useful information (from this extreme example) can be derived from the difference between the groups (so we really have more like 50 data points rather than 12,500).</p>
<p><img src="https://www.rdatagen.net/post/2017-11-27-icc-for-clustered-data-that-happen-to-have-a-gamma-distribution_files/figure-html/unnamed-chunk-6-1.png" width="960" /></p>
<p>Of course, if we look at the individual-level data for each of the two data sets while ignoring the group membership, the two data sets are indistinguishable. That is, the marginal (or population level) distributions are both normally distributed with mean 30 and variance 28:</p>
<p><img src="https://www.rdatagen.net/post/2017-11-27-icc-for-clustered-data-that-happen-to-have-a-gamma-distribution_files/figure-html/unnamed-chunk-7-1.png" width="672" /></p>
</div>
<div id="icc-for-clustered-data-with-gamma-distribution" class="section level3">
<h3>ICC for clustered data with Gamma distribution</h3>
<p>Now, back to the original question … how do we think about the ICC with clustered data that is Gamma distributed? The model (and data generating process) for this type of data can be described as:</p>
<p><span class="math display">\[Y_{ij} \sim \text{gamma}(\mu_{j}, \nu),\]</span>
where <span class="math inline">\(\text{E}(Y_{ij}) = \mu_j\)</span> and <span class="math inline">\(\text{Var}(Y_{ij}) = \nu\mu_j^2\)</span>. In addition, the mean of each group is often modeled as:</p>
<p><span class="math display">\[\text{log}(\mu_j) = \beta + a_j,\]</span>
where <span class="math inline">\(\beta\)</span> is log of the mean for the group whose group effect is 0, and <span class="math inline">\(a_j \sim N(0, \sigma^2_a)\)</span>. So, the group means are normally distributed on the log scale (or are lognormal) with variance <span class="math inline">\(\sigma^2_a\)</span>. (Although the individual observations within each cluster are Gamma-distributed, the means of the groups are not themselves Gamma-distributed.)</p>
<p>But what is the within-group (individual) variation, which <em>is</em> Gamma-distributed? It is not so clear, as the variance within each group depends on both the group mean <span class="math inline">\(\mu_j\)</span> and the dispersion factor <span class="math inline">\(\nu\)</span>. A <a href="https://royalsocietypublishing.org/doi/pdf/10.1098/rsif.2017.0213">paper</a> by Nakagawa <em>et al</em> shows that the within-group variance on the log scale, <span class="math inline">\(\sigma^2_e\)</span>, can be estimated using the trigamma function (the second derivative of the log of the Gamma function) evaluated at the reciprocal of the dispersion factor. So, the ICC of clustered Gamma observations can be defined on the log scale:</p>
<p><span class="math display">\[\text{ICC}_\text{gamma-log} = \frac{\sigma^2_a}{\sigma^2_a + \psi_1 \left( \frac{1}{\nu}\right)}\]</span>
<span class="math inline">\(\psi_1\)</span> is the <em>trigamma</em> function. I’m quoting from the paper here: “the variance of a gamma-distributed variable on the log scale is equal to <span class="math inline">\(\psi_1 (\frac{1}{\nu})\)</span>, where <span class="math inline">\(\frac{1}{\nu}\)</span> is the shape parameter of the gamma distribution and hence <span class="math inline">\(\sigma^2_e\)</span> is <span class="math inline">\(\psi_1 (\frac{1}{\nu})\)</span>.” (The formula I have written here is slightly different, as I define the dispersion factor as the reciprocal of the dispersion factor used in the paper.)</p>
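<p>As a quick empirical check of the trigamma result (a sketch, not part of the original post), the variance of the log of Gamma-distributed draws lines up with <span class="math inline">\(\psi_1(1/\nu)\)</span>:</p>

```r
# Sketch: for X ~ gamma with shape = 1/nu, Var(log(X)) = trigamma(1/nu).
# The rate parameter only shifts log(X), so it does not affect this variance.
set.seed(1)
nu <- 2.5
x <- rgamma(1e6, shape = 1 / nu, rate = 1 / nu)  # mean = 1
c(empirical = var(log(x)), theoretical = trigamma(1 / nu))
```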
<pre class="r"><code>sigma2a <- 0.8
nuval <- 2.5
(sigma2e <- trigamma(1/nuval))</code></pre>
<pre><code>## [1] 7.275357</code></pre>
<pre class="r"><code># Theoretical ICC on log scale
(ICC <- sigma2a/(sigma2a + sigma2e))</code></pre>
<pre><code>## [1] 0.09906683</code></pre>
<pre class="r"><code># generate clustered gamma data
def <- defData(varname = "a", formula = 0, variance = sigma2a,
               dist = "normal")
def <- defData(def, varname = "n", formula = 250, dist = "nonrandom")
defc <- defDataAdd(varname = "g", formula = "2 + a",
                   variance = nuval, dist = "gamma", link = "log")
dt <- genData(1000, def)
dc <- genCluster(dt, "id", "n", "id1")
dc <- addColumns(defc, dc)
dc</code></pre>
<pre><code>## id a n id1 g
## 1: 1 0.6629489 250 1 4.115116e+00
## 2: 1 0.6629489 250 2 6.464886e+01
## 3: 1 0.6629489 250 3 3.365173e+00
## 4: 1 0.6629489 250 4 3.624267e+01
## 5: 1 0.6629489 250 5 6.021529e-08
## ---
## 249996: 1000 0.3535922 250 249996 1.835999e+00
## 249997: 1000 0.3535922 250 249997 2.923195e+01
## 249998: 1000 0.3535922 250 249998 1.708895e+00
## 249999: 1000 0.3535922 250 249999 1.298296e+00
## 250000: 1000 0.3535922 250 250000 1.212823e+01</code></pre>
<p>Here is an estimation of the ICC on the log scale using the raw data …</p>
<pre class="r"><code>dc[, lg := log(g)]
davg <- dc[, .(avgg = mean(lg)), keyby = id]
(between <- davg[, var(avgg)])</code></pre>
<pre><code>## [1] 0.8137816</code></pre>
<pre class="r"><code>dvar <- dc[, .(varg = var(lg)), keyby = id]
(within <- dvar[, mean(varg)])</code></pre>
<pre><code>## [1] 7.20502</code></pre>
<pre class="r"><code>(ICCest <- between/(between + within))</code></pre>
<pre><code>## [1] 0.1014842</code></pre>
<p>Here is an estimation of the ICC (on the log scale) based on the estimated variance of the random effects using a generalized mixed effects model. The between-group variance is a ratio of the intercept variance and the residual variance. An estimate of <span class="math inline">\(\nu\)</span> is just the residual variance …</p>
<pre class="r"><code>library(lme4)
glmerfit <- glmer(g ~ 1 + (1|id),
                  family = Gamma(link = "log"), data = dc)
summary(glmerfit)</code></pre>
<pre><code>## Generalized linear mixed model fit by maximum likelihood (Laplace
## Approximation) [glmerMod]
## Family: Gamma ( log )
## Formula: g ~ 1 + (1 | id)
## Data: dc
##
## AIC BIC logLik deviance df.resid
## 1328004.4 1328035.7 -663999.2 1327998.4 249997
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -0.6394 -0.6009 -0.4061 0.1755 14.0254
##
## Random effects:
## Groups Name Variance Std.Dev.
## id (Intercept) 1.909 1.382
## Residual 2.446 1.564
## Number of obs: 250000, groups: id, 1000
##
## Fixed effects:
## Estimate Std. Error t value Pr(>|z|)
## (Intercept) 2.03127 0.02803 72.47 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1</code></pre>
<pre class="r"><code>estnu <- as.data.table(VarCorr(glmerfit))[2,4]
estsig <- as.data.table(VarCorr(glmerfit))[1,4] / estnu
estsig/(estsig + trigamma(1/estnu))</code></pre>
<pre><code>## vcov
## 1: 0.1003386</code></pre>
<p>Finally, here are some plots of the generated observations and the group means on the log scale. The plots in each row have the same ICC but different underlying mean and dispersion parameters. I find these plots interesting because, looking across the columns or up and down the two rows, they provide some insight into the interplay of group means and dispersion on the ICC …</p>
<p><img src="https://www.rdatagen.net/post/2017-11-27-icc-for-clustered-data-that-happen-to-have-a-gamma-distribution_files/figure-html/unnamed-chunk-12-1.png" width="960" /></p>
<p><small><font color="darkkhaki">
Reference:
</font></small></p>
<p><small><font color="darkkhaki">Nakagawa, Shinichi, Paul C. D. Johnson, and Holger Schielzeth. “The coefficient of determination R<sup>2</sup> and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded.” Journal of the Royal Society Interface 14.134 (2017): 20170213.</font></small></p>
</div>
Visualizing how confounding biases estimates of population-wide (or marginal) average causal effects
https://www.rdatagen.net/post/potential-outcomes-confounding/
Thu, 16 Nov 2017 00:00:00 +0000
<p>When we are trying to assess the effect of an exposure or intervention on an outcome, confounding is an ever-present threat to our ability to draw the proper conclusions. My goal (starting here and continuing in upcoming posts) is to think a bit about how to characterize confounding in a way that makes it possible to literally see why improperly estimating intervention effects might lead to bias.</p>
<div id="confounding-potential-outcomes-and-causal-effects" class="section level3">
<h3>Confounding, potential outcomes, and causal effects</h3>
<p>Typically, we think of a confounder as a factor that influences <em>both</em> exposure <em>and</em> outcome. If we ignore the confounding factor in estimating the effect of an exposure, we can easily over- or underestimate the size of the effect due to the exposure. If sicker patients are more likely than healthier patients to take a particular drug, the relatively poor outcomes of those who took the drug may be due to the initial health status rather than the drug.</p>
<p>A slightly different view of confounding is tied to the more conceptual framework of potential outcomes, which I <a href="https://www.rdatagen.net/post/be-careful/">wrote</a> a bit about earlier. A potential outcome is the outcome we <em>would</em> observe <em>if</em> an individual were subjected to a particular exposure. We may or may not observe the potential outcome - this depends on the actual exposure. (To simplify things here, I will assume we are interested only in two different exposures.) <span class="math inline">\(Y_0\)</span> and <span class="math inline">\(Y_1\)</span> represent the potential outcomes for an individual without and with exposure, respectively. We observe <span class="math inline">\(Y_0\)</span> if the individual is not exposed, and <span class="math inline">\(Y_1\)</span> if she is.</p>
<p>The causal effect of the exposure for the individual <span class="math inline">\(i\)</span> can be defined as <span class="math inline">\(Y_{1i} - Y_{0i}\)</span>. If we can observe each individual in both states (with and without the exposure) long enough to measure the outcome <span class="math inline">\(Y\)</span>, we are observing both potential outcomes and can measure the causal effect for each individual. Averaging across all individuals in the sample provides an estimate of the population average causal effect. (Think of a crossover or N-of-1 study.)</p>
<p>Unfortunately, in the real world, it is rarely feasible to expose an individual to multiple conditions. Instead, we use one group as a proxy for the other. For example, the control group represents what would have happened to the exposed group had the exposed group not been exposed. This approach only makes sense if the control group is identical in every way to the exposed group (except for the exposure, of course).</p>
<p>Our goal is to compare the distribution of outcomes for the control group with the exposed group. We often simplify this comparison by looking at the means of each distribution. The average causal effect (across all individuals) can be written as <span class="math inline">\(E(Y_1 - Y_0)\)</span>, where <span class="math inline">\(E()\)</span> is the expectation or average. In reality, we cannot directly measure this since only one potential outcome is observed for each individual.</p>
<p>Using the following logic, we might be able to convince ourselves that we can use <em>observed</em> measurements to estimate unobservable average causal effects. First, we can say <span class="math inline">\(E(Y_1 - Y_0) = E(Y_1) - E(Y_0)\)</span>, because expectation is linear. Next, it seems fairly reasonable to say that <span class="math inline">\(E(Y_1 | A = 1) = E(Y | A = 1)\)</span>, where <span class="math inline">\(A=1\)</span> for exposure, <span class="math inline">\(A=0\)</span> for control. In words, this states that the average <strong>potential outcome of exposure</strong> for the <strong><em>exposed group</em></strong> is the same as what we actually <strong>observe</strong> for the <strong><em>exposed group</em></strong> (this is the consistency assumption in causal inference theory). Along the same lines, <span class="math inline">\(E(Y_0 | A = 0) = E(Y | A = 0)\)</span>. Finally, <em>if</em> we can say that <span class="math inline">\(E(Y_1) = E(Y_1 | A = 1)\)</span> - the potential outcome of exposure for <strong><em>everyone</em></strong> is equal to the potential outcome of exposure for those <strong><em>exposed</em></strong> - then we can say that <span class="math inline">\(E(Y_1) = E(Y | A = 1)\)</span> (the potential outcome with exposure for <strong><em>everyone</em></strong> is the same as the observed outcome for <strong><em>the exposed</em></strong>). Similarly, we can make the same argument to conclude that <span class="math inline">\(E(Y_0) = E(Y | A = 0)\)</span>. At the end of this train of logic, we conclude that we can estimate <span class="math inline">\(E(Y_1 - Y_0)\)</span> using observed data only: <span class="math inline">\(E(Y | A = 1) - E(Y | A = 0)\)</span>.</p>
<p>This nice logic fails if <span class="math inline">\(E(Y_1) \ne E(Y | A = 1)\)</span> and/or <span class="math inline">\(E(Y_0) \ne E(Y | A = 0)\)</span>. That is, this nice logic fails when there is confounding.</p>
<p>This is all a very long-winded way of saying that confounding arises when the distributions of potential outcomes <strong><em>for the population</em></strong> are different from those distributions for <strong><em>the subgroups</em></strong> we are using for analysis. For example, if the potential outcome under exposure for the population as a whole (<span class="math inline">\(Y_1\)</span>) differs from the observed outcome for the subgroup that was exposed (<span class="math inline">\(Y|A=1\)</span>), or the potential outcome without exposure for the entire population (<span class="math inline">\(Y_0\)</span>) differs from the observed outcome for the subgroup that was not exposed (<span class="math inline">\(Y|A=0\)</span>), any estimates of population level causal effects using observed data will be biased.</p>
<p>However, if we can find a factor <span class="math inline">\(L\)</span> (or factors) where</p>
<p><span class="math display">\[ \begin{aligned}
P(Y_1 | L=l) &= P(Y | A = 1 \text{ and } L=l) \\
P(Y_0 | L=l) &= P(Y | A = 0 \text{ and } L=l)
\end{aligned}
\]</span> both hold for all levels or values of <span class="math inline">\(L\)</span>, we can remove confounding (and get unbiased estimates of the causal effect) by “controlling” for <span class="math inline">\(L\)</span>. In some cases, the causal effect we measure will be conditional on <span class="math inline">\(L\)</span>, sometimes it will be a population-wide average (or marginal) causal effect, and sometimes it will be both.</p>
</div>
<div id="what-confounding-looks-like" class="section level3">
<h3>What confounding looks like …</h3>
<p>The easiest way to illustrate the population/subgroup contrast is to generate data from a process that includes confounding. In this first example, the outcome is continuous, and is a function of both the exposure (<span class="math inline">\(A\)</span>) and a covariate (<span class="math inline">\(L\)</span>). For each individual, we can generate both potential outcomes <span class="math inline">\(Y_0\)</span> and <span class="math inline">\(Y_1\)</span>. (Note that both potential outcomes share the same individual level noise term <span class="math inline">\(e\)</span> - this is not a necessary assumption.) This way, we can “know” the true population, or marginal causal effect of exposure. The observed outcome <span class="math inline">\(Y\)</span> is determined by the exposure status. For the purposes of plotting a smooth density curve, we generate a very large sample - 2 million.</p>
<pre class="r"><code>library(simstudy)
defC <- defData(varname = "e", formula = 0, variance = 2,
                dist = "normal")
defC <- defData(defC, varname = "L", formula = 0.4,
                dist = "binary")
defC <- defData(defC, varname = "Y0", formula = "1 + 4*L + e",
                dist = "nonrandom")
defC <- defData(defC, varname = "Y1", formula = "5 + 4*L + e",
                dist = "nonrandom")
defC <- defData(defC, varname = "A", formula = "0.3 + 0.3 * L",
                dist = "binary")
defC <- defData(defC, varname = "Y", formula = "1 + 4*A + 4*L + e",
                dist = "nonrandom")
set.seed(2017)
dtC <- genData(n = 2000000, defC)
dtC[1:5]</code></pre>
<pre><code>## id e L Y0 Y1 A Y
## 1: 1 2.02826718 1 7.0282672 11.028267 1 11.0282672
## 2: 2 -0.10930734 0 0.8906927 4.890693 0 0.8906927
## 3: 3 1.04529790 0 2.0452979 6.045298 0 2.0452979
## 4: 4 -2.48704266 1 2.5129573 6.512957 1 6.5129573
## 5: 5 -0.09874778 0 0.9012522 4.901252 0 0.9012522</code></pre>
<p>Feel free to skip over this code - I am just including it in case anyone finds it useful to see how I generated the following series of annotated density curves:</p>
<pre class="r"><code>library(ggplot2)
getDensity <- function(vector, weights = NULL) {
  if (!is.vector(vector)) stop("Not a vector!")
  if (is.null(weights)) {
    avg <- mean(vector)
  } else {
    avg <- weighted.mean(vector, weights)
  }
  close <- min(which(avg < density(vector)$x))
  x <- density(vector)$x[close]
  if (is.null(weights)) {
    y <- density(vector)$y[close]
  } else {
    y <- density(vector, weights = weights)$y[close]
  }
  return(data.table(x = x, y = y))
}
plotDens <- function(dtx, var, xPrefix, title, textL = NULL, weighted = FALSE) {
  dt <- copy(dtx)
  if (weighted) {
    dt[, nIPW := IPW/sum(IPW)]
    dMarginal <- getDensity(dt[, get(var)], weights = dt$nIPW)
  } else {
    dMarginal <- getDensity(dt[, get(var)])
  }
  d0 <- getDensity(dt[L == 0, get(var)])
  d1 <- getDensity(dt[L == 1, get(var)])
  dline <- rbind(d0, dMarginal, d1)
  brk <- round(dline$x, 1)
  p <- ggplot(aes(x = get(var)), data = dt) +
    geom_density(data = dt[L == 0], fill = "#ce682f", alpha = .4) +
    geom_density(data = dt[L == 1], fill = "#96ce2f", alpha = .4)
  if (weighted) {
    p <- p + geom_density(aes(weight = nIPW),
                          fill = "#2f46ce", alpha = .8)
  } else p <- p + geom_density(fill = "#2f46ce", alpha = .8)
  p <- p + geom_segment(data = dline, aes(x = x, xend = x,
                                          y = 0, yend = y),
                        size = .7, color = "white", lty = 3) +
    annotate(geom = "text", x = 12.5, y = .24,
             label = title, size = 5, fontface = 2) +
    scale_x_continuous(limits = c(-2, 15),
                       breaks = brk,
                       name = paste(xPrefix, var)) +
    theme(panel.grid = element_blank(),
          axis.text.x = element_text(size = 12),
          axis.title.x = element_text(size = 13)
    )
  if (!is.null(textL)) {
    p <- p +
      annotate(geom = "text", x = textL[1], y = textL[2],
               label = "L=0", size = 4, fontface = 2) +
      annotate(geom = "text", x = textL[3], y = textL[4],
               label = "L=1", size = 4, fontface = 2) +
      annotate(geom = "text", x = textL[5], y = textL[6],
               label = "Population distribution", size = 4, fontface = 2)
  }
  return(p)
}</code></pre>
<pre class="r"><code>library(gridExtra)
grid.arrange(plotDens(dtC, "Y0", "Potential outcome", "Full\npopulation",
                      c(1, .24, 5, .22, 2.6, .06)),
             plotDens(dtC[A==0], "Y", "Observed", "Unexposed\nonly"),
             plotDens(dtC, "Y1", "Potential outcome", "Full\npopulation"),
             plotDens(dtC[A==1], "Y", "Observed", "Exposed\nonly"),
             nrow = 2
)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-11-16-potential-outcomes-confounding_files/figure-html/unnamed-chunk-3-1.png" width="1152" /></p>
<p>Looking at the various plots, we can see a few interesting things. The density curves on the left represent the entire population. The conditional distributions of the potential outcomes at the population level are all normally distributed, with means that depend on the exposure and covariate <span class="math inline">\(L\)</span>. We can also see that the population-wide distribution of <span class="math inline">\(Y_0\)</span> and <span class="math inline">\(Y_1\)</span> (in blue) are non-symmetrically shaped, because they are a mixture of the conditional normal distributions, weighted by the proportion of each level of <span class="math inline">\(L\)</span>. Since the proportions for the top and bottom plots are in fact the population proportion, the population-level density curves for <span class="math inline">\(Y_0\)</span> and <span class="math inline">\(Y_1\)</span> are similarly shaped, with less mass on the higher end, because individuals are less likely to have an <span class="math inline">\(L\)</span> value of 1:</p>
<pre class="r"><code>dtC[, .(propLis1 = mean(L))]</code></pre>
<pre><code>## propLis1
## 1: 0.399822</code></pre>
<p>The shape of the marginal distribution of <span class="math inline">\(Y_1\)</span> is identical to <span class="math inline">\(Y_0\)</span> (in this case, because that is the way I generated the data), but shifted to the right by an amount equal to the causal effect. The conditional effect sizes are 4, as is the population or marginal effect size.</p>
<p>The subgroup plots on the right are a different story. In this case, the distributions of <span class="math inline">\(L\)</span> vary across the exposed and unexposed groups:</p>
<pre class="r"><code>dtC[, .(propLis1 = mean(L)), keyby = A]</code></pre>
<pre><code>## A propLis1
## 1: 0 0.2757937
## 2: 1 0.5711685</code></pre>
<p>So, even though the distributions of (observed) <span class="math inline">\(Y\)</span> conditional on <span class="math inline">\(L\)</span> are identical to their potential outcome counterparts in the whole population - for example, <span class="math inline">\(P(Y | A=0 \text{ and } L = 1) = P(Y_0 | L = 1)\)</span> - the marginal distributions of <span class="math inline">\(Y\)</span> are quite different for the exposed and unexposed. For example, <span class="math inline">\(P(Y | A = 0) \ne P(Y_0)\)</span>. This is directly due to the fact that the mixing weights (the proportions of <span class="math inline">\(L\)</span>) are different for each of the groups. In the unexposed group, about 28% have <span class="math inline">\(L=1\)</span>, but for the exposed group, about 57% do. Using the subgroup data only, the conditional effect sizes are still 4 (comparing mean outcomes <span class="math inline">\(Y\)</span> within each level of <span class="math inline">\(L\)</span>). However the difference in means between the marginal distributions of each subgroup is about 5.2 (calculated by 7.3 - 2.1). This is confounding.</p>
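<p>The same arithmetic can be checked with a small base R simulation (a sketch mirroring the data definitions above, not part of the original post): the crude difference in means is biased away from the true effect of 4, while the within-<span class="math inline">\(L\)</span> comparison is not:</p>

```r
# Sketch (base R analogue of the simstudy definitions above)
set.seed(2017)
n <- 1e6
L <- rbinom(n, 1, 0.4)              # covariate
A <- rbinom(n, 1, 0.3 + 0.3 * L)    # exposure depends on L
Y <- 1 + 4 * A + 4 * L + rnorm(n, 0, sqrt(2))
mean(Y[A == 1]) - mean(Y[A == 0])                     # crude: roughly 5.2
mean(Y[A == 1 & L == 0]) - mean(Y[A == 0 & L == 0])   # within L = 0: roughly 4
```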
</div>
<div id="no-confounding" class="section level3">
<h3>No confounding</h3>
<p>Just so we can see that when the covariate <span class="math inline">\(L\)</span> has nothing to do with the probability of exposure, the marginal distributions of the subgroups do in fact look like their population-level potential outcome marginal distributions:</p>
<pre class="r"><code>defC <- updateDef(defC, "A", newformula = 0.5) # change data generation
dtC <- genData(n = 2000000, defC)
dtC[, .(propLis1 = mean(L)), keyby = A] # subgroup proportions</code></pre>
<pre><code>## A propLis1
## 1: 0 0.4006499
## 2: 1 0.3987437</code></pre>
<pre class="r"><code>dtC[, .(propLis1 = mean(L))] # population/marginal props</code></pre>
<pre><code>## propLis1
## 1: 0.3996975</code></pre>
<pre class="r"><code>grid.arrange(plotDens(dtC, "Y0", "Potential outcome", "Population",
                      c(1, .24, 5, .22, 2.6, .06)),
             plotDens(dtC[A==0], "Y", "Observed", "Unexposed"),
             plotDens(dtC, "Y1", "Potential outcome", "Population"),
             plotDens(dtC[A==1], "Y", "Observed", "Exposed"),
             nrow = 2
)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-11-16-potential-outcomes-confounding_files/figure-html/unnamed-chunk-6-1.png" width="1152" /></p>
</div>
<div id="estimation-of-causal-effects-now-with-confounding" class="section level3">
<h3>Estimation of causal effects (now with confounding)</h3>
<p>Generating a smaller data set, we estimate the causal effects using simple calculations and linear regression:</p>
<pre class="r"><code>library(broom)
# change back to confounding
defC <- updateDef(defC, "A", newformula = ".3 + .3 * L")
dtC <- genData(2500, defC)</code></pre>
<p>The true average (marginal) causal effect from the average difference in potential outcomes for the entire population:</p>
<pre class="r"><code>dtC[, mean(Y1 - Y0)]</code></pre>
<pre><code>## [1] 4</code></pre>
<p>And the true average causal effects conditional on the covariate <span class="math inline">\(L\)</span>:</p>
<pre class="r"><code>dtC[, mean(Y1 - Y0), keyby = L]</code></pre>
<pre><code>## L V1
## 1: 0 4
## 2: 1 4</code></pre>
<p>If we try to estimate the marginal causal effect by using a regression model that does not include <span class="math inline">\(L\)</span>, we run into problems. The estimate of 5.2 we see below is the same biased estimate we saw in the plot above. This model is reporting the differences of the means (across both levels of <span class="math inline">\(L\)</span>) for the two subgroups, which we know (because we saw) are not the same as the potential outcome distributions in the population due to different proportions of <span class="math inline">\(L\)</span> in each subgroup:</p>
<pre class="r"><code>tidy(lm(Y ~ A, data = dtC))</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 2.027132 0.06012997 33.71251 1.116211e-205
## 2 A 5.241004 0.09386145 55.83766 0.000000e+00</code></pre>
<p>If we estimate a model that conditions on <span class="math inline">\(L\)</span>, the estimates are on target because in the context of normal linear regression without interaction terms, conditional effects are the same as marginal effects once confounding has been removed (think of the comparisons being made within the orange groups and green groups in the first set of plots above, not within the purple groups):</p>
<pre class="r"><code>tidy(lm(Y ~ A + L , data = dtC))</code></pre>
<pre><code>## term estimate std.error statistic p.value
## 1 (Intercept) 0.9178849 0.03936553 23.31697 5.809202e-109
## 2 A 4.0968358 0.05835709 70.20288 0.000000e+00
## 3 L 3.9589109 0.05862583 67.52844 0.000000e+00</code></pre>
</div>
<div id="inverse-probability-weighting-ipw" class="section level3">
<h3>Inverse probability weighting (IPW)</h3>
<p>What follows briefly here is just a sneak preview of IPW (without any real explanation), which is one way to recover the marginal mean using observed data with confounding. For now, I am ignoring the question of why you might be interested in knowing the marginal effect when the conditional effect estimate provides the same information. Suffice it to say that the conditional effect is <em>not</em> always the same as the marginal effect (think of data generating processes that include interactions or non-linear relationships), and sometimes the marginal effect estimate may be the best that we can do, or at least that we can do easily.</p>
<p>If we weight each individual observation by the inverse probability of exposure, we can remove confounding and estimate the <em>marginal</em> effect of exposure on the outcome. Here is a quick simulation example.</p>
<p>After generating the dataset (the same large one we started out with so you can compare) we estimate the probability of exposure <span class="math inline">\(P(A=1 | L)\)</span>, assuming that we know the correct exposure model. This is definitely a questionable assumption, but in this case, we actually do. Once the model has been fit, we assign the predicted probability to each individual based on her value of <span class="math inline">\(L\)</span>.</p>
<pre class="r"><code>set.seed(2017)
dtC <- genData(2000000, defC)
exposureModel <- glm(A ~ L, data = dtC, family = "binomial")
tidy(exposureModel)</code></pre>
<pre><code>##          term  estimate   std.error statistic p.value
## 1 (Intercept) -0.847190 0.001991708 -425.3584       0
## 2           L  1.252043 0.003029343  413.3053       0</code></pre>
<pre class="r"><code>dtC[, pA := predict(exposureModel, type = "response")]</code></pre>
<p>The IPW is <em>not</em> based exactly on <span class="math inline">\(P(A=1 | L)\)</span> (which is commonly used in propensity score analysis), but rather on the probability of the actual exposure at each level of <span class="math inline">\(L\)</span>: <span class="math inline">\(P(A=a | L)\)</span>, where <span class="math inline">\(a\in\{0,1\}\)</span>:</p>
<pre class="r"><code># Define two new columns
defC2 <- defDataAdd(varname = "pA_actual",
                    formula = "A * pA + (1-A) * (1-pA)",
                    dist = "nonrandom")

defC2 <- defDataAdd(defC2, varname = "IPW",
                    formula = "1/pA_actual",
                    dist = "nonrandom")

# Add weights
dtC <- addColumns(defC2, dtC)

round(dtC[1:5], 2)</code></pre>
<pre><code>## id e L Y0 Y1 A Y pA pA_actual IPW
## 1: 1 2.03 1 7.03 11.03 1 11.03 0.6 0.6 1.67
## 2: 2 -0.11 0 0.89 4.89 0 0.89 0.3 0.7 1.43
## 3: 3 1.05 0 2.05 6.05 0 2.05 0.3 0.7 1.43
## 4: 4 -2.49 1 2.51 6.51 1 6.51 0.6 0.6 1.67
## 5: 5 -0.10 0 0.90 4.90 0 0.90 0.3 0.7 1.43</code></pre>
<p>To estimate the marginal effect, we use function <code>lm</code> again, but with weights specified by the IPW. The true value of the marginal effect of exposure (based on the population-wide potential outcomes) was 4.0. I know I am repeating myself here, but first I am providing the biased estimate that we get when we ignore covariate <span class="math inline">\(L\)</span>, to convince you that the relationship between exposure and outcome is indeed confounded:</p>
<pre class="r"><code>tidy(lm(Y ~ A , data = dtC)) </code></pre>
<pre><code>##          term estimate   std.error statistic p.value
## 1 (Intercept) 2.101021 0.002176711  965.2275       0
## 2           A 5.184133 0.003359132 1543.2956       0</code></pre>
<p>And now, with the simple addition of the weights but still <em>not</em> including <span class="math inline">\(L\)</span> in the model, our weighted estimate of the marginal effect is spot on (but with such a large sample size, this is not so surprising):</p>
<pre class="r"><code>tidy(lm(Y ~ A , data = dtC, weights = IPW)) </code></pre>
<pre><code>##          term estimate   std.error statistic p.value
## 1 (Intercept) 2.596769 0.002416072  1074.789       0
## 2           A 4.003122 0.003416842  1171.585       0</code></pre>
<p>And finally, here is a plot of the IPW-adjusted density. You might think I am just showing you the plots for the unconfounded data again, but you can see from the code (and I haven’t hidden anything) that I am still using the data set with confounding. In particular, you can see that I am calling the routine <code>plotDens</code> with weights.</p>
<pre class="r"><code>grid.arrange(plotDens(dtC, "Y0", "Potential outcome", "Population",
                      c(1, .24, 5, .22, 2.6, .06)),
             plotDens(dtC[A==0], "Y", "Observed", "Unexposed",
                      weighted = TRUE),
             plotDens(dtC, "Y1", "Potential outcome", "Population"),
             plotDens(dtC[A==1], "Y", "Observed", "Exposed",
                      weighted = TRUE),
             nrow = 2)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-11-16-potential-outcomes-confounding_files/figure-html/unnamed-chunk-16-1.png" width="1152" /></p>
<p>As I mentioned, I hope to write more on <em>IPW</em>, and <em>marginal structural models</em>, which make good use of this methodology to estimate effects that can be challenging to get a handle on.</p>
</div>
A simstudy update provides an excuse to generate and display Likert-type data
https://www.rdatagen.net/post/generating-and-displaying-likert-type-data/
Tue, 07 Nov 2017 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/generating-and-displaying-likert-type-data/<p>I just updated <code>simstudy</code> to version 0.1.7. It is available on CRAN.</p>
<p>To mark the occasion, I wanted to highlight a new function, <code>genOrdCat</code>, which puts into practice some code that I presented a little while back as part of a discussion of <a href="https://www.rdatagen.net/post/a-hidden-process-part-2-of-2/">ordinal logistic regression</a>. The new function was motivated by a reader/researcher who came across my blog while wrestling with a simulation study. After a little back and forth about how to generate ordinal categorical data, I ended up with a function that might be useful. Here’s a little example that uses the <code>likert</code> package, which makes plotting Likert-type data easy and attractive.</p>
<div id="defining-the-data" class="section level3">
<h3>Defining the data</h3>
<p>The proportional odds model assumes a baseline distribution of probabilities. In the case of a survey item, this baseline is the probability of responding at a particular level - in this example I assume a range of 1 (strongly disagree) to 4 (strongly agree) - given a value of zero for all of the covariates. In this example, there is a single predictor <span class="math inline">\(x\)</span> that ranges from -0.5 to 0.5. The baseline probabilities of the response variable <span class="math inline">\(r\)</span> will apply in cases where <span class="math inline">\(x = 0\)</span>. In the proportional odds data generating process, the covariates “influence” the response through an additive shift (either positive or negative) on the logistic scale. (If this makes no sense at all, maybe check out my <a href="https://www.rdatagen.net/post/a-hidden-process-part-2-of-2/">earlier post</a> for a little explanation.) Here, this additive shift is represented by the variable <span class="math inline">\(z\)</span>, which is a function of <span class="math inline">\(x\)</span>.</p>
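<p>To make the mechanism concrete, here is a minimal sketch of proportional odds data generation done by hand: shift the baseline cumulative log odds by <span class="math inline">\(z\)</span>, convert back to cumulative probabilities, and draw a category. (This is only an illustration of the mechanism, not the actual internals of <code>genOrdCat</code>; I am assuming the convention that a positive shift moves responses toward agreement, consistent with a positive coefficient in the model below.)</p>

```r
# Illustration only: proportional odds generation by hand (base R)
baseprobs <- c(0.40, 0.25, 0.15, 0.20)
cumLogOdds <- qlogis(cumsum(baseprobs)[-4])  # 3 baseline thresholds

rcat <- function(z) {
  cumProbs <- plogis(cumLogOdds - z)   # shifted cumulative probabilities
  probs <- diff(c(0, cumProbs, 1))     # category probabilities
  sample(1:4, 1, prob = probs)         # draw a single response
}

rcat(z = 0)    # draws from the baseline distribution
rcat(z = 0.5)  # shifts mass toward "strongly agree"
```

<p>With <span class="math inline">\(z = 0\)</span>, the category probabilities are exactly the baseline probabilities.</p>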
<pre class="r"><code>library(simstudy)
baseprobs<-c(0.40, 0.25, 0.15, 0.20)
def <- defData(varname="x", formula="-0.5;0.5", dist = "uniform")
def <- defData(def, varname = "z", formula = "2*x", dist = "nonrandom")</code></pre>
</div>
<div id="generate-data" class="section level3">
<h3>Generate data</h3>
<p>The ordinal data is generated after a data set has been created with an adjustment variable. We have to provide the data.table name, the name of the adjustment variable, and the base probabilities. That’s really it.</p>
<pre class="r"><code>set.seed(2017)
dx <- genData(2500, def)
dx <- genOrdCat(dx, adjVar = "z", baseprobs, catVar = "r")
dx <- genFactor(dx, "r", c("Strongly disagree", "Disagree",
"Agree", "Strongly agree"))
print(dx)</code></pre>
<pre><code>## id x z r fr
## 1: 1 0.42424261 0.84848522 2 Disagree
## 2: 2 0.03717641 0.07435283 3 Agree
## 3: 3 -0.03080435 -0.06160871 3 Agree
## 4: 4 -0.21137382 -0.42274765 1 Strongly disagree
## 5: 5 0.27008816 0.54017632 1 Strongly disagree
## ---
## 2496: 2496 -0.32250407 -0.64500815 4 Strongly agree
## 2497: 2497 -0.10268875 -0.20537751 2 Disagree
## 2498: 2498 -0.17037112 -0.34074223 2 Disagree
## 2499: 2499 0.14778233 0.29556465 2 Disagree
## 2500: 2500 0.10665252 0.21330504 3 Agree</code></pre>
<p>The expected cumulative log odds when <span class="math inline">\(x=0\)</span> can be calculated from the base probabilities:</p>
<pre class="r"><code>dp <- data.table(baseprobs,
cumProb = cumsum(baseprobs),
cumOdds = cumsum(baseprobs)/(1 - cumsum(baseprobs))
)
dp[, cumLogOdds := log(cumOdds)]
dp</code></pre>
<pre><code>##    baseprobs cumProb   cumOdds cumLogOdds
## 1:      0.40    0.40 0.6666667 -0.4054651
## 2:      0.25    0.65 1.8571429  0.6190392
## 3:      0.15    0.80 4.0000000  1.3862944
## 4:      0.20    1.00       Inf        Inf</code></pre>
<p>If we fit a cumulative odds model (using package <code>ordinal</code>), we recover those cumulative log odds (see the output under the section labeled “Threshold coefficients”). Also, we get an estimate for the coefficient of <span class="math inline">\(x\)</span> (where the true value used to generate the data was 2.00):</p>
<pre class="r"><code>library(ordinal)
model.fit <- clm(fr ~ x, data = dx, link = "logit")
summary(model.fit)</code></pre>
<pre><code>## formula: fr ~ x
## data: dx
##
## link threshold nobs logLik AIC niter max.grad cond.H
## logit flexible 2500 -3185.75 6379.51 5(0) 3.19e-11 3.3e+01
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## x 2.096 0.134 15.64 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Threshold coefficients:
## Estimate Std. Error z value
## Strongly disagree|Disagree -0.46572 0.04243 -10.98
## Disagree|Agree 0.60374 0.04312 14.00
## Agree|Strongly agree 1.38954 0.05049 27.52</code></pre>
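<p>As a quick sanity check, the estimated threshold coefficients can be converted back to cumulative probabilities with <code>plogis</code>; they should sit close to the cumulative baseline probabilities (0.40, 0.65, 0.80) used to generate the data:</p>

```r
# Threshold estimates copied from the clm output above
thresholds <- c(-0.46572, 0.60374, 1.38954)

# convert cumulative log odds back to cumulative probabilities;
# compare with c(0.40, 0.65, 0.80)
round(plogis(thresholds), 2)
```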
</div>
<div id="looking-at-the-data" class="section level3">
<h3>Looking at the data</h3>
<p>Below is a plot of the response as a function of the predictor <span class="math inline">\(x\)</span>. I “jitter” the data prior to plotting; otherwise, individual responses would overlap and obscure each other.</p>
<pre class="r"><code>library(ggplot2)

dx[, rjitter := jitter(as.numeric(r), factor = 0.5)]

ggplot(data = dx, aes(x = x, y = rjitter)) +
  geom_point(color = "forestgreen", size = 0.5) +
  scale_y_continuous(breaks = c(1:4),
                     labels = c("Strongly disagree", "Disagree",
                                "Agree", "Strongly Agree")) +
  theme(panel.grid.minor = element_blank(),
        panel.grid.major.x = element_blank(),
        axis.title.y = element_blank())</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-11-07-generating-and-displaying-likert-type-data_files/figure-html/unnamed-chunk-5-1.png" width="672" /></p>
<p>You can see that when <span class="math inline">\(x\)</span> is smaller (closer to -0.5), a response of “Strongly disagree” is more likely. Conversely, when <span class="math inline">\(x\)</span> is closer to +0.5, the proportion of folks responding with “Strongly agree” increases.</p>
<p>If we “bin” the individual responses by ranges of <span class="math inline">\(x\)</span>, say grouping by tenths, -0.5 to -0.4, -0.4 to -0.3, all the way to 0.4 to 0.5, we can get another view of how the probabilities shift with respect to <span class="math inline">\(x\)</span>.</p>
<p>The <code>likert</code> package requires very little data manipulation, and once the data are set, it is easy to look at the data in a number of different ways, a couple of which I plot here. I encourage you to look at the <a href="http://jason.bryer.org/likert/">website</a> for many more examples and instructions on how to download the latest version from github.</p>
<pre class="r"><code>library(likert)
bins <- cut(dx$x, breaks = seq(-.5, .5, .1), include.lowest = TRUE)
dx[ , xbin := bins]
item <- data.frame(dx[, fr])
names(item) <- "r"
bin.grp <- factor(dx[, xbin])
likert.bin <- likert(item, grouping = bin.grp)
likert.bin</code></pre>
<pre><code>## Group Item Strongly disagree Disagree Agree Strongly agree
## 1 [-0.5,-0.4] r 65.63877 18.50220 7.048458 8.810573
## 2 (-0.4,-0.3] r 53.33333 27.40741 8.888889 10.370370
## 3 (-0.3,-0.2] r 52.84553 19.51220 10.975610 16.666667
## 4 (-0.2,-0.1] r 48.00000 22.80000 12.800000 16.400000
## 5 (-0.1,0] r 40.24390 24.39024 17.886179 17.479675
## 6 (0,0.1] r 35.20599 25.46816 15.355805 23.970037
## 7 (0.1,0.2] r 32.06107 27.09924 17.175573 23.664122
## 8 (0.2,0.3] r 25.00000 25.40984 21.721311 27.868852
## 9 (0.3,0.4] r 23.91304 27.39130 17.391304 31.304348
## 10 (0.4,0.5] r 17.82946 21.70543 20.155039 40.310078</code></pre>
<pre class="r"><code>plot(likert.bin)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-11-07-generating-and-displaying-likert-type-data_files/figure-html/unnamed-chunk-7-1.png" width="672" /></p>
<pre class="r"><code>plot(likert.bin, centered = FALSE)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-11-07-generating-and-displaying-likert-type-data_files/figure-html/unnamed-chunk-8-1.png" width="672" /></p>
<p>These plots show what data look like when the cumulative log odds are proportional as we move across different levels of a covariate. (Note that the two center groups should be closest to the baseline probabilities that were used to generate the data.) If you have real data, obviously it is useful to look at it first to see if this type of pattern emerges from the data. When we have more than one or two covariates, the pictures are not as useful, but then it also is probably harder to justify the proportional odds assumption.</p>
</div>
Thinking about different ways to analyze sub-groups in an RCT
https://www.rdatagen.net/post/sub-group-analysis-in-rct/
Wed, 01 Nov 2017 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/sub-group-analysis-in-rct/<p>Here’s the scenario: we have an intervention that we think will improve outcomes for a particular population. Furthermore, there are two sub-groups (let’s say defined by which of two medical conditions each person in the population has) and we are interested in knowing if the intervention effect is different for each sub-group.</p>
<p>And here’s the question: what is the ideal way to set up a study so that we can assess (1) the intervention effects on the group as a whole, but also (2) the sub-group specific intervention effects?</p>
<p>This is a pretty straightforward, text-book scenario. Sub-group analysis is common in many areas of research, including health services research where I do most of my work. It is definitely an advantage to know ahead of time if you want to do a sub-group analysis, as you would in designing a stratified randomized controlled trial. Much of the criticism of these sub-group analyses arises when they are not pre-specified and conducted in an <em>ad hoc</em> manner after the study data have been collected. The danger there, of course, is that the assumptions underlying the validity of a hypothesis test are violated. (It may not be easy to convince folks to avoid hypothesis testing.) In planning ahead for these analyses, researchers are less likely to be accused of snooping through data in search of findings.</p>
<p>So, given that you know you want to do these analyses, the primary issue is how they should be structured. In particular, how should the statistical tests be set up so that we can draw reasonable conclusions? In my mind there are a few ways to answer the question.</p>
<div id="three-possible-models" class="section level2">
<h2>Three possible models</h2>
<p>Here are three models that can help us assess the effect of an intervention on an outcome in a population with at least two sub-groups:</p>
<p><span class="math display">\[ \text{Model 1: } Y_i = \beta_0 + \beta_1 D_i \]</span></p>
<p><span class="math display">\[ \text{Model 2: } Y_i = \beta_0^{\prime} + \beta_1^{\prime} D_i + \beta^{\prime}_2 T_i \]</span></p>
<p><span class="math display">\[ \text{Model 3: } Y_i = \beta_0^{\prime\prime} + \beta_1^{\prime\prime} D_i +\beta^{\prime\prime}_2 T_i +\beta^{\prime\prime}_3 T_i D_i\]</span></p>
<p>where <span class="math inline">\(Y_i\)</span> is the outcome for subject <span class="math inline">\(i\)</span>, <span class="math inline">\(T_i\)</span> is an indicator of treatment and equals 1 if the subject received the treatment, and <span class="math inline">\(D_i\)</span> is an indicator of having the condition that defines the second sub-group. <em>Model 1</em> assumes that only the medical condition affects the outcome. <em>Model 2</em> assumes that if the intervention does have an effect, it is the same regardless of sub-group. And <em>Model 3</em> allows for the possibility that intervention effects vary between sub-groups.</p>
<div id="main-effects" class="section level3">
<h3>1. Main effects</h3>
<p>In the first approach, we would estimate both <em>Model 2</em> and <em>Model 3</em>, and conduct a hypothesis test using the null hypothesis <span class="math inline">\(\text{H}_{01}\)</span>: <span class="math inline">\(\beta_2^{\prime} = 0\)</span>. In this case we would reject <span class="math inline">\(\text{H}_{01}\)</span> if the p-value for the estimated value of <span class="math inline">\(\beta_2^{\prime}\)</span> was less than 0.05. If in fact we do reject <span class="math inline">\(\text{H}_{01}\)</span> (and conclude that there is an overall main effect), we could then (and only then) proceed to a second hypothesis test of the interaction term in <em>Model 3</em>, testing <span class="math inline">\(\text{H}_{02}\)</span>: <span class="math inline">\(\beta_3^{\prime\prime} = 0\)</span>. In this second test we can also evaluate the test using a cutoff of 0.05, because we only do this test if we reject the first one.</p>
<p>This is not a path typically taken, for reasons we will see at the end when we explore the relative power of each test under different effect size scenarios.</p>
</div>
<div id="interaction-effects" class="section level3">
<h3>2. Interaction effects</h3>
<p>In the second approach, we would also estimate just <em>Models 2</em> and <em>3</em>, but would reverse the order of the tests. We would first test for interaction in <em>Model 3</em>: <span class="math inline">\(\text{H}_{01}\)</span>: <span class="math inline">\(\beta_3^{\prime\prime} = 0\)</span>. If we reject <span class="math inline">\(\text{H}_{01}\)</span> (and conclude that the intervention effects are different across the two sub-groups), we stop there, because we have evidence that the intervention has some sort of effect, and that it is different across the sub-groups. (Of course, we can report the point estimates.) However, if we fail to reject <span class="math inline">\(\text{H}_{01}\)</span>, we would proceed to test the main effect from <em>Model 2</em>. In this case we would test <span class="math inline">\(\text{H}_{02}\)</span>: <span class="math inline">\(\beta_2^{\prime} = 0\)</span>.</p>
<p>In this approach, we are forced to adjust the size of our tests (and use, for example, 0.025 as a cutoff for both). Here is a little intuition for why. If we use a cutoff of 0.05 for the first test and in fact there is no effect, 5% of the time we will draw the wrong conclusion (by wrongly rejecting <span class="math inline">\(\text{H}_{01}\)</span>). However, 95% of the time we will <em>correctly</em> fail to reject the (true) null hypothesis in step one, and thus proceed to step two. Of all the times we proceed to the second step (which will be 95% of the time), we will err 5% of the time (again assuming the null is true). So, 95% of the time we will have an additional 5% error due to the second step, for an error rate of 4.75% due to the second test (95% <span class="math inline">\(\times\)</span> 5%). In total - adding up the errors from steps 1 and 2 - we will draw the wrong conclusion almost 10% of the time. However, if we use a cutoff of 0.025, then we will be wrong 2.5% of the time in step 1, and about 2.4% (97.5% <span class="math inline">\(\times\)</span> 2.5%) of the time in the second step, for a total error rate of just under 5%.</p>
<p>In the first approach (looking at the main effect first), we need to make no adjustment, because we only do the second test when we’ve rejected (incorrectly) the null hypothesis. By definition, errors we make in the second step will only occur in cases where we have made an error in the first step. In the first approach where we evaluate main effects first, the errors are nested. In the second, they are not nested but additive.</p>
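<p>The arithmetic behind these error rates is simple enough to check directly:</p>

```r
# Overall type I error for Approach 2 under the null:
# (error in step 1) + (proceed to step 2) * (error in step 2)
alpha <- 0.05
alpha + (1 - alpha) * alpha   # 0.0975 - inflated with a 0.05 cutoff

alpha <- 0.025
alpha + (1 - alpha) * alpha   # 0.049375 - just under 5% after adjustment
```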
</div>
<div id="global-test" class="section level3">
<h3>3. Global test</h3>
<p>In the third and last approach, we start by comparing <em>Model 3</em> with <em>Model 1</em> using a global F-test. In this case, we are asking whether a model that includes treatment as a predictor does “better” than a model that only adjusts for sub-group membership. The null hypothesis can crudely be stated as <span class="math inline">\(\text{H}_{01}\)</span>: <span class="math inline">\(\text{Model }3 = \text{Model }1\)</span>. If we reject this hypothesis (and conclude that the intervention does have some sort of effect, either generally or differentially for each sub-group), then we are free to evaluate <em>Models 2</em> and <em>3</em> to see whether there is a varying effect or not.</p>
<p>Here we can use cutoffs of 0.05 in our hypothesis tests. Again, we only make errors in the second step if we’ve made a mistake in the first step. The errors are nested and not additive.</p>
</div>
</div>
<div id="simulating-error-rates" class="section level2">
<h2>Simulating error rates</h2>
<p>This first simulation shows that the error rates of the three approaches are all approximately 5% under the assumption of no intervention effect. That is, given that there is no effect of the intervention on either sub-group (on average), we will draw the wrong conclusion about 5% of the time. In these simulations, the outcome depends only on disease status and not the treatment. Or, in other words, the null hypothesis is in fact true:</p>
<pre class="r"><code>library(simstudy)

# define the data
def <- defData(varname = "disease", formula = .5, dist = "binary")

# outcome depends only on sub-group status, not intervention
def2 <- defCondition(condition = "disease == 0",
                     formula = 0.0, variance = 1,
                     dist = "normal")

def2 <- defCondition(def2, condition = "disease == 1",
                     formula = 0.5, variance = 1,
                     dist = "normal")

set.seed(1987) # the year I graduated from college, in case
               # you are wondering ...

pvals <- data.table() # store simulation results

# run 2500 simulations
for (i in 1:2500) {

  # generate data set
  dx <- genData(400, def)
  dx <- trtAssign(dx, nTrt = 2, balanced = TRUE,
                  strata = "disease", grpName = "trt")
  dx <- addCondition(def2, dx, "y")

  # fit 3 models
  lm1 <- lm(y ~ disease, data = dx)
  lm2 <- lm(y ~ disease + trt, data = dx)
  lm3 <- lm(y ~ disease + trt + trt*disease, data = dx)

  # extract relevant p-values
  cM <- coef(summary(lm2))["trt", 4]
  cI <- coef(summary(lm3))["disease:trt", 4]
  fI <- anova(lm1, lm3)$`Pr(>F)`[2]

  # store the p-values from each iteration
  pvals <- rbind(pvals, data.table(cM, cI, fI))
}

pvals</code></pre>
<pre><code>## cM cI fI
## 1: 0.72272413 0.727465073 0.883669625
## 2: 0.20230262 0.243850267 0.224974909
## 3: 0.83602639 0.897635326 0.970757254
## 4: 0.70949192 0.150259496 0.331072131
## 5: 0.85990787 0.449130976 0.739087609
## ---
## 2496: 0.76142389 0.000834619 0.003572901
## 2497: 0.03942419 0.590363493 0.103971344
## 2498: 0.16305568 0.757882365 0.360893205
## 2499: 0.81873930 0.004805028 0.018188997
## 2500: 0.69122281 0.644801480 0.830958227</code></pre>
<pre class="r"><code># Approach 1
pvals[, mEffect := (cM <= 0.05)] # cases where we would reject null
pvals[, iEffect := (cI <= 0.05)]
# total error rate
pvals[, mean(mEffect & iEffect)] +
pvals[, mean(mEffect & !iEffect)]</code></pre>
<pre><code>## [1] 0.0496</code></pre>
<pre class="r"><code># Approach 2
pvals[, iEffect := (cI <= 0.025)]
pvals[, mEffect := (cM <= 0.025)]
# total error rate
pvals[, mean(iEffect)] +
pvals[, mean((!iEffect) & mEffect)]</code></pre>
<pre><code>## [1] 0.054</code></pre>
<pre class="r"><code># Approach 3
pvals[, fEffect := (fI <= 0.05)]
pvals[, iEffect := (cI <= 0.05)]
pvals[, mEffect := (cM <= 0.05)]
# total error rate
pvals[, mean(fEffect & iEffect)] +
pvals[, mean(fEffect & !(iEffect) & mEffect)]</code></pre>
<pre><code>## [1] 0.05</code></pre>
<p>If we use a cutoff of 0.05 for the second approach, we can see that the overall error rate is indeed inflated to close to 10%:</p>
<pre class="r"><code># Approach 2 - with invalid cutoff
pvals[, iEffect := (cI <= 0.05)]
pvals[, mEffect := (cM <= 0.05)]
# total error rate
pvals[, mean(iEffect)] +
pvals[, mean((!iEffect) & mEffect)]</code></pre>
<pre><code>## [1] 0.1028</code></pre>
</div>
<div id="exploring-power" class="section level2">
<h2>Exploring power</h2>
<p>Now that we have established at least three valid testing schemes, we can compare them by assessing the <em>power</em> of the tests. For the uninitiated, power is simply the probability of concluding that there is an effect when in fact there truly is an effect. Power depends on a number of factors, such as sample size, effect size, variation, and importantly for this post, the testing scheme.</p>
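<p>To get a feel for how sensitive power is to effect size in a simple two-group comparison, base R’s <code>power.t.test</code> is handy. (The per-group sample size here is purely illustrative, not necessarily what was used in the simulations.)</p>

```r
# Power of a two-sample t-test across a few effect sizes (sd = 1)
sapply(c(0.1, 0.25, 0.5), function(d) {
  power.t.test(n = 200, delta = d, sd = 1, sig.level = 0.05)$power
})
```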
<p>The plot below shows the results of estimating power using a range of assumptions about an intervention’s effect in the two subgroups and the different approaches to testing. (The sample size and variation were fixed across all simulations.) The effect sizes ranged from -0.5 to +0.5. (I have not included the code here, because it is quite similar to what I did to assess the error rates. If anyone wants it, please let me know, and I can post it on github or send it to you.)</p>
<p>The estimated power reflects the probability that the tests correctly rejected at least one null hypothesis. So, if there was no interaction (say both group effects were +0.5) but there was a main effect, we would be correct if we rejected the hypothesis associated with the main effect. Take a look at the plot:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-interaction/Models.png" />
</div>
<p>What can we glean from this power simulation? Well, it looks like the global test that compares the interaction model with the null model (Approach 3) is the way to go, but just barely when compared to the approach that focuses solely on the interaction model first.</p>
<p>And, we see clearly that the first approach suffers from a fatal flaw. When the sub-group effects are offsetting, as they are when the effect is -0.5 in subgroup 1 and +0.5 in subgroup 2, we will fail to reject the null that says there is no main effect. As a result, we will never test for interaction and see that in fact the intervention does have an effect on both sub-groups (one positive and one negative). We don’t get to test for interaction, because the rule was designed to keep the error rate at 5% when in fact there is no effect, main or otherwise.</p>
<p>Of course, things are not totally clear cut. If we are quite certain that the effects are going to be positive for both groups, the second approach is not such a disaster. In fact, if we suspect that one of the sub-group effects will be large, it may be preferable to go with this approach. (Look at the right-hand side of the bottom plot to see this.) But, it is still hard to argue (though please do if you feel so inclined), at least based on the assumptions I used in the simulation, that we should take any approach other than the global test.</p>
</div>
Who knew likelihood functions could be so pretty?
https://www.rdatagen.net/post/mle-can-be-pretty/
Mon, 23 Oct 2017 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/mle-can-be-pretty/<p>I just released a new iteration of <code>simstudy</code> (version 0.1.6), which fixes a bug or two and adds several spline related routines (available on <a href="https://cran.r-project.org/web/packages/simstudy/index.html">CRAN</a>). The <a href="https://www.rdatagen.net/post/generating-non-linear-data-using-b-splines/">previous post</a> focused on using spline curves to generate data, so I won’t repeat myself here. And, apropos of nothing really - I thought I’d take the opportunity to do a simple simulation to briefly explore the likelihood function. It turns out if we generate lots of them, it can be pretty, and maybe provide a little insight.</p>
<p>If a probability density (or mass) function is more or less forward-looking - answering the question of what is the probability of seeing some future outcome based on some known probability model - the likelihood function is essentially backward-looking. The likelihood takes the data as given or already observed, and allows us to assess how likely that outcome was under different assumptions about the underlying probability model. While the form of the model is not necessarily in question (normal, Poisson, binomial, etc.) - though it certainly should be - the specific values of the parameters that define the location and shape of that distribution are not known. The likelihood function provides a guide as to how the backward-looking probability varies across different values of the distribution’s parameters for a <em>given</em> data set.</p>
<p>We are generally most interested in finding out where the peak of that curve is, because the parameters associated with that point (the maximum likelihood estimates) are often used to describe the “true” underlying data generating process. However, we are also quite interested in the shape of the likelihood curve itself, because that provides information about how certain we can be about our conclusions about the “true” model. In short, a function that has a more clearly defined peak provides more information than one that is pretty flat. When you are climbing Mount Everest, you are pretty sure you know when you reach the peak. But when you are walking across the rolling hills of Tuscany, you can never be certain if you are at the top.</p>
<div id="the-setup" class="section level3">
<h3>The setup</h3>
<p>A likelihood curve is itself a function of the observed data. That is, if we were able to draw different samples of data from a single population, the curves associated with each of those samples would vary. In effect, the function is a random variable. For this simulation, I repeatedly make draws from an underlying known model - in this case a very simple linear model with only one unknown slope parameter - and plot the likelihood function for each data set across a range of possible slopes, along with the maximum point for each curve.</p>
<p>In this example, I am interested in understanding the relationship between a variable <span class="math inline">\(X\)</span> and some outcome <span class="math inline">\(Y\)</span>. In truth, there is a simple relationship between the two:</p>
<p><span class="math display">\[ Y_i = 1.5 \times X_i + \epsilon_i \ ,\]</span> where <span class="math inline">\(\epsilon_i \sim Normal(0, \sigma^2)\)</span>. In this case, we have <span class="math inline">\(n\)</span> individual observations, so that <span class="math inline">\(i \in (1,...n)\)</span>. Under this model, the likelihood where we do know <span class="math inline">\(\sigma^2\)</span> but don’t know the coefficient <span class="math inline">\(\beta\)</span> can be written as:</p>
<p><span class="math display">\[L(\beta;y_1, y_2,..., y_n, x_1, x_2,..., x_n,\sigma^2) = (2\pi\sigma^2)^{-n/2}\text{exp}\left(-\frac{1}{2\sigma^2} \sum_{i=1}^n (y_i - \beta x_i)^2\right)\]</span></p>
<p>Since it is much easier to work with sums than products, we generally work with the log-likelihood function:</p>
<p><span class="math display">\[l(\beta;y_1, y_2,..., y_n, x_1, x_2,..., x_n, \sigma^2) = -\frac{n}{2}\text{ln}(2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^n (y_i - \beta x_i)^2\]</span> In the log-likelihood function, <span class="math inline">\(n\)</span>, <span class="math inline">\(x_i\)</span>’s, <span class="math inline">\(y_i\)</span>’s, and <span class="math inline">\(\sigma^2\)</span> are all fixed and known - we are trying to estimate <span class="math inline">\(\beta\)</span>, the slope. That is, the likelihood (or log-likelihood) is a function of <span class="math inline">\(\beta\)</span> only. Typically, we will have more than one unknown parameter - say multiple regression coefficients, or an unknown variance parameter (<span class="math inline">\(\sigma^2\)</span>) - but then visualizing the likelihood function gets very hard or impossible; I am not great at imagining (or plotting) in <span class="math inline">\(p\)</span> dimensions, which is what we need to do if we have <span class="math inline">\(p\)</span> parameters.</p>
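<p>In this simple one-parameter case, the maximum is actually available in closed form. Setting the derivative of the log-likelihood with respect to <span class="math inline">\(\beta\)</span> to zero:</p>
<p><span class="math display">\[\frac{\partial l}{\partial \beta} = \frac{1}{\sigma^2} \sum_{i=1}^n x_i (y_i - \beta x_i) = 0 \implies \hat{\beta} = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2}\]</span></p>
<p>which is just the ordinary least squares estimate for a regression through the origin. That won’t stop us from looking at the whole curve, of course, since the shape is the point of the exercise.</p>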
</div>
<div id="the-simulation" class="section level3">
<h3>The simulation</h3>
<p>To start, here is a one-line function that returns the log-likelihood of a data set (containing <span class="math inline">\(x\)</span>’s and <span class="math inline">\(y\)</span>’s) based on a specific value of <span class="math inline">\(\beta\)</span>.</p>
<pre class="r"><code>library(data.table)

ll <- function(b, dt, var) {
  dt[, sum(dnorm(x = y, mean = b*x, sd = sqrt(var), log = TRUE))]
}

test <- data.table(x = c(1, 1, 4), y = c(2.0, 1.8, 6.3))
ll(b = 1.8, test, var = 1)</code></pre>
<pre><code>## [1] -3.181816</code></pre>
<pre class="r"><code>ll(b = 0.5, test, var = 1)</code></pre>
<pre><code>## [1] -13.97182</code></pre>
<p>Next, I generate a single draw of 200 observations of <span class="math inline">\(x\)</span>’s and <span class="math inline">\(y\)</span>’s:</p>
<pre class="r"><code>library(simstudy)

b <- c(seq(0, 3, length.out = 500))
truevar <- 1

defX <- defData(varname = "x", formula = 0,
                variance = 9, dist = "normal")
defA <- defDataAdd(varname = "y", formula = "1.5*x",
                   variance = truevar, dist = "normal")

set.seed(21)

dt <- genData(200, defX)
dt <- addColumns(defA, dt)
dt</code></pre>
<pre><code>##       id         x          y
##   1:   1  2.379040  4.3166333
##   2:   2  1.566754  0.9801416
##   3:   3  5.238667  8.4869651
##   4:   4 -3.814008 -5.6348268
##   5:   5  6.592169  9.6706410
##  ---
## 196: 196  3.843341  4.5740967
## 197: 197 -1.334778 -1.5701510
## 198: 198  3.583162  5.0193182
## 199: 199  1.112866  1.5506167
## 200: 200  4.913644  8.2063354</code></pre>
<p>The likelihood function is traced out with a series of calls to the function <code>ll</code> using <code>sapply</code>, one call for each value of the <code>b</code> vector. What we end up with is a log-likelihood value for each potential value of <span class="math inline">\(\beta\)</span> given the data.</p>
<pre class="r"><code>loglik <- sapply(b, ll, dt = dt, var = truevar)
bt <- data.table(b, loglike = loglik)
bt</code></pre>
<pre><code>##                b   loglike
##   1: 0.000000000 -2149.240
##   2: 0.006012024 -2134.051
##   3: 0.012024048 -2118.924
##   4: 0.018036072 -2103.860
##   5: 0.024048096 -2088.858
##  ---
## 496: 2.975951904 -2235.436
## 497: 2.981963928 -2251.036
## 498: 2.987975952 -2266.697
## 499: 2.993987976 -2282.421
## 500: 3.000000000 -2298.206</code></pre>
<p>In a highly simplified approach to maximizing the likelihood, I simply select the <span class="math inline">\(\beta\)</span> that has the largest likelihood based on my calls to <code>ll</code> (I am limiting my search to values between 0 and 3, just because I happen to know the true value of the parameter). Of course, this is not how things work in the real world, particularly when you have more than one parameter to estimate - the estimation process requires elaborate algorithms. In the case of a normal regression model, it is actually the case that the ordinary least squares estimate of the regression parameters is the maximum likelihood estimate (you can see in the above equations that maximizing the likelihood <em>is</em> minimizing the sum of the squared differences of the observed and expected values).</p>
<pre class="r"><code>maxlik <- max(loglik)

lmfit <- lm(y ~ x - 1, data = dt)    # OLS estimate
(maxest <- bt[loglike == maxlik, b]) # value of beta that maximizes likelihood</code></pre>
<pre><code>## [1] 1.472946</code></pre>
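<p>Rather than scanning a grid of candidate values, a continuous optimizer should land on essentially the same estimate. Here is a minimal, self-contained sketch (base R only, with data generated to mirror the setup above rather than reusing the <code>simstudy</code> objects) comparing <code>optimize</code> applied to the log-likelihood with the OLS estimate:</p>

```r
# base-R sketch: maximize the log-likelihood directly and compare with OLS
set.seed(21)
x <- rnorm(200, mean = 0, sd = 3)        # mirrors the data definition above
y <- rnorm(200, mean = 1.5 * x, sd = 1)  # true slope is 1.5, sigma^2 = 1

loglik_b <- function(b) sum(dnorm(y, mean = b * x, sd = 1, log = TRUE))

opt <- optimize(loglik_b, interval = c(0, 3), maximum = TRUE)
ols <- coef(lm(y ~ x - 1))[["x"]]

# the two estimates should agree up to the optimizer's tolerance
c(mle = opt$maximum, ols = ols)
```

<p>Because the normal log-likelihood is maximized exactly where the sum of squared residuals is minimized, the two numbers should match to several decimal places.</p>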
<p>The plot below on the left shows the data and the estimated slope using OLS. The plot on the right shows the likelihood function. The <span class="math inline">\(x\)</span>-axis represents the values of <span class="math inline">\(\beta\)</span>, and the <span class="math inline">\(y\)</span>-axis is the log-likelihood as a function of those <span class="math inline">\(\beta's\)</span>:</p>
<pre class="r"><code>library(ggplot2)

slopetxt <- paste0("OLS estimate: ", round(coef(lmfit), 2))

p1 <- ggplot(data = dt, aes(x = x, y = y)) +
  geom_point(color = "grey50") +
  theme(panel.grid = element_blank()) +
  geom_smooth(method = "lm", se = FALSE,
              size = 1, color = "#1740a6") +
  annotate(geom = "text", label = slopetxt,
           x = -5, y = 7.5,
           family = "sans")

p2 <- ggplot(data = bt) +
  scale_y_continuous(name = "Log likelihood") +
  scale_x_continuous(limits = c(0, 3),
                     breaks = seq(0, 3, 0.5),
                     name = expression(beta)) +
  theme(panel.grid.minor = element_blank()) +
  geom_line(aes(x = b, y = loglike),
            color = "#a67d17", size = 1) +
  geom_point(x = maxest, y = maxlik, color = "black", size = 3)

library(gridExtra)

grid.arrange(p1, p2, nrow = 1)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-23-repeated-sampling-to-see-what-the-likelihood-function-looks-like-literally_files/figure-html/unnamed-chunk-5-1.png" width="864" /></p>
</div>
<div id="adding-variation" class="section level3">
<h3>Adding variation</h3>
<p>Now, for the pretty part. Below, I show plots of multiple likelihood functions under three scenarios. The only thing that differs across the scenarios is the level of variance in the error term, <span class="math inline">\(\sigma^2\)</span>. (I have not included the code here, since it essentially loops through the process described above; if you want the code, just let me know and I will post it. I do want to highlight that I used the <code>randomcoloR</code> package to generate the colors in the plots.)</p>
<p><img src="https://www.rdatagen.net/post/2017-10-23-repeated-sampling-to-see-what-the-likelihood-function-looks-like-literally_files/figure-html/unnamed-chunk-6-1.png" width="672" /><img src="https://www.rdatagen.net/post/2017-10-23-repeated-sampling-to-see-what-the-likelihood-function-looks-like-literally_files/figure-html/unnamed-chunk-6-2.png" width="672" /><img src="https://www.rdatagen.net/post/2017-10-23-repeated-sampling-to-see-what-the-likelihood-function-looks-like-literally_files/figure-html/unnamed-chunk-6-3.png" width="672" /></p>
<p>What we can see here is that as the variance increases, we move away from Mt. Everest towards the Tuscan hills. The variance of the underlying process clearly has an impact on the uncertainty of the maximum likelihood estimates. The likelihood functions flatten out and the MLEs have more variability with increased underlying variance of the outcomes <span class="math inline">\(y\)</span>. Of course, this is all consistent with maximum likelihood theory.</p>
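<p>The flattening has a precise counterpart in theory: the curvature of the log-likelihood at its peak is the observed Fisher information, and for this model the standard error of <span class="math inline">\(\hat{\beta}\)</span> is <span class="math inline">\(\sqrt{\sigma^2 / \sum x_i^2}\)</span>. A small base-R sketch (using the same <span class="math inline">\(x\)</span> distribution assumed above) shows the standard error growing with the error variance:</p>

```r
# analytic standard error of the MLE, sqrt(sigma^2 / sum(x^2)),
# evaluated for increasing error variances
set.seed(1)
x <- rnorm(200, mean = 0, sd = 3)

se_beta <- function(sigma2) sqrt(sigma2 / sum(x^2))

sapply(c(1, 4, 16), se_beta)  # doubles with each quadrupling of the variance
```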
</div>
Can we use B-splines to generate non-linear data?
https://www.rdatagen.net/post/generating-non-linear-data-using-b-splines/
Mon, 16 Oct 2017 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/generating-non-linear-data-using-b-splines/<p>I’m exploring the idea of adding a function or set of functions to the <code>simstudy</code> package that would make it possible to easily generate non-linear data. One way to do this would be using B-splines. Typically, one uses splines to fit a curve to data, but I thought it might be useful to switch things around a bit to use the underlying splines to generate data. This would facilitate exploring models where we know the assumption of linearity is violated. It would also make it easy to explore spline methods, because as with any other simulated data set, we would know the underlying data generating process.</p>
<div id="b-splines" class="section level3">
<h3>B-splines</h3>
<p>A B-spline is a linear combination of a set of basis functions that are determined by the number and location of specified knots or cut-points, as well as the (polynomial) degree of curvature. A degree of one implies a set of straight lines, a degree of two implies a quadratic curve, three a cubic curve, etc. This <a href="https://cran.r-project.org/web/packages/crs/vignettes/spline_primer.pdf">nice quick intro</a> provides much more insight into B-splines than I can provide here. Or if you want even more detail, check out this <a href="http://www.springer.com/us/book/9780387953663">book</a>. It is a very rich topic.</p>
<p>Within a cut-point region, the sum of the basis functions always equals 1. This is easy to see by looking at a plot of basis functions, several of which are provided below. The definition and shape of the basis functions do not in any way depend on the data, only on the degree and cut-points. Of course, these functions can be added together in infinitely different ways using weights. If one is trying to fit a B-spline line to data, those weights can be estimated using regression models.</p>
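<p>The partition-of-unity property is easy to verify numerically. A quick sketch using the <code>splines</code> package (which ships with base R):</p>

```r
library(splines)

# basis for a degree-1 spline with three interior knots
x <- seq(0, 1, length.out = 11)
basis <- bs(x, knots = c(0.25, 0.5, 0.75), degree = 1,
            Boundary.knots = c(0, 1), intercept = TRUE)

rowSums(basis)  # each row sums to 1, at every value of x
```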
</div>
<div id="splines-in-r" class="section level2">
<h2>Splines in R</h2>
<p>The <code>bs</code> function in the <code>splines</code> package returns values from these basis functions based on the specification of knots and degree of curvature. I wrote a wrapper function that uses the <code>bs</code> function to generate the basis functions, and then performs a linear transformation of these functions by multiplying them by the parameter vector <em>theta</em>, which is just a vector of coefficients. The linear combination at each value of <span class="math inline">\(x\)</span> (the support of the basis functions) generates a value (which I call <span class="math inline">\(y.spline\)</span>) on the desired curve. The wrapper returns a list of objects, including a data.table that contains <span class="math inline">\(x\)</span> and <span class="math inline">\(y.spline\)</span>, as well as the basis functions and knots.</p>
<pre class="r"><code>library(splines)
library(data.table)
library(ggplot2)
library(broom)

genSpline <- function(x, knots, degree, theta) {

  basis <- bs(x = x, knots = knots, degree = degree,
              Boundary.knots = c(0, 1), intercept = TRUE)

  y.spline <- basis %*% theta

  dt <- data.table(x, y.spline = as.vector(y.spline))

  return(list(dt = dt, basis = basis, knots = knots))
}</code></pre>
<p>I’ve also written two functions that make it easy to print the basis function and the spline curve. This will enable us to look at a variety of splines.</p>
<pre class="r"><code>plot.basis <- function(basisdata) {

  dtbasis <- as.data.table(basisdata$basis)
  dtbasis[, x := seq(0, 1, length.out = .N)]
  dtmelt <- melt(data = dtbasis, id = "x",
                 variable.name = "basis", variable.factor = TRUE)

  ggplot(data = dtmelt, aes(x = x, y = value, group = basis)) +
    geom_line(aes(color = basis), size = 1) +
    theme(legend.position = "none") +
    scale_x_continuous(limits = c(0, 1),
                       breaks = c(0, basisdata$knots, 1)) +
    theme(panel.grid.minor = element_blank())
}</code></pre>
<pre class="r"><code>plot.spline <- function(basisdata, points = FALSE) {

  p <- ggplot(data = basisdata$dt)

  if (points) p <- p + geom_point(aes(x = x, y = y), color = "grey75")

  p <- p +
    geom_line(aes(x = x, y = y.spline), color = "red", size = 1) +
    scale_y_continuous(limits = c(0, 1)) +
    scale_x_continuous(limits = c(0, 1), breaks = basisdata$knots) +
    theme(panel.grid.minor = element_blank())

  return(p)
}</code></pre>
<div id="linear-spline-with-quartile-cut-points" class="section level3">
<h3>Linear spline with quartile cut-points</h3>
<p>Here is a simple linear spline that has four regions defined by three cut-points, and the slope of the line varies across the regions. The first value of <em>theta</em> is essentially the intercept. When you look at the basis plot, you will see that any single region has two “active” basis functions (represented by two colors); the other functions are all 0 in that region. The slope of the line in each region is determined by the relevant values of theta. It is probably just easier to take a look:</p>
<pre class="r"><code>x <- seq(0, 1, length.out = 1000)
knots <- c(0.25, 0.5, 0.75)
theta = c(0.6, 0.1, 0.3, 0.2, 0.9)
sdata <- genSpline(x, knots, 1, theta)</code></pre>
<p>For this example, I am printing out the basis function for the first few values of <span class="math inline">\(x\)</span>.</p>
<pre class="r"><code>round( head(cbind(x = sdata$dt$x, sdata$basis)), 4 )</code></pre>
<pre><code>##          x     1     2 3 4 5
## [1,] 0.000 1.000 0.000 0 0 0
## [2,] 0.001 0.996 0.004 0 0 0
## [3,] 0.002 0.992 0.008 0 0 0
## [4,] 0.003 0.988 0.012 0 0 0
## [5,] 0.004 0.984 0.016 0 0 0
## [6,] 0.005 0.980 0.020 0 0 0</code></pre>
<pre class="r"><code>plot.basis(sdata)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-16-generating-non-linear-data-using-b-splines_files/figure-html/unnamed-chunk-5-1.png" width="672" /></p>
<pre class="r"><code>plot.spline(sdata)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-16-generating-non-linear-data-using-b-splines_files/figure-html/unnamed-chunk-5-2.png" width="672" /></p>
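<p>One nice property of the degree-1 case: at each boundary and interior knot, exactly one basis function equals 1, so the spline passes through the <em>theta</em> values themselves. A quick self-contained check:</p>

```r
library(splines)

knots <- c(0.25, 0.5, 0.75)
theta <- c(0.6, 0.1, 0.3, 0.2, 0.9)

# evaluate the degree-1 basis only at the boundaries and knots
x <- c(0, knots, 1)
basis <- bs(x, knots = knots, degree = 1,
            Boundary.knots = c(0, 1), intercept = TRUE)

as.vector(basis %*% theta)  # recovers theta exactly: 0.6 0.1 0.3 0.2 0.9
```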
</div>
<div id="same-knots-cut-points-but-different-theta-coefficients" class="section level3">
<h3>Same knots (cut-points) but different theta (coefficients)</h3>
<p>If we use the same knot and degree specification but change the vector <span class="math inline">\(theta\)</span>, we change the slope of the lines in each of the four regions:</p>
<pre class="r"><code>theta = c(0.2, 0.3, 0.8, 0.2, 0.1)
sdata <- genSpline(x, knots, 1, theta)
plot.basis(sdata)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-16-generating-non-linear-data-using-b-splines_files/figure-html/unnamed-chunk-6-1.png" width="672" /></p>
<pre class="r"><code>plot.spline(sdata)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-16-generating-non-linear-data-using-b-splines_files/figure-html/unnamed-chunk-6-2.png" width="672" /></p>
</div>
<div id="quadratic-spline-with-quartile-cut-points" class="section level3">
<h3>Quadratic spline with quartile cut-points</h3>
<p>The basis functions get a little more elaborate with a quadratic spline. With this added degree, we get an additional basis function in each region, so you should see 3 colors instead of 2. The resulting spline is parabolic in each region, but with a different shape, each of which is determined by <em>theta</em>.</p>
<pre class="r"><code>knots <- c(0.25, 0.5, 0.75)
theta = c(0.6, 0.1, 0.5, 0.2, 0.8, 0.3)
sdata <- genSpline(x, knots, 2, theta)
plot.basis(sdata)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-16-generating-non-linear-data-using-b-splines_files/figure-html/unnamed-chunk-7-1.png" width="672" /></p>
<pre class="r"><code>plot.spline(sdata)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-16-generating-non-linear-data-using-b-splines_files/figure-html/unnamed-chunk-7-2.png" width="672" /></p>
</div>
<div id="quadratic-spline-with-two-cut-points-three-regions" class="section level3">
<h3>Quadratic spline with two cut-points (three regions)</h3>
<pre class="r"><code>knots <- c(0.333, 0.666)
theta = c(0.2, 0.4, 0.1, 0.9, 0.6)
sdata <- genSpline(x, knots, 2, theta)
plot.basis(sdata)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-16-generating-non-linear-data-using-b-splines_files/figure-html/unnamed-chunk-8-1.png" width="672" /></p>
<pre class="r"><code>plot.spline(sdata)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-16-generating-non-linear-data-using-b-splines_files/figure-html/unnamed-chunk-8-2.png" width="672" /></p>
</div>
<div id="cubic-spline-with-two-cut-points-three-regions" class="section level3">
<h3>Cubic spline with two cut-points (three regions)</h3>
<p>And in this last example, we generate basis functions for a cubic spline that differs across three regions. The added curvature is apparent:</p>
<pre class="r"><code>knots <- c(0.333, 0.666)
theta = c(0.2, 0.6, 0.1, 0.9, 0.2, 0.8)
sdata <- genSpline(x, knots, 3, theta)
plot.basis(sdata)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-16-generating-non-linear-data-using-b-splines_files/figure-html/unnamed-chunk-9-1.png" width="672" /></p>
<pre class="r"><code>plot.spline(sdata)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-16-generating-non-linear-data-using-b-splines_files/figure-html/unnamed-chunk-9-2.png" width="672" /></p>
</div>
<div id="generating-new-data-from-the-underlying-spline" class="section level3">
<h3>Generating new data from the underlying spline</h3>
<p>It is a simple step to generate data from the spline. Each value on the line is treated as the mean, and “observed” data can be generated by adding variation. In this case, I use the normal distribution, but there is no reason other distributions can’t be used. I’m generating data based on the parameters in the previous example. And this time, the spline plot includes the randomly generated data:</p>
<pre class="r"><code>set.seed(5)
x <- runif(250)
sdata <- genSpline(x, knots, 3, theta)
sdata$dt[, y := rnorm(.N, y.spline, 0.1)]
plot.spline(sdata, points = TRUE)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-16-generating-non-linear-data-using-b-splines_files/figure-html/unnamed-chunk-10-1.png" width="672" /></p>
<p>Now that we have generated new data, why don’t we go ahead and fit a model to see if we can recover the coefficients specified in <em>theta</em>? We are interested in the relationship of <span class="math inline">\(x\)</span> and <span class="math inline">\(y\)</span>, but the relationship is not linear and changes across <span class="math inline">\(x\)</span>. To estimate a model, we regress the outcome data <span class="math inline">\(y\)</span> on the values of the basis function that correspond to each value of <span class="math inline">\(x\)</span>:</p>
<pre class="r"><code>dxbasis <- as.data.table(sdata$basis)
setnames(dxbasis, paste0("x", names(dxbasis)))
dxbasis[, y := sdata$dt$y]
round(dxbasis, 3)</code></pre>
<pre><code>##         x1    x2    x3    x4    x5    x6     y
##   1: 0.063 0.557 0.343 0.036 0.000 0.000 0.443
##   2: 0.000 0.000 0.140 0.565 0.295 0.000 0.542
##   3: 0.000 0.000 0.003 0.079 0.495 0.424 0.634
##   4: 0.003 0.370 0.523 0.104 0.000 0.000 0.232
##   5: 0.322 0.553 0.120 0.005 0.000 0.000 0.269
##  ---
## 246: 0.000 0.023 0.442 0.494 0.041 0.000 0.520
## 247: 0.613 0.356 0.031 0.001 0.000 0.000 0.440
## 248: 0.246 0.584 0.161 0.009 0.000 0.000 0.236
## 249: 0.000 0.000 0.014 0.207 0.597 0.182 0.505
## 250: 0.002 0.344 0.539 0.115 0.000 0.000 0.313</code></pre>
<pre class="r"><code># fit the model - explicitly exclude intercept since x1 is intercept
lmfit <- lm(y ~ x1 + x2 + x3 + x4 + x5 + x6 - 1, data = dxbasis)
cbind(tidy(lmfit)[,1:3], true = theta)</code></pre>
<pre><code>##   term   estimate  std.error true
## 1   x1 0.16465186 0.03619581  0.2
## 2   x2 0.57855125 0.03996219  0.6
## 3   x3 0.09093425 0.04267027  0.1
## 4   x4 0.94938718 0.04395370  0.9
## 5   x5 0.13579559 0.03805510  0.2
## 6   x6 0.85867619 0.03346704  0.8</code></pre>
<p>Using the parameter estimates (estimated here using OLS), we can get predicted values and plot them to see how well we did:</p>
<pre class="r"><code># get the predicted values so we can plot
dxbasis[, y.pred := predict(object = lmfit)]
dxbasis[, x := x]

# blue line represents predicted values
plot.spline(sdata, points = TRUE) +
  geom_line(data = dxbasis, aes(x = x, y = y.pred), color = "blue", size = 1)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-16-generating-non-linear-data-using-b-splines_files/figure-html/unnamed-chunk-12-1.png" width="672" /></p>
<p>The model did quite a good job, because we happened to assume the correct underlying structure of the spline. However, let’s say we suspected that the data were generated by a quadratic spline. We need to get the basis functions assuming the same cut-points for the knots, but now using a degree equal to two. Since reducing the degree by one reduces the number of basis functions by one, the linear model changes slightly. (Note that this model is not quite nested in the previous (cubic) model, because the values of the basis functions are different.)</p>
<pre class="r"><code>xdata <- genSpline(x, knots, 2, theta = rep(1, 5))

dxbasis <- as.data.table(xdata$basis)
setnames(dxbasis, paste0("x", names(dxbasis)))
dxbasis[, y := sdata$dt$y]

lmfit <- lm(y ~ x1 + x2 + x3 + x4 + x5 - 1, data = dxbasis)

dxbasis[, y.pred := predict(object = lmfit)]
dxbasis[, x := x]

plot.spline(sdata, points = TRUE) +
  geom_line(data = dxbasis, aes(x = x, y = y.pred),
            color = "forestgreen", size = 1)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-16-generating-non-linear-data-using-b-splines_files/figure-html/unnamed-chunk-13-1.png" width="672" /></p>
<p>If we compare the two models in terms of model fit, the cubic model only does slightly better in terms of <span class="math inline">\(R^2\)</span>: 0.96 vs. 0.94. In this case, it probably wouldn’t be so obvious which model to use.</p>
</div>
</div>
A minor update to simstudy provides an excuse to talk a bit about the negative binomial and Poisson distributions
https://www.rdatagen.net/post/a-small-update-to-simstudy-neg-bin/
Thu, 05 Oct 2017 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/a-small-update-to-simstudy-neg-bin/<p>I just updated <code>simstudy</code> to version 0.1.5 (available on <a href="https://cran.r-project.org/web/packages/simstudy/index.html">CRAN</a>) so that it now includes several new distributions - <em>exponential</em>, <em>discrete uniform</em>, and <em>negative binomial</em>.</p>
<p>As part of the release, I thought I’d explore the negative binomial just a bit, particularly as it relates to the Poisson distribution. The Poisson distribution is a discrete (integer) distribution of outcomes of non-negative values that is often used to describe count outcomes. It is characterized by a mean (or rate) and its variance equals its mean.</p>
<div id="added-variation" class="section level3">
<h3>Added variation</h3>
<p>In many situations, when count data are modeled, it turns out that the variance of the data exceeds the mean (a situation called <em>over-dispersion</em>). In this case an alternative model is used that allows for the greater variance, which is based on the negative binomial distribution. It turns out that if the negative binomial distribution has mean <span class="math inline">\(\mu\)</span>, it has a variance of <span class="math inline">\(\mu + \theta \mu^2\)</span>, where <span class="math inline">\(\theta\)</span> is called a <em>dispersion</em> parameter. If <span class="math inline">\(\theta = 0\)</span>, we have the Poisson distribution, but otherwise the variance of a negative binomial random variable will exceed the variance of a Poisson random variable as long as they share the same mean, because <span class="math inline">\(\mu > 0\)</span> and <span class="math inline">\(\theta \ge 0\)</span>.</p>
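<p>In base R terms, <code>rnbinom</code> uses a <code>size</code> parameter rather than a dispersion; under the mean-dispersion parameterization above, <code>size = 1/theta</code>, so the variance is <code>mu + mu^2/size</code>. A quick sketch checking the variance formula directly, before turning to <code>simstudy</code>:</p>

```r
# check that a negative binomial with mu = 15 and dispersion 0.2
# has variance close to 15 + 0.2 * 15^2 = 60 (size = 1/dispersion)
set.seed(1)
mu <- 15
disp <- 0.2

y <- rnbinom(1e5, size = 1 / disp, mu = mu)

c(mean = mean(y), var = var(y))  # roughly 15 and 60
```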
<p>We can see this by generating data from each distribution with mean 15, and a dispersion parameter of 0.2 for the negative binomial. We expect a variance around 15 for the Poisson distribution, and 60 for the negative binomial distribution.</p>
<pre class="r"><code>library(simstudy)
library(ggplot2)

# for a less cluttered look
theme_no_minor <- function(color = "grey90") {
  theme(panel.grid.minor = element_blank(),
        panel.background = element_rect(fill = "grey95"))
}

options(digits = 2)

# define data
defC <- defCondition(condition = "dist == 0", formula = 15,
                     dist = "poisson", link = "identity")
defC <- defCondition(defC, condition = "dist == 1", formula = 15,
                     variance = 0.2, dist = "negBinomial",
                     link = "identity")

# generate data
set.seed(50)
dt <- genData(500)
dt <- trtAssign(dt, 2, grpName = "dist")
dt <- addCondition(defC, dt, "y")
genFactor(dt, "dist", c("Poisson", "Negative binomial"))

# compare distributions
dt[, .(mean = mean(y), var = var(y)), keyby = fdist]</code></pre>
<pre><code>##                fdist mean var
## 1:           Poisson   15  15
## 2: Negative binomial   15  54</code></pre>
<pre class="r"><code>ggplot(data = dt, aes(x = y, group = fdist)) +
  geom_density(aes(fill = fdist), alpha = .4) +
  scale_fill_manual(values = c("#808000", "#000080")) +
  scale_x_continuous(limits = c(0, 60),
                     breaks = seq(0, 60, by = 20)) +
  theme_no_minor() +
  theme(legend.title = element_blank(),
        legend.position = c(0.80, 0.83))</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-05-a-small-update-to-simstudy-provides-an-excuse-to-compare-the-negative-binomial-and-poisson-distributions_files/figure-html/unnamed-chunk-1-1.png" width="672" /></p>
</div>
<div id="underestimating-standard-errors" class="section level3">
<h3>Underestimating standard errors</h3>
<p>In the context of a regression, misspecifying a model as Poisson rather than negative binomial can lead to an underestimation of standard errors, even though the point estimates may be quite reasonable (or may not). The Poisson model forces the variance estimate to equal the mean at any particular point on the regression curve, effectively ignoring the true extent of the variation. This can lead to problems of interpretation: we might conclude that there is an association when in fact there is none.</p>
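<p>One common middle ground worth noting (a base-R sketch, separate from the simulation below, with illustrative parameter values) is the quasi-Poisson model, which keeps the Poisson point estimates but scales each standard error up by the square root of an estimated dispersion factor:</p>

```r
# over-dispersed counts: negative binomial with a log-linear mean
set.seed(3)
x <- rnorm(500)
mu <- exp(0.9 + 0.6 * x)
y <- rnbinom(500, size = 1 / 0.3, mu = mu)

fit_pois  <- glm(y ~ x, family = poisson)
fit_quasi <- glm(y ~ x, family = quasipoisson)

# quasi-Poisson inflates each Poisson SE by sqrt(estimated dispersion)
disp <- summary(fit_quasi)$dispersion
se_ratio <- summary(fit_quasi)$coef[, "Std. Error"] /
            summary(fit_pois)$coef[, "Std. Error"]

c(dispersion = disp, ratio = unname(se_ratio["x"]))
```

<p>A dispersion estimate well above 1 is a signal that the plain Poisson standard errors are too small.</p>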
<p>In this simple simulation, we generate two predictors (<span class="math inline">\(x\)</span> and <span class="math inline">\(b\)</span>) and an outcome (<span class="math inline">\(y\)</span>). The outcome is a function of <span class="math inline">\(x\)</span> only:</p>
<pre class="r"><code>library(broom)
library(MASS)

# Generating data from negative binomial dist
def <- defData(varname = "x", formula = 0, variance = 1,
               dist = "normal")
def <- defData(def, varname = "b", formula = 0, variance = 1,
               dist = "normal")
def <- defData(def, varname = "y", formula = "0.9 + 0.6*x",
               variance = 0.3, dist = "negBinomial", link = "log")

set.seed(35)
dt <- genData(500, def)

ggplot(data = dt, aes(x = x, y = y)) +
  geom_jitter(width = .1) +
  ggtitle("Outcome as function of 1st predictor") +
  theme_no_minor()</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-05-a-small-update-to-simstudy-provides-an-excuse-to-compare-the-negative-binomial-and-poisson-distributions_files/figure-html/unnamed-chunk-2-1.png" width="672" /></p>
<pre class="r"><code>ggplot(data = dt, aes(x = b, y = y)) +
  geom_jitter(width = 0) +
  ggtitle("Outcome as function of 2nd predictor") +
  theme_no_minor()</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-05-a-small-update-to-simstudy-provides-an-excuse-to-compare-the-negative-binomial-and-poisson-distributions_files/figure-html/unnamed-chunk-2-2.png" width="672" /></p>
<p>I fit two models using both predictors. The first assumes (incorrectly) a Poisson distribution, and the second assumes (correctly) a negative binomial distribution. We can see that although the point estimates are quite close, the standard error estimates for the predictors in the negative binomial model are considerably greater (about 50% higher) than those in the Poisson model - that is, the Poisson model understates the uncertainty. And if we were basing any conclusion on the p-value (which is not always the obvious way to do <a href="http://www.stat.columbia.edu/~gelman/research/unpublished/abandon.pdf">things</a>), we might make the wrong call, since the p-value for the slope of <span class="math inline">\(b\)</span> is estimated to be 0.029 under the Poisson model. Under the correct model, the p-value is 0.29.</p>
<pre class="r"><code>glmfit <- glm(y ~ x + b, data = dt, family = poisson (link = "log") )
tidy(glmfit)</code></pre>
<pre><code>##          term estimate std.error statistic  p.value
## 1 (Intercept)    0.956     0.030      32.3 1.1e-228
## 2           x    0.516     0.024      21.9 1.9e-106
## 3           b   -0.052     0.024      -2.2  2.9e-02</code></pre>
<pre class="r"><code>nbfit <- glm.nb(y ~ x + b, data = dt)
tidy(nbfit)</code></pre>
<pre><code>##          term estimate std.error statistic  p.value
## 1 (Intercept)    0.954     0.039      24.2 1.1e-129
## 2           x    0.519     0.037      14.2  7.9e-46
## 3           b   -0.037     0.036      -1.1  2.9e-01</code></pre>
<p>A plot of the fitted regression curve and confidence bands for <span class="math inline">\(b\)</span> estimated by each model reinforces the difference. The lighter shaded region is the wider confidence band of the negative binomial model, and the darker shaded region is based on the Poisson model.</p>
<pre class="r"><code>newb <- data.table(b = seq(-3, 3, length = 100), x = 0)

poispred <- predict(glmfit, newdata = newb, se.fit = TRUE,
                    type = "response")
nbpred <- predict(nbfit, newdata = newb, se.fit = TRUE,
                  type = "response")

poisdf <- data.table(b = newb$b, y = poispred$fit,
                     lwr = poispred$fit - 1.96*poispred$se.fit,
                     upr = poispred$fit + 1.96*poispred$se.fit)
nbdf <- data.table(b = newb$b, y = nbpred$fit,
                   lwr = nbpred$fit - 1.96*nbpred$se.fit,
                   upr = nbpred$fit + 1.96*nbpred$se.fit)

ggplot(data = poisdf, aes(x = b, y = y)) +
  geom_line() +
  geom_ribbon(data = nbdf, aes(ymin = lwr, ymax = upr), alpha = .3,
              fill = "red") +
  geom_ribbon(aes(ymin = lwr, ymax = upr), alpha = .5,
              fill = "red") +
  theme_no_minor()</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-05-a-small-update-to-simstudy-provides-an-excuse-to-compare-the-negative-binomial-and-poisson-distributions_files/figure-html/unnamed-chunk-4-1.png" width="672" /></p>
<p>And finally, if we take 500 samples of size 500, estimate the slope of <span class="math inline">\(b\)</span> each time, and calculate the standard deviation of those estimates, it is quite close to the standard error estimate we saw under the negative binomial assumption in the model of the original simulated data set (0.036). And the mean of those estimates is quite close to zero, the true value.</p>
<pre class="r"><code>result <- data.table()

for (i in 1:500) {

  dt <- genData(500, def)

  glmfit <- glm(y ~ x + b, data = dt, family = poisson)
  nbfit <- glm.nb(y ~ x + b, data = dt)

  result <- rbind(result, data.table(bPois = coef(glmfit)["b"],
                                     bNB = coef(nbfit)["b"]))
}

result[, .(sd(bPois), sd(bNB))] # observed standard error</code></pre>
<pre><code>##       V1    V2
## 1: 0.037 0.036</code></pre>
<pre class="r"><code>result[,.(mean(bPois), mean(bNB))] # observed mean</code></pre>
<pre><code>##        V1     V2
## 1: 0.0025 0.0033</code></pre>
</div>
<div id="negative-binomial-as-mixture-of-poissons" class="section level3">
<h3>Negative binomial as mixture of Poissons</h3>
<p>An interesting relationship between the two distributions is that a negative binomial distribution can be generated from a mixture of individuals whose outcomes come from a Poisson distribution, but each individual has her own rate or mean. Furthermore, those rates must have a specific distribution - a Gamma. (For much more on this, you can take a look <a href="https://probabilityandstats.wordpress.com/tag/poisson-gamma-mixture/">here</a>.) Here is a little simulation:</p>
<pre class="r"><code>mu = 15
disp = 0.2

# Gamma distributed means
def <- defData(varname = "gmu", formula = mu, variance = disp,
               dist = "gamma")

# generate data from each distribution
defC <- defCondition(condition = "nb == 0", formula = "gmu",
                     dist = "poisson")
defC <- defCondition(defC, condition = "nb == 1", formula = mu,
                     variance = disp, dist = "negBinomial")

dt <- genData(5000, def)
dt <- trtAssign(dt, 2, grpName = "nb")
genFactor(dt, "nb", labels = c("Poisson-Gamma", "Negative binomial"))
dt <- addCondition(defC, dt, "y")

# means and variances should be very close
dt[, .(Mean = mean(y), Var = var(y)), keyby = fnb]</code></pre>
<pre><code>##                 fnb Mean Var
## 1:     Poisson-Gamma   15  62
## 2: Negative binomial   15  57</code></pre>
<pre class="r"><code># plot
ggplot(data = dt, aes(x = y, group = fnb)) +
  geom_density(aes(fill = fnb), alpha = .4) +
  scale_fill_manual(values = c("#808000", "#000080")) +
  scale_x_continuous(limits = c(0, 60),
                     breaks = seq(0, 60, by = 20)) +
  theme_no_minor() +
  theme(legend.title = element_blank(),
        legend.position = c(0.80, 0.83))</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-10-05-a-small-update-to-simstudy-provides-an-excuse-to-compare-the-negative-binomial-and-poisson-distributions_files/figure-html/unnamed-chunk-6-1.png" width="672" /></p>
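<p>The same mechanism can be confirmed in a couple of lines of base R (a sketch; the gamma parameterization below, shape = 1/dispersion and rate = 1/(dispersion × mean), is chosen so the individual rates have mean 15 and variance 0.2 × 15²):</p>

```r
# base-R version of the Poisson-Gamma mixture: gamma-distributed
# individual rates, then one Poisson draw per rate
set.seed(7)
mu <- 15
disp <- 0.2

rates <- rgamma(1e5, shape = 1 / disp, rate = 1 / (disp * mu))
y <- rpois(1e5, rates)

c(mean = mean(y), var = var(y))  # roughly 15 and 15 + 0.2 * 15^2 = 60
```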
</div>
CACE closed: EM opens up exclusion restriction (among other things)
https://www.rdatagen.net/post/em-estimation-of-cace/
Thu, 28 Sep 2017 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/em-estimation-of-cace/<p>This is the third, and probably last, of a series of posts touching on the estimation of <a href="https://www.rdatagen.net/post/cace-explored/">complier average causal effects</a> (CACE) and <a href="https://www.rdatagen.net/post/simstudy-update-provides-an-excuse-to-talk-a-little-bit-about-the-em-algorithm-and-latent-class/">latent variable modeling techniques</a> using an expectation-maximization (EM) algorithm. What follows is a simplistic way to implement an EM algorithm in <code>R</code> to do principal strata estimation of CACE.</p>
<div id="the-em-algorithm" class="section level3">
<h3>The EM algorithm</h3>
<p>In this approach, we assume that individuals fall into one of three possible groups - <em>never-takers</em>, <em>always-takers</em>, and <em>compliers</em> - but we cannot see who is who (except in a couple of cases). For each group, we are interested in estimating the unobserved potential outcomes <span class="math inline">\(Y_0\)</span> and <span class="math inline">\(Y_1\)</span> using observed outcome measures of <span class="math inline">\(Y\)</span>. The EM algorithm does this in two steps. The <em>E-step</em> estimates the missing class membership for each individual, and the <em>M-step</em> provides maximum likelihood estimates of the group-specific potential outcomes and variation.</p>
<p>An estimate of group membership was presented in this <a href="https://projecteuclid.org/euclid.aos/1034276631">Imbens & Rubin 1997 paper</a>. The probability that an individual is a member of a particular group is a function of how close the individual’s observed outcome is to the mean of the group and the overall probability of group membership:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-em-cace/table.png" />
</div>
<p>where <span class="math inline">\(Z\)</span> is treatment assignment and <span class="math inline">\(M\)</span> is treatment received. In addition, <span class="math inline">\(g_{c0}^i = \phi\left( \frac{Y_{obs,i} - \mu_{c0}}{\sigma_{c0}} \right)/\sigma_{c0}\)</span>, where <span class="math inline">\(\phi(.)\)</span> is the standard normal density. (And the same goes for the other <span class="math inline">\(g^i\)</span>’s.) <span class="math inline">\(\pi_a\)</span>, <span class="math inline">\(\pi_n\)</span>, and <span class="math inline">\(\pi_c\)</span> are estimated in the prior stage (or with starting values). <span class="math inline">\(\mu_{c0}\)</span>, <span class="math inline">\(\mu_{c1}\)</span>, <span class="math inline">\(\sigma_{c0}\)</span>, <span class="math inline">\(\sigma_{c1}\)</span>, etc. are also estimated in the prior <em>M-step</em> or with starting values in the case of the first <em>E-step</em>. Note that because we <em>are</em> assuming monotonicity (no <em>deniers</em> - which is not a necessary assumption for the EM approach, but used here to simplify things a bit), the probability of group membership is 1 for those randomized to control but who receive treatment (<em>always-takers</em>) and for those randomized to intervention but refuse (<em>never-takers</em>).</p>
</div>
<div id="em-steps" class="section level3">
<h3>EM steps</h3>
<p>I’ve created a separate function for each step in the algorithm. The <em>E-step</em> follows the Imbens & Rubin specification just described. The <em>M-step</em> just calculates the weighted averages and variances of the outcomes within each <span class="math inline">\(Z\)</span>/<span class="math inline">\(M\)</span> pair, with the weights coming from the probabilities estimated in the <em>E-step</em>. (These are, in fact, maximum likelihood estimates of the means and variances.) There are a pair of functions to estimate the log likelihood after each iteration. We stop iterating once the log likelihood has reached a stable state. And finally, there is a function to initialize the 15 parameters.</p>
<p>One thing to highlight here is that a strong motivation for using the EM algorithm is that we do <em>not</em> need to assume the exclusion restriction. That is, it is possible that randomizing someone to the intervention may have an effect on the outcome even if there is no effect on whether or not the intervention is used. Or in other words, we are saying it is possible that randomization has an effect on <em>always-takers</em> and <em>never-takers</em>, an assumption we <em>cannot</em> make using an instrumental variable (IV) approach. I mention that here, because the <em>M-step</em> function as written here explicitly drops the exclusion restriction assumption. However, I will first illustrate the model estimates in a case where data are indeed based on that assumption; while my point is to show that the EM estimates are unbiased as are the IV estimates in this scenario, I may actually be introducing a small amount of bias into the EM estimate by not re-writing the function to create a single mean for <em>always-takers</em> and <em>never-takers</em>. But, for brevity’s sake, this seems adequate.</p>
<pre class="r"><code>estep <- function(params, y, z, m) {

  piC <- 0
  piN <- 0
  piA <- 0

  if (z == 0 & m == 0) {
    gC0 <- dnorm((y - params$mC0)/params$sC0) / params$sC0
    gN0 <- dnorm((y - params$mN0)/params$sN0) / params$sN0

    piC <- params$pC * gC0 / (params$pC * gC0 + params$pN * gN0)
    piN <- 1 - piC
  }

  if (z == 0 & m == 1) {
    piA <- 1
  }

  if (z == 1 & m == 0) {
    piN <- 1
  }

  if (z == 1 & m == 1) {
    gC1 <- dnorm((y - params$mC1)/params$sC1) / params$sC1
    gA1 <- dnorm((y - params$mA1)/params$sA1) / params$sA1

    piC <- params$pC * gC1 / (params$pC * gC1 + params$pA * gA1)
    piA <- 1 - piC
  }

  return(list(piC = piC, piN = piN, piA = piA))
}

library(Weighted.Desc.Stat)

mstep <- function(params, dx) {

  params$mN0 <- dx[z == 0 & m == 0, w.mean(y, piN)]       # never-taker
  params$sN0 <- dx[z == 0 & m == 0, sqrt(w.var(y, piN))]  # never-taker
  params$mN1 <- dx[z == 1 & m == 0, w.mean(y, piN)]       # never-taker
  params$sN1 <- dx[z == 1 & m == 0, sqrt(w.var(y, piN))]  # never-taker

  params$mA0 <- dx[z == 0 & m == 1, w.mean(y, piA)]       # always-taker
  params$sA0 <- dx[z == 0 & m == 1, sqrt(w.var(y, piA))]  # always-taker
  params$mA1 <- dx[z == 1 & m == 1, w.mean(y, piA)]       # always-taker
  params$sA1 <- dx[z == 1 & m == 1, sqrt(w.var(y, piA))]  # always-taker

  params$mC0 <- dx[z == 0 & m == 0, w.mean(y, piC)]       # complier, z=0
  params$sC0 <- dx[z == 0 & m == 0, sqrt(w.var(y, piC))]  # complier, z=0
  params$mC1 <- dx[z == 1 & m == 1, w.mean(y, piC)]       # complier, z=1
  params$sC1 <- dx[z == 1 & m == 1, sqrt(w.var(y, piC))]  # complier, z=1

  nC <- dx[, sum(piC)]
  nN <- dx[, sum(piN)]
  nA <- dx[, sum(piA)]

  params$pC <- (nC / sum(nC, nN, nA))
  params$pN <- (nN / sum(nC, nN, nA))
  params$pA <- (nA / sum(nC, nN, nA))

  return(params)
}

like.i <- function(params, y, z, m) {

  if (z == 0 & m == 0) {
    l <- params$pC * dnorm(x = y, mean = params$mC0, sd = params$sC0) +
         params$pN * dnorm(x = y, mean = params$mN0, sd = params$sN0)
  }

  if (z == 0 & m == 1) {
    l <- params$pA * dnorm(x = y, mean = params$mA0, sd = params$sA0)
  }

  if (z == 1 & m == 0) {
    l <- params$pN * dnorm(x = y, mean = params$mN1, sd = params$sN1)
  }

  if (z == 1 & m == 1) {
    l <- params$pC * dnorm(x = y, mean = params$mC1, sd = params$sC1) +
         params$pA * dnorm(x = y, mean = params$mA1, sd = params$sA1)
  }

  return(l)
}

loglike <- function(dt, params) {
  dl <- dt[, .(l.i = like.i(params, y, z, m)), keyby = id]
  return(dl[, sum(log(l.i))])
}

initparams <- function() {

  params = list(pC = 1/3, pN = 1/3, pA = 1/3,
                mC0 = rnorm(1, 0, .1), sC0 = 0.2,
                mC1 = rnorm(1, 0, .1), sC1 = 0.2,
                mN0 = rnorm(1, 0, .1), sN0 = 0.2,
                mN1 = rnorm(1, 0, .1), sN1 = 0.2,
                mA0 = rnorm(1, 0, .1), sA0 = 0.2,
                mA1 = rnorm(1, 0, .1), sA1 = 0.2)

  return(params)
}</code></pre>
</div>
<div id="data-defintions" class="section level3">
<h3>Data definitions</h3>
<p>This next set of statements defines the data that will be generated. I define the distribution of group assignment as well as the potential outcomes for the intervention and the outcome <span class="math inline">\(Y\)</span>. We also define how the observed data will be generated, which is a function of treatment randomization …</p>
<pre class="r"><code>library(simstudy)

### Define data distributions

# Status:
#   1 = A(lways taker)
#   2 = N(ever taker)
#   3 = C(omplier)

def <- defDataAdd(varname = "Status",
                  formula = "0.25; 0.40; 0.35", dist = "categorical")

# potential outcomes (PO) for intervention depend on group status

def <- defDataAdd(def, varname = "M0",
                  formula = "(Status == 1) * 1", dist = "nonrandom")
def <- defDataAdd(def, varname = "M1",
                  formula = "(Status != 2) * 1", dist = "nonrandom")

# observed intervention status based on randomization and PO

def <- defDataAdd(def, varname = "m",
                  formula = "(z==0) * M0 + (z==1) * M1",
                  dist = "nonrandom")

# potential outcome for Y (depends on group status - A, N, or C)
# under assumption of exclusion restriction

defY0 <- defCondition(condition = "Status == 1",
                      formula = 0.3, variance = .25, dist = "normal")
defY0 <- defCondition(defY0, condition = "Status == 2",
                      formula = 0.0, variance = .36, dist = "normal")
defY0 <- defCondition(defY0, condition = "Status == 3",
                      formula = 0.1, variance = .16, dist = "normal")

defY1 <- defCondition(condition = "Status == 1",
                      formula = 0.3, variance = .25, dist = "normal")
defY1 <- defCondition(defY1, condition = "Status == 2",
                      formula = 0.0, variance = .36, dist = "normal")
defY1 <- defCondition(defY1, condition = "Status == 3",
                      formula = 0.9, variance = .49, dist = "normal")

# observed outcome is a function of actual treatment received

defy <- defDataAdd(varname = "y",
                   formula = "(z == 0) * Y0 + (z == 1) * Y1",
                   dist = "nonrandom")</code></pre>
</div>
<div id="data-generation" class="section level3">
<h3>Data generation</h3>
<p>I am generating multiple data sets and estimating the causal effects for each using the EM and IV approaches. This gives a better picture of the bias and variation under the two different scenarios (exclusion restriction & no exclusion restriction) and the different methods (EM & IV). To simplify the code a bit, I’ve written a function to consolidate the data generating process:</p>
<pre class="r"><code>createDT <- function(n, def, defY0, defY1, defy) {

  dt <- genData(n)
  dt <- trtAssign(dt, n = 2, grpName = "z")
  dt <- addColumns(def, dt)

  genFactor(dt, "Status",
            labels = c("Always-taker", "Never-taker", "Complier"),
            prefix = "A")

  dt <- addCondition(defY0, dt, "Y0")
  dt <- addCondition(defY1, dt, "Y1")
  dt <- addColumns(defy, dt)
}

set.seed(16)
dt <- createDT(2500, def, defY0, defY1, defy)

options(digits = 3)
dt</code></pre>
<pre><code>## id Y1 Y0 z Status M0 M1 m AStatus y
## 1: 1 0.12143 -0.400007 0 2 0 0 0 Never-taker -0.4000
## 2: 2 0.13114 0.713202 1 1 1 1 1 Always-taker 0.1311
## 3: 3 0.73766 -0.212530 1 3 0 1 1 Complier 0.7377
## 4: 4 -0.07531 0.209330 1 1 1 1 1 Always-taker -0.0753
## 5: 5 -0.25214 -0.696207 0 2 0 0 0 Never-taker -0.6962
## ---
## 2496: 2496 -0.00882 0.206581 0 2 0 0 0 Never-taker 0.2066
## 2497: 2497 0.39226 0.749465 1 2 0 0 0 Never-taker 0.3923
## 2498: 2498 -0.81486 0.000605 1 2 0 0 0 Never-taker -0.8149
## 2499: 2499 0.10359 -0.417344 0 2 0 0 0 Never-taker -0.4173
## 2500: 2500 -0.68397 0.304398 1 2 0 0 0 Never-taker -0.6840</code></pre>
</div>
<div id="cace-estimation" class="section level3">
<h3>CACE estimation</h3>
<p>Finally, we are ready to put all of this together and estimate the CACE using the EM algorithm. After initializing the parameters (here we just use random values except for the probabilities of group membership, which we assume to be 1/3 to start), we loop through the E and M steps, checking the change in log likelihood each time. For this single data set, we provide a point estimate of the CACE using EM and IV. (We could provide an estimate of standard error using a bootstrap approach.) We see that both do a reasonable job, getting fairly close to the truth.</p>
<pre class="r"><code>params <- initparams()

prev.loglike <- -Inf
continue <- TRUE

while (continue) {

  dtPIs <- dt[, estep(params, y, z, m), keyby = id]
  dx <- dt[dtPIs]

  params <- mstep(params, dx)
  EM.CACE <- params$mC1 - params$mC0

  current.loglike <- loglike(dt, params)
  diff <- current.loglike - prev.loglike
  prev.loglike <- current.loglike

  if (diff < 1.00e-07) continue = FALSE
}

library(ivpack)
ivmodel <- ivreg(formula = y ~ m | z, data = dt, x = TRUE)

data.table(truthC = dt[AStatus == "Complier", mean(Y1 - Y0)],
           IV.CACE = coef(ivmodel)[2],
           EM.CACE)</code></pre>
<pre><code>## truthC IV.CACE EM.CACE
## 1: 0.806 0.823 0.861</code></pre>
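<p>The bootstrap approach mentioned above would resample individuals with replacement, re-run the full estimation on each resampled data set, and take the standard deviation of the resulting estimates. Here is a minimal, self-contained sketch of that idea in base <code>R</code>; the toy data and the simple difference-in-means estimator <code>est</code> are stand-ins of my own (the real version would wrap the entire E-step/M-step loop):</p>

```r
set.seed(16)

# toy data standing in for the trial data: outcome y, assignment z
n <- 500
z <- rbinom(n, 1, 0.5)
y <- 0.5 * z + rnorm(n)

# the estimator we want a standard error for; a difference in
# means stands in here for the (much more involved) EM-based CACE
est <- function(y, z) mean(y[z == 1]) - mean(y[z == 0])

# nonparametric bootstrap: resample individuals with replacement,
# re-estimate, and report the SD of the bootstrap estimates
bootSE <- function(y, z, B = 500) {
  bstats <- replicate(B, {
    idx <- sample(length(y), replace = TRUE)
    est(y[idx], z[idx])
  })
  sd(bstats)
}

bootSE(y, z)
```

<p>Substituting the EM loop for <code>est</code> makes this computationally expensive, but the structure of the bootstrap is identical.</p>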
</div>
<div id="more-general-performance" class="section level3">
<h3>More general performance</h3>
<p>I am not providing the code here (it is just a slight modification of what has come before), but I want to show the results of generating 1000 data sets of 500 observations each. The first plot assumes all data sets were generated using an exclusion restriction - just as we did with the single data set. The IV approach, as expected, is unbiased (estimated bias 0.01), while the EM approach is slightly biased (-0.13). We can also see that the EM approach (standard deviation 0.30) has more variation than IV (standard deviation 0.15), while the actual sample CACE (calculated based on the actual group membership and potential outcomes) had a standard deviation of 0.05, which we can see from the narrow vertical band in the plot:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-em-cace/Exclusion_restriction.png" />
</div>
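<p>The outer loop of a replication study like this has a simple shape: repeatedly generate a data set, apply the estimator, and then summarize bias and spread across replications. Here is a stripped-down skeleton in base <code>R</code> - the data generating process and difference-in-means estimator are simplified stand-ins of my own, not the post's EM and IV estimators:</p>

```r
set.seed(2017)

# skeleton of a simulation study: generate R data sets, estimate
# the effect in each, then summarize bias and variation; the DGP
# and estimator here are simplified stand-ins for illustration
truth <- 0.8
R <- 1000

ests <- replicate(R, {
  n <- 500
  z <- rbinom(n, 1, 0.5)
  y <- truth * z + rnorm(n)
  mean(y[z == 1]) - mean(y[z == 0])
})

round(c(bias = mean(ests) - truth, sd.est = sd(ests)), 3)
```

<p>In the actual study, the body of <code>replicate</code> would call <code>createDT</code> and then both the EM loop and <code>ivreg</code>, returning one estimate per method.</p>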
<p>In the second set of simulations, I change the potential outcomes definition so that the exclusion restriction is no longer relevant.</p>
<pre class="r"><code>defY0 <- defCondition(condition = "Status == 1",
                      formula = 0.3, variance = .20, dist = "normal")
defY0 <- defCondition(defY0, condition = "Status == 2",
                      formula = 0.0, variance = .36, dist = "normal")
defY0 <- defCondition(defY0, condition = "Status == 3",
                      formula = 0.1, variance = .16, dist = "normal")

defY1 <- defCondition(condition = "Status == 1",
                      formula = 0.7, variance = .25, dist = "normal")
defY1 <- defCondition(defY1, condition = "Status == 2",
                      formula = 0.2, variance = .40, dist = "normal")
defY1 <- defCondition(defY1, condition = "Status == 3",
                      formula = 0.9, variance = .49, dist = "normal")</code></pre>
<p>In this second case, the IV estimate is biased (0.53), while the EM estimate does quite well (-.03). (I suspect EM did worse in the first example above because the estimates were made without assuming the exclusion restriction, even though it actually held.) However, EM estimates still have more variation than IV: standard deviation 0.26 vs. 0.17, consistent with the estimates under the exclusion restriction assumption. This variation arises from the fact that we don’t know what the true group membership is, and we need to estimate it. Here is what the estimates look like:</p>
<div class="figure">
<img src="https://www.rdatagen.net/img/post-em-cace/No_exclusion_restriction.png" />
</div>
</div>
<div id="can-we-expand-on-this" class="section level3">
<h3>Can we expand on this?</h3>
<p>The whole point of this was to illustrate that there might be a way around some rather restrictive assumptions, which in some cases might not seem so reasonable. EM methods provide an alternative way to approach things - more of which you can see in the <a href="https://courseplus.jhu.edu/core/index.cfm/go/course.home/coid/8155/">free online course</a> that inspired these last few posts. Unfortunately, there is no obvious way to tackle these problems in <code>R</code> using existing packages, and I am not suggesting that what I have done here is the best way to go about it. The course suggests using <code>Mplus</code>. While that is certainly a great software package, maybe it would be worthwhile to build an R package to implement these methods more completely in R? Or maybe someone has already done this, and I just haven’t come across it yet?</p>
</div>
A simstudy update provides an excuse to talk a little bit about latent class regression and the EM algorithm
https://www.rdatagen.net/post/simstudy-update-provides-an-excuse-to-talk-a-little-bit-about-the-em-algorithm-and-latent-class/
Wed, 20 Sep 2017 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/simstudy-update-provides-an-excuse-to-talk-a-little-bit-about-the-em-algorithm-and-latent-class/<p>I was just going to make a quick announcement to let folks know that I’ve updated the <code>simstudy</code> package to version 0.1.4 (now available on CRAN) to include functions that allow conversion of columns to factors, creation of dummy variables, and most importantly, specification of outcomes that are more flexibly conditional on previously defined variables. But, as I was coming up with an example that might illustrate the added conditional functionality, I found myself playing with package <code>flexmix</code>, which uses an Expectation-Maximization (EM) algorithm to estimate latent classes and fit regression models. So, in the end, this turned into a bit more than a brief service announcement.</p>
<div id="defining-data-conditionally" class="section level3">
<h3>Defining data conditionally</h3>
<p>Of course, simstudy has always enabled conditional distributions based on sequentially defined variables. That is really the whole point of simstudy. But, what if I wanted to specify completely different families of distributions or very different regression curves based on different individual characteristics? With the previous version of simstudy, it was not really easy to do. Now, with the addition of two key functions, <code>defCondition</code> and <code>addCondition</code> the process is much improved. <code>defCondition</code> is analogous to the function <code>defData</code>, in that this new function provides an easy way to specify conditional definitions (as does <code>defReadCond</code>, which is analogous to <code>defRead</code>). <code>addCondition</code> is used to actually add the data column, just as <code>addColumns</code> adds columns.</p>
<p>It is probably easiest to see in action:</p>
<pre class="r"><code>library(simstudy)

# Define baseline data set

def <- defData(varname = "x", dist = "normal", formula = 0, variance = 9)
def <- defData(def, varname = "group", formula = "0.2;0.5;0.3",
               dist = "categorical")

# Generate data

set.seed(111)
dt <- genData(1000, def)

# Convert group to factor - new function

dt <- genFactor(dt, "group", replace = TRUE)

dt</code></pre>
<p><code>defCondition</code> is the same as <code>defData</code>, except that instead of specifying a variable name, we need to specify a condition that is based on a pre-defined field:</p>
<pre class="r"><code>defC <- defCondition(condition = "fgroup == 1", formula = "5 + 2*x",
                     variance = 4, dist = "normal")
defC <- defCondition(defC, condition = "fgroup == 2", formula = 4,
                     variance = 3, dist = "normal")
defC <- defCondition(defC, condition = "fgroup == 3", formula = "3 - 2*x",
                     variance = 2, dist = "normal")

defC</code></pre>
<pre><code>## condition formula variance dist link
## 1: fgroup == 1 5 + 2*x 4 normal identity
## 2: fgroup == 2 4 3 normal identity
## 3: fgroup == 3 3 - 2*x 2 normal identity</code></pre>
<p>A subsequent call to <code>addCondition</code> generates a data table with the new variable, in this case <span class="math inline">\(y\)</span>:</p>
<pre class="r"><code>dt <- addCondition(defC, dt, "y")
dt</code></pre>
<pre><code>## id y x fgroup
## 1: 1 5.3036869 0.7056621 2
## 2: 2 2.1521853 -0.9922076 2
## 3: 3 4.7422359 -0.9348715 3
## 4: 4 16.1814232 -6.9070370 3
## 5: 5 4.3958893 -0.5126281 3
## ---
## 996: 996 -0.8115245 -2.7092396 1
## 997: 997 1.9946074 0.7126094 2
## 998: 998 11.8384871 2.3895135 1
## 999: 999 3.3569664 0.8123200 1
## 1000: 1000 3.4662074 -0.4653198 3</code></pre>
<p>In this example, I’ve partitioned the data into three subsets, each of which has a very different linear relationship between variables <span class="math inline">\(x\)</span> and <span class="math inline">\(y\)</span>, and different variation. In this particular case, all relationships are linear with normally distributed noise, but this is absolutely not required.</p>
<p>Here is what the data look like:</p>
<pre class="r"><code>library(ggplot2)

mycolors <- c("#555bd4", "#d4555b", "#d4ce55")

ggplot(data = dt, aes(x = x, y = y, group = fgroup)) +
  geom_point(aes(color = fgroup), size = 1, alpha = .4) +
  geom_smooth(aes(color = fgroup), se = FALSE, method = "lm") +
  scale_color_manual(name = "Cluster", values = mycolors) +
  scale_x_continuous(limits = c(-10, 10), breaks = c(-10, -5, 0, 5, 10)) +
  theme(panel.grid = element_blank(),
        panel.background = element_rect(fill = "grey96", color = "grey80"))</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-09-20-simstudy-update-provides-an-excuse-to-talk-a-little-bit-about-the-em-algorithm-and-latent-class_files/figure-html/unnamed-chunk-4-1.png" width="576" /></p>
</div>
<div id="latent-class-regression-models" class="section level3">
<h3>Latent class regression models</h3>
<p>Suppose we come across the same data set, but are not privy to the group classification, and we are still interested in the relationship between <span class="math inline">\(x\)</span> and <span class="math inline">\(y\)</span>. This is what the data set would look like - not as user-friendly:</p>
<pre class="r"><code>rawp <- ggplot(data = dt, aes(x = x, y = y, group = fgroup)) +
  geom_point(color = "grey75", size = .5) +
  scale_x_continuous(limits = c(-10, 10), breaks = c(-10, -5, 0, 5, 10)) +
  theme(panel.grid = element_blank(),
        panel.background = element_rect(fill = "grey96", color = "grey80"))

rawp</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-09-20-simstudy-update-provides-an-excuse-to-talk-a-little-bit-about-the-em-algorithm-and-latent-class_files/figure-html/unnamed-chunk-5-1.png" width="504" /></p>
<p>We might see from the plot, or we might have some subject-matter knowledge that suggests, there are several sub-clusters within the data, each of which appears to have a different relationship between <span class="math inline">\(x\)</span> and <span class="math inline">\(y\)</span>. (Obviously, we know this is the case, since we generated the data.) The question is, how can we estimate the regression lines if we don’t know the class membership? That is where the EM algorithm comes into play.</p>
</div>
<div id="the-em-algorithm-very-very-briefly" class="section level3">
<h3>The EM algorithm, very, very briefly</h3>
<p>The EM algorithm handles model parameter estimation in the context of incomplete or missing data. In the example I’ve been discussing here, the subgroups or cluster membership are the missing data. There is an extensive literature on EM methods (starting with <a href="http://www.jstor.org/stable/2984875">this article</a> by Dempster, Laird & Rubin), and I am barely even touching the surface, let alone scratching it.</p>
<p>The missing data (cluster memberships) are estimated in the <em>Expectation-</em> or <em>E-step</em>. These are replaced with their expected values as given by the posterior probabilities. The mixture model assumes that each observation comes from exactly one cluster, but this information has not been observed. The unknown model parameters (intercept, slope, and variance) for each of the clusters are estimated in the <em>Maximization-</em> or <em>M-step</em>, which in this case assumes the data come from a linear process with normally distributed noise - both the linear coefficients and the variation around the line are conditional on cluster membership. The process is iterative. The <em>E-step</em> is based on starting values for the model parameters at first, and thereafter on the most recent parameter estimates from the prior <em>M-step</em>. The <em>M-step</em> then maximizes the likelihood of all the data (including the ‘missing’ data estimated in the prior <em>E-step</em>). We iterate back and forth until the parameter estimates in the <em>M-step</em> reach a steady state, or the overall likelihood estimate becomes stable.</p>
<p>The strength or usefulness of the EM method is that the likelihood of the full data (both observed data - <span class="math inline">\(x\)</span>’s and <span class="math inline">\(y\)</span>’s - and unobserved data - cluster probabilities) is much easier to write down and estimate than the likelihood of the observed data only (<span class="math inline">\(x\)</span>’s and <span class="math inline">\(y\)</span>’s). Think of the first plot above with the structure given by the colors compared to the second plot in grey without the structure. The first seems so much more manageable than the second - if only we knew the underlying structure defined by the clusters. The EM algorithm builds the underlying structure so that the maximum likelihood estimation problem becomes much easier.</p>
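<p>To make the iteration concrete, here is a tiny, self-contained version of the algorithm in base <code>R</code>: a two-component normal mixture in one dimension. (This is my own toy example - a simpler cousin of the three-cluster regression mixture discussed here, not anything from <code>flexmix</code>.)</p>

```r
set.seed(1)

# toy data: a mixture of two normal distributions
y <- c(rnorm(150, 0, 1), rnorm(100, 4, 1))

# starting values
p <- 0.5         # probability of membership in component 2
mu <- c(-1, 1)   # component means
s <- c(1, 1)     # component standard deviations

ll.old <- -Inf

repeat {

  # E-step: posterior probability each point belongs to component 2
  f1 <- (1 - p) * dnorm(y, mu[1], s[1])
  f2 <- p * dnorm(y, mu[2], s[2])
  w <- f2 / (f1 + f2)

  # M-step: weighted ML estimates of means, SDs, and mixing probability
  mu <- c(weighted.mean(y, 1 - w), weighted.mean(y, w))
  s <- c(sqrt(weighted.mean((y - mu[1])^2, 1 - w)),
         sqrt(weighted.mean((y - mu[2])^2, w)))
  p <- mean(w)

  # stop once the log likelihood has stabilized
  ll <- sum(log(f1 + f2))
  if (ll - ll.old < 1e-8) break
  ll.old <- ll
}

round(c(p = p, mu1 = mu[1], mu2 = mu[2], s1 = s[1], s2 = s[2]), 2)
```

<p>The estimates settle close to the values used to generate the data; the regression mixture below follows exactly the same E/M rhythm, just with cluster-specific regression lines in place of cluster-specific means.</p>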
<p>Here is a little more detail on what the EM algorithm is estimating in our application. (See <a href="https://cran.r-project.org/web/packages/flexmix/vignettes/flexmix-intro.pdf">this</a> for much more detail.) First, we estimate the probability of membership in cluster <span class="math inline">\(j\)</span> for our linear regression model with three clusters:</p>
<p><span class="math display">\[P_i(j|x_i, y_i, \mathbf{\pi}, \mathbf{\alpha_0}, \mathbf{\alpha_1}, \mathbf{\sigma}) = p_{ij}= \frac{\pi_jf(y_i|x_i, \mathbf{\alpha_0}, \mathbf{\alpha_1}, \mathbf{\sigma})}{\sum_{k=1}^3 \pi_k f(y_i|x_i, \mathbf{\alpha_0}, \mathbf{\alpha_1}, \mathbf{\sigma})},\]</span> where <span class="math inline">\(\mathbf{\alpha_0}\)</span>, <span class="math inline">\(\mathbf{\alpha_1}\)</span>, and <span class="math inline">\(\mathbf{\sigma}\)</span> are the vectors of intercepts, slopes, and standard deviations for the three clusters. <span class="math inline">\(\pi\)</span> is the vector of probabilities that any individual is in the respective clusters, and each <span class="math inline">\(\pi_j\)</span> is estimated by averaging the <span class="math inline">\(p_{ij}\)</span>’s across all individuals. Finally, <span class="math inline">\(f(.|.)\)</span> is the density from the normal distribution <span class="math inline">\(N(\alpha_{j0} + \alpha_{j1}x, \sigma_j^2)\)</span>, with cluster-specific parameters.</p>
<p>Second, we maximize each of the three cluster-specific log-likelihoods, where each individual is weighted by its probability of cluster membership (which is <span class="math inline">\(P_i(j)\)</span>, estimated in the <em>E-step</em>). In particular, we are maximizing the cluster-specific likelihood with respect to the three unknown parameters <span class="math inline">\(\alpha_{j0}\)</span>, <span class="math inline">\(\alpha_{j1}\)</span>, and <span class="math inline">\(\sigma_j\)</span>:</p>
<p><span class="math display">\[\sum_{n=1}^N \hat{p}_{nj} \: \text{log} \left( f(y_n|x_n,\alpha_{j0},\alpha_{j1},\sigma_j) \right)\]</span> In <code>R</code>, the <code>flexmix</code> package has implemented an EM algorithm to estimate latent class regression models. The package documentation provides a really nice, accessible <a href="https://cran.r-project.org/web/packages/flexmix/vignettes/flexmix-intro.pdf">description</a> of the two-step procedure, with much more detail than I have provided here. I encourage you to check it out.</p>
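<p>The <em>E-step</em> posterior probabilities are easy to compute directly. Here is a quick illustration in base <code>R</code> for a single observation, using made-up parameter values (roughly in the neighborhood of those used to generate the data):</p>

```r
# current parameter estimates (made-up values for illustration)
pclust <- c(1/3, 1/3, 1/3)   # cluster membership probabilities pi_j
a0 <- c(5, 4, 3)             # intercepts alpha_j0
a1 <- c(2, 0, -2)            # slopes alpha_j1
sigma <- c(2, 1.7, 1.4)      # residual standard deviations sigma_j

# a single observation (x_i, y_i)
xi <- 1
yi <- 6.5

# f(y_i | x_i, cluster j) is a normal density with cluster-specific
# mean and SD; the posterior weights these densities by the pi_j's
f <- dnorm(yi, mean = a0 + a1 * xi, sd = sigma)
pij <- pclust * f / sum(pclust * f)

round(pij, 3)
```

<p>The three probabilities sum to one; here most of the weight goes to cluster 1, whose regression line at <code>x = 1</code> passes closest to <code>y = 6.5</code>.</p>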
</div>
<div id="iterating-slowly-through-the-em-algorithm" class="section level3">
<h3>Iterating slowly through the EM algorithm</h3>
<p>Here is a slow-motion version of the EM estimation process. I show the parameter estimates (visually) at the early stages of estimation, checking in after every three steps. In addition, I highlight two individuals and show the estimated probabilities of cluster membership. At the beginning, there is little differentiation between the regression lines for each cluster. However, by the 10th iteration the parameter estimates for the regression lines are looking pretty similar to the original plot.</p>
<pre class="r"><code>library(flexmix)

selectIDs <- c(508, 775)  # select two individuals

ps <- list()
count <- 0

p.ij <- data.table()  # keep track of estimated probs
pi.j <- data.table()  # keep track of average probs

for (i in seq(1, 10, by = 3)) {

  count <- count + 1
  set.seed(5)

  # fit model up to "i" iterations - either 1, 4, 7, or 10

  exMax <- flexmix(y ~ x, data = dt, k = 3,
                   control = list(iter.max = i))

  p.ij <- rbind(p.ij,
                data.table(i, selectIDs, posterior(exMax)[selectIDs, ]))
  pi.j <- rbind(pi.j,
                data.table(i, t(apply(posterior(exMax), 2, mean))))

  dp <- as.data.table(t(parameters(exMax)))
  setnames(dp, c("int", "slope", "sigma"))

  # flexmix rearranges columns/clusters

  dp[, grp := c(3, 1, 2)]
  setkey(dp, grp)

  # create plot for each iteration

  ps[[count]] <- rawp +
    geom_abline(data = dp, aes(intercept = int, slope = slope,
                               color = factor(grp)), size = 1) +
    geom_point(data = dt[id %in% selectIDs], color = "black") +
    scale_color_manual(values = mycolors) +
    ggtitle(paste("Iteration", i)) +
    theme(legend.position = "none",
          plot.title = element_text(size = 9))
}</code></pre>
<pre class="r"><code>library(gridExtra)
grid.arrange(ps[[1]], ps[[2]], ps[[3]], ps[[4]], nrow = 1)</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-09-20-simstudy-update-provides-an-excuse-to-talk-a-little-bit-about-the-em-algorithm-and-latent-class_files/figure-html/unnamed-chunk-7-1.png" width="864" style="display: block; margin: auto;" /></p>
<p>For the two individuals, we can see the probabilities converging to a level of certainty/uncertainty. The individual with ID #775 lies right on the regression line for cluster 3, far from the other lines, and the algorithm quickly assigns a probability of 100% to cluster 3 (its actual cluster). The cluster assignment is less certain for ID #508, which lies between the two regression lines for clusters 1 and 2.</p>
<pre class="r"><code># actual cluster membership
dt[id %in% selectIDs, .(id, fgroup)]</code></pre>
<pre><code>## id fgroup
## 1: 508 2
## 2: 775 3</code></pre>
<pre class="r"><code>setkey(p.ij, selectIDs, i)
p.ij[, .(selectIDs, i, C1 = round(V2, 2), C2 = round(V3,2), C3 = round(V1,2))]</code></pre>
<pre><code>## selectIDs i C1 C2 C3
## 1: 508 1 0.32 0.36 0.32
## 2: 508 4 0.29 0.44 0.27
## 3: 508 7 0.25 0.65 0.10
## 4: 508 10 0.24 0.76 0.00
## 5: 775 1 0.35 0.28 0.37
## 6: 775 4 0.33 0.14 0.53
## 7: 775 7 0.11 0.01 0.88
## 8: 775 10 0.00 0.00 1.00</code></pre>
<p>In addition, we can see how the estimate of overall group membership (for all individuals) changes through the iterations. The algorithm starts by assigning equal probability to each cluster (1/3) and slowly moves towards the actual distribution used to generate the data (20%, 50%, and 30%).</p>
<pre class="r"><code>pi.j[, .(i, C1 = round(V2, 2), C2 = round(V3,2), C3 = round(V1,2))]</code></pre>
<pre><code>## i C1 C2 C3
## 1: 1 0.33 0.34 0.33
## 2: 4 0.31 0.34 0.35
## 3: 7 0.25 0.39 0.36
## 4: 10 0.23 0.44 0.33</code></pre>
</div>
<div id="final-estimation-of-linear-models" class="section level3">
<h3>Final estimation of linear models</h3>
<p>The final estimation is shown below, and we can see that the parameters have largely converged to the values used to generate the data.</p>
<pre class="r"><code># Estimation until convergence

set.seed(5)
ex1 <- flexmix(y ~ x, data = dt, k = 3)

# parameter estimates

data.table(parameters(ex1))[, .(param = c("int", "slope", "sd"),
                                C1 = round(Comp.2, 2),
                                C2 = round(Comp.3, 2),
                                C3 = round(Comp.1, 2))]</code></pre>
<pre><code>## param C1 C2 C3
## 1: int 5.18 3.94 3.00
## 2: slope 1.97 -0.03 -1.99
## 3: sd 2.07 1.83 1.55</code></pre>
<pre class="r"><code># estimates of cluster probabilities
round(apply(posterior(ex1), 2, mean), 2)[c(2,3,1)]</code></pre>
<pre><code>## [1] 0.19 0.51 0.30</code></pre>
<pre class="r"><code># estimates of individual probabilities
data.table(posterior(ex1)[selectIDs, ])[, .(selectIDs,
                                            C1 = round(V2, 2),
                                            C2 = round(V3, 2),
                                            C3 = round(V1, 2))]</code></pre>
<pre><code>## selectIDs C1 C2 C3
## 1: 508 0.24 0.76 0
## 2: 775 0.00 0.00 1</code></pre>
</div>
<div id="how-do-we-know-the-relationship-is-linear" class="section level3">
<h3>How do we know the relationship is linear?</h3>
<p>In reality, there is no reason to assume that the relationship between <span class="math inline">\(x\)</span> and <span class="math inline">\(y\)</span> is simply linear. We might want to look at other possibilities, such as a quadratic relationship. So, we use flexmix to estimate an expanded model, and then we plot the fitted lines on the original data:</p>
<pre class="r"><code>ex2 <- flexmix(y ~ x + I(x^2), data = dt, k = 3)

dp <- as.data.table(t(parameters(ex2)))
setnames(dp, c("int", "slope", "slope2", "sigma"))
dp[, grp := c(1, 2, 3)]

x <- c(seq(-10, 10, by = .1))

dp1 <- data.table(grp = 1, x, dp[1, int + slope*x + slope2*(x^2)])
dp2 <- data.table(grp = 2, x, dp[2, int + slope*x + slope2*(x^2)])
dp3 <- data.table(grp = 3, x, dp[3, int + slope*x + slope2*(x^2)])

dp <- rbind(dp1, dp2, dp3)

rawp +
  geom_line(data = dp, aes(x = x, y = V3, group = grp, color = factor(grp)),
            size = 1) +
  scale_color_manual(values = mycolors) +
  theme(legend.position = "none")</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-09-20-simstudy-update-provides-an-excuse-to-talk-a-little-bit-about-the-em-algorithm-and-latent-class_files/figure-html/unnamed-chunk-11-1.png" width="576" /></p>
<p>And even though the parameter estimates appear to be reasonable, we would want to compare the simple linear model with the quadratic model, which we can do using something like the BIC. We see that the linear model is a better fit (lower BIC value) - not surprising, since this is how we generated the data.</p>
<pre class="r"><code>summary(refit(ex2))</code></pre>
<pre><code>## $Comp.1
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 1.440736 0.309576 4.6539 3.257e-06 ***
## x -0.405118 0.048808 -8.3003 < 2.2e-16 ***
## I(x^2) -0.246075 0.012162 -20.2337 < 2.2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## $Comp.2
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 6.955542 0.289914 23.9918 < 2.2e-16 ***
## x 0.305995 0.049584 6.1712 6.777e-10 ***
## I(x^2) 0.263160 0.014150 18.5983 < 2.2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## $Comp.3
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 3.9061090 0.1489738 26.2201 < 2e-16 ***
## x -0.0681887 0.0277366 -2.4584 0.01395 *
## I(x^2) 0.0113305 0.0060884 1.8610 0.06274 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1</code></pre>
<pre class="r"><code># Comparison of the two models
BIC(ex1)</code></pre>
<pre><code>## [1] 5187.862</code></pre>
<pre class="r"><code>BIC(ex2)</code></pre>
<pre><code>## [1] 5316.034</code></pre>
</div>
Complier average causal effect? Exploring what we learn from an RCT with participants who don't do what they are told
https://www.rdatagen.net/post/cace-explored/
Tue, 12 Sep 2017 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/cace-explored/<p>Inspired by a free online <a href="https://courseplus.jhu.edu/core/index.cfm/go/course.home/coid/8155/">course</a> titled <em>Complier Average Causal Effects (CACE) Analysis</em> and taught by Booil Jo and Elizabeth Stuart (through Johns Hopkins University), I’ve decided to explore the topic a little bit. My goal here isn’t to explain CACE analysis in extensive detail (you should definitely go take the course for that), but to describe the problem generally and then (of course) simulate some data. A plot of the simulated data gives a sense of what we are estimating and assuming. And I end by describing two simple methods to estimate the CACE, which we can compare to the truth (since this is a simulation); next time, I will describe a third way.</p>
<div id="non-compliance-in-randomized-trials" class="section level3">
<h3>Non-compliance in randomized trials</h3>
<p>Here’s the problem. In a randomized trial, investigators control the randomization process; they determine if an individual is assigned to the treatment group or control group (I am talking about randomized trials here, but many of these issues can apply in the context of observed or quasi-experimental settings, but require more data and assumptions). However, those investigators may not have as much control over the actual treatments that study participants receive. For example, an individual randomized to some type of behavioral intervention may opt not to take advantage of the intervention. Likewise, someone assigned to control may, under some circumstances, figure out a way to get services that are quite similar to the intervention. In all cases, the investigator is able to collect outcome data on all of these patients, regardless of whether or not they followed directions. (This is different from drop-out or loss-to-followup, where outcome data may be missing.)</p>
</div>
<div id="cace" class="section level3">
<h3>CACE</h3>
<p>Typically, studies analyze data based on treatment <em>assignment</em> rather than treatment <em>received</em>. This focus on assignment is called an intention-to-treat (ITT) analysis. In a policy environment, the ITT may make a lot of sense; we are answering this specific question: “What is the overall effect in the real world where the intervention is made available yet some people take advantage of it while others do not?” Alternatively, researchers may be interested in a different question: “What is the causal effect of actually receiving the treatment?”</p>
<p>Now, to answer the second question, there are numerous subtle issues that you need to wrestle with (again, go take the <a href="https://courseplus.jhu.edu/core/index.cfm/go/course.home/coid/8155/">course</a>). But, long story short, we need to (1) identify the folks in the <em>intervention</em> group who actually do what they have been encouraged to do (receive the intervention) but only because they were encouraged, and not because they would have received the intervention anyway had they not been randomized, and compare their outcomes with (2) the folks in the control group who did not seek out the intervention on their own initiative but would have received the intervention had they been encouraged. These two groups are considered to be <em>compliers</em> - they would always do what they are told in the context of the study. And the effect of the intervention that is based on outcomes from this type of patient is called the <em>complier average causal effect</em> (CACE).</p>
<p>The biggest challenge in estimating the CACE is that we cannot actually identify if people are compliers or not. Some of those receiving the treatment in the intervention group are <em>compliers</em>, but the rest are <em>always-takers</em>. Some of those not receiving the treatment in the control arm are also <em>compliers</em>, but the others are <em>never-takers</em>. There are several methods available to overcome this challenge, two of which I will briefly mention here: method of moments and instrumental variables.</p>
</div>
<div id="using-potential-outcomes-to-define-cace" class="section level3">
<h3>Using potential outcomes to define CACE</h3>
<p>In an earlier <a href="https://www.rdatagen.net/post/be-careful/">post</a>, I briefly introduced the idea of potential outcomes. Since we are talking about causal relationships, they are useful here. Let <span class="math inline">\(Z\)</span> be the randomization indicator, with <span class="math inline">\(Z=1\)</span> for those randomized to the intervention and <span class="math inline">\(Z=0\)</span> for those in control. <span class="math inline">\(M\)</span> is the indicator of whether or not the individual received the intervention. Since <span class="math inline">\(M\)</span> is an outcome, we can imagine the potential outcomes <span class="math inline">\(M_{0i}\)</span> and <span class="math inline">\(M_{1i}\)</span>, or what the value of <span class="math inline">\(M_i\)</span> would be for an individual if <span class="math inline">\(Z_i=0\)</span> or <span class="math inline">\(Z_i=1\)</span>, respectively. And let us say <span class="math inline">\(Y\)</span> is the outcome, so we have potential outcomes that can be written as <span class="math inline">\(Y_{0,M_0}\)</span> and <span class="math inline">\(Y_{1,M_1}\)</span>. Think about that for a bit.</p>
<p>Using these potential outcomes, we can define the compliers and the CACE. Compliers are people for whom <span class="math inline">\(M_0 = 0\)</span> <em>and</em> <span class="math inline">\(M_1 = 1\)</span>. (Never-takers look like this: <span class="math inline">\(M_0 = 0\)</span> <em>and</em> <span class="math inline">\(M_1 = 0\)</span>. Always-takers: <span class="math inline">\(M_0 = 1\)</span> <em>and</em> <span class="math inline">\(M_1 = 1\)</span>). Now, the average causal effect is the average difference between potential outcomes. In this case, the CACE is <span class="math inline">\(E[Y_{1,M_1} - Y_{0,M_0}|M_0 = 0 \ \& \ M_1 = 1]\)</span>. The patients for whom <span class="math inline">\(M_0 = 0\)</span> <em>and</em> <span class="math inline">\(M_1 = 1\)</span> are the compliers.</p>
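<p>To make these definitions concrete, here is a minimal sketch (the potential outcomes below are hypothetical, made up purely for illustration) that classifies individuals based on the pair <span class="math inline">\((M_0, M_1)\)</span>, assuming no one does the opposite of their assignment:</p>
<pre class="r"><code># hypothetical potential treatment receipt for three individuals
po <- data.frame(M0 = c(0, 0, 1), M1 = c(1, 0, 1))

# classify compliance type from the pair (M0, M1)
po$type <- with(po, ifelse(M0 == 0 & M1 == 1, "complier",
                    ifelse(M0 == 1 & M1 == 1, "always-taker", "never-taker")))
po</code></pre>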
</div>
<div id="simulating-data" class="section level3">
<h3>Simulating data</h3>
<p>The data simulation will be based on generating potential outcomes. Observed outcomes will be a function of randomization group and complier status.</p>
<pre class="r"><code>options(digits = 3)
library(data.table)
library(simstudy)
library(ggplot2)
# Status :
# 1 = A(lways taker)
# 2 = N(ever taker)
# 3 = C(omplier)
def <- defDataAdd(varname = "Status",
formula = "0.20; 0.40; 0.40", dist = "categorical")
# potential outcomes (PO) for intervention
def <- defDataAdd(def, varname = "M0",
formula = "(Status == 1) * 1", dist = "nonrandom")
def <- defDataAdd(def, varname = "M1",
formula = "(Status != 2) * 1", dist = "nonrandom")
# observed intervention status based on randomization and PO
def <- defDataAdd(def, varname = "m",
formula = "(z==0) * M0 + (z==1) * M1", dist = "nonrandom")
# potential outcome for Y (depends on potential outcome for M)
set.seed(888)
dt <- genData(2000)
dt <- trtAssign(dt, n=2, grpName = "z")
dt <- addColumns(def, dt)
# using data.table functions here, not simstudy - I need to add
# this functionality to simstudy
dt[, AStatus := factor(Status,
labels = c("Always-taker","Never-taker", "Complier"))]
# potential outcomes depend on group status - A, N, or C
dt[Status == 1, Y0 := rnorm(.N, 1.0, sqrt(0.25))]
dt[Status == 2, Y0 := rnorm(.N, 0.0, sqrt(0.36))]
dt[Status == 3, Y0 := rnorm(.N, 0.1, sqrt(0.16))]
dt[Status == 1, Y1 := rnorm(.N, 1.0, sqrt(0.25))]
dt[Status == 2, Y1 := rnorm(.N, 0.0, sqrt(0.36))]
dt[Status == 3, Y1 := rnorm(.N, 0.9, sqrt(0.49))]
# observed outcome function of actual treatment
dt[, y := (m == 0) * Y0 + (m == 1) * Y1]
dt</code></pre>
<pre><code>## id z Status M0 M1 m AStatus Y0 Y1 y
## 1: 1 1 3 0 1 1 Complier 0.5088 0.650 0.6500
## 2: 2 1 3 0 1 1 Complier 0.1503 0.729 0.7292
## 3: 3 1 2 0 0 0 Never-taker 1.4277 0.454 1.4277
## 4: 4 0 3 0 1 0 Complier 0.6393 0.998 0.6393
## 5: 5 0 1 1 1 1 Always-taker 0.6506 1.927 1.9267
## ---
## 1996: 1996 0 3 0 1 0 Complier -0.9554 0.114 -0.9554
## 1997: 1997 0 3 0 1 0 Complier 0.0366 0.903 0.0366
## 1998: 1998 1 3 0 1 1 Complier 0.3606 1.098 1.0982
## 1999: 1999 1 3 0 1 1 Complier 0.6651 1.708 1.7082
## 2000: 2000 0 3 0 1 0 Complier 0.2207 0.531 0.2207</code></pre>
<p>The plot shows outcomes <span class="math inline">\(y\)</span> for the two randomization groups. The ITT estimate would be based on an average of all the points in each group, regardless of color or shape. The difference between the average of the black circles in the two groups represents the CACE.</p>
<pre class="r"><code>ggplot(data=dt, aes(y=y, x = factor(z, labels = c("Assigned to control",
"Assigned to treatment")))) +
geom_jitter(aes(shape=factor(m, labels = c("No treatment", "Treatment")),
color=AStatus),
width = 0.35) +
scale_shape_manual(values = c(1,19)) +
scale_color_manual(values = c("#e1d07d", "#7d8ee1", "grey25")) +
scale_y_continuous(breaks = seq(-3, 3, 1), labels = seq(-3, 3, 1)) +
theme(legend.title = element_blank(),
axis.title.x = element_blank(),
panel.grid.minor.y = element_blank(),
panel.grid.major.x = element_blank())</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-09-08-iv-em-two-important-ideas-explored_files/figure-html/unnamed-chunk-2-1.png" width="672" /></p>
<p>In the real world, we cannot see the colors, yet we need to estimate the effect as if we could, or at least use a method that bypasses that need:</p>
<p><img src="https://www.rdatagen.net/post/2017-09-08-iv-em-two-important-ideas-explored_files/figure-html/unnamed-chunk-3-1.png" width="672" /></p>
</div>
<div id="estimating-cace-using-observed-data" class="section level3">
<h3>Estimating CACE using observed data</h3>
<p>The challenge is to estimate the CACE using <em>observed</em> data only, since that is all we have (along with a couple of key assumptions). We start off by claiming that the average causal effect of treatment <strong>assignment</strong> (<span class="math inline">\(ACE\)</span>) is a weighted average of the three sub-populations of <em>compliers</em>, <em>never-takers</em>, and <em>always-takers</em>:</p>
<p><span class="math display">\[ ACE = \pi_C \times CACE + \pi_N \times NACE + \pi_A \times AACE, \]</span>
where <span class="math inline">\(CACE\)</span> is the average causal effect of treatment assignment for the subset of those in the sample who are <em>compliers</em>, <span class="math inline">\(NACE\)</span> is the average causal effect of treatment assignment for the subset who are <em>never-takers</em>, and <span class="math inline">\(AACE\)</span> is the average causal effect for those who are <em>always-takers</em>. <span class="math inline">\(\pi_C\)</span>, <span class="math inline">\(\pi_N\)</span>, and <span class="math inline">\(\pi_A\)</span> represent the sample proportions of compliers, never-takers, and always-takers, respectively.</p>
<p>A key assumption often made to estimate <span class="math inline">\(CACE\)</span> is known as the <em>exclusion restriction</em>: treatment assignment has an effect on the outcome <em>only</em> if it changes the actual treatment taken. (A second key assumption is that there are no <em>defiers</em>, or folks who do the opposite of what they are told. This is called the monotonicity assumption.) This <em>exclusion restriction</em> implies that both <span class="math inline">\(NACE=0\)</span> and <span class="math inline">\(AACE=0\)</span>, since in both cases the treatment <em>received</em> is the same regardless of treatment assignment. In that case, we can re-write the equality as</p>
<p><span class="math display">\[ ACE = \pi_C \times CACE,\]</span></p>
<p>and finally with a little re-arranging,</p>
<p><span class="math display">\[ CACE = \frac{ACE}{\pi_C}. \]</span>
So, in order to estimate <span class="math inline">\(CACE\)</span>, we need to be able to estimate <span class="math inline">\(ACE\)</span> and <span class="math inline">\(\pi_C\)</span>. Fortunately, we are in a position to do this. Since this is a randomized trial, the average causal effect of treatment assignment is just the difference in observed outcomes for the two treatment assignment groups:</p>
<p><span class="math display">\[ ACE = E[Y | Z = 1] - E[Y | Z = 0] \]</span>
This also happens to be the <em>intention-to-treat</em> (<span class="math inline">\(ITT\)</span>) estimate.</p>
<p><span class="math inline">\(\pi_C\)</span> is a little harder, but in this simplified scenario, not that hard. We just need to follow a little logic: for the control group, we can identify the <em>always-takers</em> (they’re the ones who actually receive the treatment), so we know <span class="math inline">\(\pi_A\)</span> for the control group. This can be estimated as <span class="math inline">\(P(M=1|Z=0)\)</span>. And, since the study was randomized, the distribution of <em>always-takers</em> in the treatment group must be the same. So, we can use <span class="math inline">\(\pi_A\)</span> estimated from the control group as an estimate for the treatment group.</p>
<p>For the treatment group, we know that <span class="math inline">\(\pi_C + \pi_A = P(M = 1 | Z = 1)\)</span>. That is, everyone who receives the treatment in the treatment group is either a complier or an always-taker. With this, we can say</p>
<p><span class="math display">\[\pi_C = P(M=1 | Z = 1) - \pi_A.\]</span></p>
<p>But, of course, we argued above that we can estimate <span class="math inline">\(\pi_A\)</span> as <span class="math inline">\(P(M=1|Z=0)\)</span>. So, finally, we have</p>
<p><span class="math display">\[\pi_C = P(M=1 | Z = 1) - P(M=1|Z=0).\]</span>
This gives us a method of moments estimator for <span class="math inline">\(CACE\)</span> from observed data:</p>
<p><span class="math display">\[ CACE = \frac{ACE}{\pi_C} = \frac{E[Y | Z = 1] - E[Y | Z = 0]}{P(M=1 | Z = 1) - P(M=1|Z=0)}. \]</span></p>
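<p>As a quick sanity check of this identity, we can plug in the values used in the simulation above: 40% of the sample are compliers with a true effect of <span class="math inline">\(0.9 - 0.1 = 0.8\)</span>, while the never-takers and always-takers contribute nothing under the exclusion restriction:</p>
<pre class="r"><code># back-of-envelope check of CACE = ACE / pi_C
pi_C <- 0.4              # proportion of compliers in the simulation
trueCACE <- 0.9 - 0.1    # complier effect used to generate the data
ACE <- pi_C * trueCACE   # NACE and AACE are 0, so those terms drop out
ACE / pi_C               # recovers the true CACE of 0.8</code></pre>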
</div>
<div id="the-simulated-estimate" class="section level2">
<h2>The simulated estimate</h2>
<pre class="r"><code>ACE <- dt[z==1, mean(y)] - dt[z==0, mean(y)] # Also ITT
ACE</code></pre>
<pre><code>## [1] 0.307</code></pre>
<pre class="r"><code>pi_C <- dt[z==1, mean(m)] - dt[z==0, mean(m)] # strength of instrument
pi_C</code></pre>
<pre><code>## [1] 0.372</code></pre>
<pre class="r"><code>truth <- dt[AStatus == "Complier", mean(Y1 - Y0)]
truth</code></pre>
<pre><code>## [1] 0.81</code></pre>
<pre class="r"><code>ACE/pi_C</code></pre>
<pre><code>## [1] 0.826</code></pre>
<p>A method quite commonly used to analyze non-compliance is the instrumental variable model estimated with two-stage least squares regression. The R package <code>ivpack</code> is one of several that facilitate this type of analysis. A discussion of this methodology is far beyond the scope of this post. In any case, we can see that in this simple example, the IV estimate is the same as the method of moments estimator (by looking at the coefficient estimate of <code>m</code>).</p>
<pre class="r"><code>library(ivpack)
ivmodel <- ivreg(formula = y ~ m | z, data = dt, x = TRUE)
summary(ivmodel)</code></pre>
<pre><code>##
## Call:
## ivreg(formula = y ~ m | z, data = dt, x = TRUE)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.19539 -0.36249 0.00248 0.35859 2.27902
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.0932 0.0302 3.09 0.002 **
## m 0.8262 0.0684 12.08 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.569 on 1998 degrees of freedom
## Multiple R-Squared: 0.383, Adjusted R-squared: 0.383
## Wald test: 146 on 1 and 1998 DF, p-value: <2e-16</code></pre>
<p>So, again, if I have piqued your interest in this very rich and interesting topic, or if I have totally confused you, go check out the <a href="https://courseplus.jhu.edu/core/index.cfm/go/course.home/coid/8155/">course</a>. In my next post, I will describe a simple latent variable model using a maximum likelihood EM (expectation-maximization) algorithm that arrives at an estimate by predicting complier status.</p>
</div>
Further considerations of a hidden process underlying categorical responses
https://www.rdatagen.net/post/a-hidden-process-part-2-of-2/
Tue, 05 Sep 2017 00:00:00 +0000keith.goldfeld@nyumc.org (Keith Goldfeld)https://www.rdatagen.net/post/a-hidden-process-part-2-of-2/<p>In my <a href="https://www.rdatagen.net/post/ordinal-regression/">previous post</a>, I described a continuous data generating process that can be used to generate discrete, categorical outcomes. In that post, I focused largely on binary outcomes and simple logistic regression just because things are always easier to follow when there are fewer moving parts. Here, I am going to focus on a situation where we have <em>multiple</em> outcomes, but with a slight twist - these groups of interest can be interpreted in an ordered way. This conceptual latent process can provide another perspective on the models that are typically applied to analyze these types of outcomes.</p>
<div id="categorical-outcomes-generally" class="section level3">
<h3>Categorical outcomes, generally</h3>
<p>Certainly, group membership is not necessarily intrinsically ordered. In a general categorical or multinomial outcome, a group does not necessarily have any quantitative relationship vis-à-vis the other groups. For example, if we were interested in primary type of meat consumption, individuals might be grouped into those favoring (1) chicken, (2) beef, (3) pork, or (4) no meat. We might be interested in estimating the different distributions across the four groups for males and females. However, since there is no natural ranking or ordering of these meat groups (though maybe I am just not creative enough), we are limited to comparing the odds of being in one group relative to another for two exposure groups A and B, such as</p>
<p><span class="math display">\[\small{\frac{P(Beef|Group = A)}{P(Chicken|Group = A)} \ vs. \frac{P(Beef|Group = B)}{P(Chicken|Group = B)}}\]</span>.</p>
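<p>This kind of comparison is easy to compute directly; here is a small sketch (the category probabilities are made up purely for illustration) calculating the relative odds of favoring beef over chicken in each group:</p>
<pre class="r"><code># hypothetical category distributions for groups A and B
pA <- c(chicken = 0.40, beef = 0.30, pork = 0.20, none = 0.10)
pB <- c(chicken = 0.25, beef = 0.45, pork = 0.20, none = 0.10)

# odds of beef relative to chicken within each group
oddsA <- pA["beef"] / pA["chicken"]  # 0.75
oddsB <- pB["beef"] / pB["chicken"]  # 1.80
unname(oddsB / oddsA)                # ratio of these odds, comparing B to A</code></pre>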
</div>
<div id="ordinal-outcomes" class="section level3">
<h3>Ordinal outcomes</h3>
<p>Order becomes relevant when the categories take on meanings related to strength of opinion or agreement (as in a Likert-type response) or frequency. In the motivating example I described in the initial post, the response of interest was the frequency of meat consumption in a month, so the response categories could be (1) none, (2) 1-3 times per month, (3) once per week, (4) 2-6 times per week, (5) 1 or more times per day. Individuals in group 2 consume meat more frequently than group 1, individuals in group 3 consume meat more frequently than those in both groups 1 and 2, and so on. There is a natural quantitative relationship between the groups.</p>
<p>Once we have thrown ordering into the mix, we can expand our possible interpretations of the data. In particular it is quite common to summarize the data by looking at <em>cumulative</em> probabilities, odds, or log-odds. Comparisons of different exposures or individual characteristics typically look at how these cumulative measures vary across the different exposures or characteristics. So, if we were interested in cumulative odds, we would compare <span class="math display">\[\small{\frac{P(Response = 1|Group = A)}{P(Response > 1|Group = A)} \ \ vs. \ \frac{P(Response = 1|Group = B)}{P(Response > 1|Group = B)}},\]</span></p>
<p><span class="math display">\[\small{\frac{P(Response \leq 2|Group = A)}{P(Response > 2|Group = A)} \ \ vs. \ \frac{P(Response \leq 2|Group = B)}{P(Response > 2|Group = B)}},\]</span></p>
<p>and continue until the last (in this case, fourth) comparison</p>
<p><span class="math display">\[\small{\frac{P(Response \leq 4|Group = A)}{P(Response = 5|Group = A)} \ \ vs. \ \frac{P(Response \leq 4|Group = B)}{P(Response = 5|Group = B)}}.\]</span></p>
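<p>These cumulative quantities can be computed directly from any response distribution; here is a small sketch using made-up probabilities for the five categories:</p>
<pre class="r"><code># hypothetical response distribution over the 5 frequency categories
p <- c(0.10, 0.30, 0.35, 0.20, 0.05)  # P(Response = k), k = 1..5
cp <- cumsum(p)[1:4]                  # P(Response <= k), k = 1..4
cbind(cprob = cp, codds = cp / (1 - cp), lcodds = log(cp / (1 - cp)))</code></pre>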
</div>
<div id="multiple-responses-multiple-thresholds" class="section level3">
<h3>Multiple responses, multiple thresholds</h3>
<p>The latent process that was described for the binary outcome is extended to the multinomial outcome by the addition of more thresholds. These thresholds define the portions of the density that define the probability of each possible response. If there are <span class="math inline">\(k\)</span> possible responses (in the meat example, we have 5), then there will be <span class="math inline">\(k-1\)</span> thresholds. The area under the logistic density curve of each of the regions defined by those thresholds (there will be <span class="math inline">\(k\)</span> distinct regions) represents the probability of each possible response tied to that region. In the example here, we define five regions of a logistic density by setting the four thresholds. We can say that this underlying continuous distribution represents the probability distribution of categorical responses for a specific population, which we are calling <em>Group A</em>.</p>
<pre class="r"><code># preliminary libraries and plotting defaults
library(ggplot2)
library(data.table)
my_theme <- function() {
theme(panel.background = element_rect(fill = "grey90"),
panel.grid = element_blank(),
axis.ticks = element_line(colour = "black"),
panel.spacing = unit(0.25, "lines"),
plot.title = element_text(size = 12, vjust = 0.5, hjust = 0),
panel.border = element_rect(fill = NA, colour = "gray90"))
}
# create data points density curve
x <- seq(-6, 6, length = 1000)
pdf <- dlogis(x, location = 0, scale = 1)
dt <- data.table(x, pdf)
# set thresholds for Group A
thresholdA <- c(-2.1, -0.3, 1.4, 3.6)
pdf <- dlogis(thresholdA)
grpA <- data.table(threshold = thresholdA, pdf)
aBreaks <- c(-6, grpA$threshold, 6)
# plot density with cutpoints
dt[, grpA := cut(x, breaks = aBreaks, labels = F, include.lowest = TRUE)]
p1 <- ggplot(data = dt, aes(x = x, y = pdf)) +
geom_line() +
geom_area(aes(x = x, y = pdf, group = grpA, fill = factor(grpA))) +
geom_hline(yintercept = 0, color = "grey50") +
annotate("text", x = -5, y = .28, label = "Group A", size = 5) +
scale_fill_manual(values = c("#d0d7d1", "#bbc5bc", "#a6b3a7", "#91a192", "#7c8f7d"),
labels = c("None", "1-3/month", "1/week", "2-6/week", "1+/day"),
name = "Frequency") +
scale_x_continuous(breaks = thresholdA) +
scale_y_continuous(limits = c(0, 0.3), name = "Density") +
my_theme() +
theme(legend.position = c(.85, .7),
legend.background = element_rect(fill = "grey90"),
legend.key = element_rect(color = "grey90"))
p1</code></pre>
<p><img src="https://www.rdatagen.net/post/2017-09-04-a-hidden-process-part-2-of-2_files/figure-html/threshold-1.png" width="480" /></p>
<p>The area for each of the five regions can easily be calculated, where each area represents the probability of each response:</p>
<pre class="r"><code>pA= plogis(c(thresholdA, Inf)) - plogis(c(-Inf, thresholdA))
probs <- data.frame(pA)
rownames(probs) <- c("P(Resp = 1)", "P(Resp = 2)",
"P(Resp = 3)", "P(Resp = 4)", "P(Resp = 5)")
probs</code></pre>
<pre><code>## pA
## P(Resp = 1) 0.109
## P(Resp = 2) 0.316
## P(Resp = 3) 0.377
## P(Resp = 4) 0.171
## P(Resp = 5) 0.027</code></pre>
<p>As I’ve already mentioned, when we characterize a multinomial response, we typically do so in terms of cumulative probabilities. I’ve calculated several quantities below, and we can see that the logs of the cumulative odds for this particular group are indeed the threshold values that we used to define the sub-regions.</p>
<pre class="r"><code># cumulative probabilities defined by the threshold
probA <- data.frame(
cprob = plogis(thresholdA),
codds = plogis(thresholdA)/(1-plogis(thresholdA)),
lcodds = log(plogis(thresholdA)/(1-plogis(thresholdA)))
)
rownames(probA) <- c("P(Grp < 2)", "P(Grp < 3)", "P(Grp < 4)", "P(Grp < 5)")
probA</code></pre>
<pre><code>## cprob codds lcodds
## P(Grp < 2) 0.11 0.12 -2.1
## P(Grp < 3) 0.43 0.74 -0.3
## P(Grp < 4) 0.80 4.06 1.4
## P(Grp < 5) 0.97 36.60 3.6</code></pre>
<p>The last column of the table above matches the thresholds defined in vector <code>thresholdA</code>.</p>
<pre class="r"><code>thresholdA</code></pre>
<pre><code>## [1] -2.1 -0.3 1.4 3.6</code></pre>
</div>
<div id="comparing-response-distributions-of-different-populations" class="section level3">
<h3>Comparing response distributions of different populations</h3>
<p>In the cumulative logit model, the underlying assumption is that the odds ratio of one population relative to another is constant across all the possible responses. This means that all of the cumulative odds ratios are equal:</p>
<p><span class="math display">\[\small{\frac{codds(P(Resp = 1 | A))}{codds(P(Resp = 1 | B))} = \frac{codds(P(Resp \leq 2 | A))}{codds(P(Resp \leq 2 | B))} = \ ... \ = \frac{codds(P(Resp \leq 4 | A))}{codds(P(Resp \leq 4 | B))}}\]</span></p>
<p>In terms of the underlying process, this means that each of the thresholds shifts the same amount, as shown below, where we add 1.1 units to each threshold that was set for Group A:</p>
<pre class="r"><code># Group B threshold is an additive shift to the right
thresholdB <- thresholdA + 1.1
pdf <- dlogis(thresholdB)
grpB <- data.table(threshold = thresholdB, pdf)
bBreaks <- c(-6, grpB$threshold, 6)</code></pre>
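<p>A quick check (restating the thresholds for self-containment) confirms that this additive shift on the log-odds scale implies a constant cumulative odds ratio - here <span class="math inline">\(e^{1.1} \approx 3\)</span> - at every cutoff:</p>
<pre class="r"><code># cumulative odds at each threshold for the two groups
thresholdA <- c(-2.1, -0.3, 1.4, 3.6)
thresholdB <- thresholdA + 1.1
coddsA <- plogis(thresholdA) / (1 - plogis(thresholdA))
coddsB <- plogis(thresholdB) / (1 - plogis(thresholdB))
coddsB / coddsA  # constant across the cutoffs, equal to exp(1.1)</code></pre>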
<p>Based on this shift, we can see that the probability distribution for Group B is quite different:</p>
<pre class="r"><code>pB = plogis(c(thresholdB, Inf)) - plogis(c(-Inf, thresholdB))
probs <- data.frame(pA, pB)
rownames(probs) &