Getting to the bottom of TMLE: targeting in action

In the previous post, I worked my way through some key elements of TMLE theory as I tried to understand how it all works. At its essence, TMLE is focused on getting the efficient influence function (EIF) to behave properly. When that happens, the estimator of the target parameter behaves as if it were based on a random sample from the true data-generating distribution.

Estimating the outcome and treatment (or exposure) models is an important part of constructing the EIF, but they are treated as nuisance components and do not need to be perfectly specified. The targeting step can adjust for errors in these nuisance estimates, often recovering the desired empirical behavior of the EIF and improving the resulting estimate of the target parameter, even when one of the nuisance models is misspecified.
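The targeting step can be made concrete with a small numerical sketch. Everything below is hypothetical and simplified (and in Python here, though the posts themselves use R): a binary outcome, the average treatment effect as the target, a deliberately crude initial outcome model that ignores the confounder, and the standard one-parameter logistic fluctuation along the "clever covariate" built from the propensity score.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical simulated data: confounder W, treatment A, binary outcome Y
W = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-0.4 * W)))
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.2 * W + 0.5 * A))))

# Deliberately crude initial outcome model (nuisance Q): arm-specific means,
# ignoring W entirely -- a misspecified Q
Q1 = np.full(n, Y[A == 1].mean())  # initial estimate of P(Y=1 | A=1)
Q0 = np.full(n, Y[A == 0].mean())  # initial estimate of P(Y=1 | A=0)
QA = np.where(A == 1, Q1, Q0)

# Treatment model (propensity score g), fit by logistic regression via Newton
X = np.column_stack([np.ones(n), W])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (A - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)
g = 1 / (1 + np.exp(-X @ beta))

logit = lambda p: np.log(p / (1 - p))
expit = lambda x: 1 / (1 + np.exp(-x))

# Clever covariate H and the targeting step: a one-parameter logistic
# fluctuation of the initial Q, solved for epsilon by Newton's method
H = A / g - (1 - A) / (1 - g)
eps = 0.0
for _ in range(50):
    Qeps = expit(logit(np.clip(QA, 1e-6, 1 - 1e-6)) + eps * H)
    score = np.sum(H * (Y - Qeps))
    info = np.sum(H**2 * Qeps * (1 - Qeps))
    eps += score / info

# Updated counterfactual predictions and the targeted ATE estimate
Q1_star = expit(logit(np.clip(Q1, 1e-6, 1 - 1e-6)) + eps / g)
Q0_star = expit(logit(np.clip(Q0, 1e-6, 1 - 1e-6)) - eps / (1 - g))
ate_tmle = np.mean(Q1_star - Q0_star)

# After targeting, the empirical mean of the EIF is (numerically) zero
eif = H * (Y - expit(logit(np.clip(QA, 1e-6, 1 - 1e-6)) + eps * H)) \
      + (Q1_star - Q0_star) - ate_tmle
```

Solving for epsilon drives the empirical mean of the EIF to zero, which is the "desired empirical behavior" described above, even though the initial outcome model here is badly misspecified.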

[Read More]

Getting to the bottom of TMLE: forcing the target to behave

In the last couple of posts (starting here), I’ve tried to unpack some of the ideas that sit underneath TMLE: viewing parameters as functionals of a distribution, thinking about sampling as a perturbation, and understanding how influence functions describe the leading behavior of estimation error. In the second post, I showed through simulation how errors in nuisance estimation interact with sampling variability, though their contribution is typically smaller than that of the sampling fluctuation itself. This brings us to the central idea behind TMLE.
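The "leading behavior" idea can be seen with a toy functional. The sketch below is hypothetical (in Python here, rather than the posts' R): for the nonlinear target psi(P) = (E[Y])^2, the influence function is IF(y) = 2 E[Y] (y - E[Y]), and the plug-in error splits exactly into the empirical mean of the IF plus a second-order remainder.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 1.5  # true E[Y]; the target is psi = mu**2

# For psi(P) = (E[Y])^2, the influence function is IF(y) = 2*mu*(y - mu).
# Decompose the plug-in error over repeated samples.
errors, if_means, remainders = [], [], []
for _ in range(2000):
    y = rng.normal(mu, 1, size=200)
    psi_hat = y.mean() ** 2               # plug-in estimate
    err = psi_hat - mu**2                 # estimation error
    if_mean = np.mean(2 * mu * (y - mu))  # leading (linear) IF term
    errors.append(err)
    if_means.append(if_mean)
    remainders.append(err - if_mean)      # exactly (ybar - mu)**2 here

errors = np.array(errors)
if_means = np.array(if_means)
remainders = np.array(remainders)
# The linear IF term dominates; the remainder is second order, O(1/n)
```

The spread of the remainder is an order of magnitude smaller than that of the IF term at this sample size, which is what "describes the leading behavior of estimation error" means in practice.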

[Read More]

Getting to the bottom of TMLE: the (almost) vanishing nuisance interaction

In the previous post, I argued that understanding TMLE starts with understanding how estimation error behaves. In particular, we saw that influence functions allow us to separate sampling variability from nuisance estimation error. But something subtle happens when nuisance models are estimated rather than known. The interaction term that captures their effect on the target parameter appears to shrink as the sample size grows, sometimes quite a bit. In this post, I explore that behavior through simulation. We’ll see that the nuisance interaction does shrink (though perhaps not fast enough to ignore).
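The shrinking interaction can be illustrated with a toy decomposition. The setup below is hypothetical (in Python here; the posts use R): the target is E[m(W)] with m(w) = E[Y | W = w], the nuisance m-hat is a least-squares fit on an independent training sample, and the "interaction" is the empirical-process term (P_n - P)(m-hat - m).

```python
import numpy as np

rng = np.random.default_rng(2)

def one_rep(n):
    # Independent training sample to estimate the nuisance m(w) = E[Y|W=w]
    W_tr = rng.normal(size=n)
    Y_tr = W_tr + rng.normal(size=n)  # true regression is m(w) = w
    X = np.column_stack([np.ones(n), W_tr])
    a, b = np.linalg.lstsq(X, Y_tr, rcond=None)[0]

    # Evaluation sample: split the error of mean(m_hat(W)) into pieces
    W = rng.normal(size=n)
    mhat = a + b * W
    sampling_term = np.mean(W) - 0.0  # (P_n - P) applied to the true m
    # Interaction (empirical-process) term: (P_n - P)(m_hat - m);
    # E[m_hat(W) - m(W)] = a because E[W] = 0 in this toy model
    interaction = np.mean(mhat - W) - a
    return sampling_term, interaction

results = {}
for n in (200, 3200):
    reps = np.array([one_rep(n) for _ in range(1000)])
    results[n] = reps.std(axis=0)  # (sd of sampling term, sd of interaction)
```

Going from n = 200 to n = 3200, the interaction's spread should fall roughly 16-fold (order 1/n), while the sampling term's falls only about 4-fold (order 1/sqrt(n)): the interaction shrinks faster, but it is not zero at any finite n.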

[Read More]

Getting to the bottom of TMLE: influence functions and perturbations

I first encountered TMLE—sometimes spelled out as targeted maximum likelihood estimation or targeted minimum loss-based estimation—about twelve or so years ago when Mark van der Laan, one of the original developers who literally wrote the book, gave a talk at NYU. It sounded very cool and seemed quite revolutionary and important, but it was really challenging to follow all of the details. Following that talk, I tried to tackle some of the literature, but quickly found it a challenge to penetrate. What struck me most was not the algorithmic complexity (which it certainly had), but the density of the language, the terminology, and the underlying math.

[Read More]