I was aware that the subject covariates in my hierarchical 1-dimensional ideal point model induced greater dependence in the chain. The reason is obvious: the full conditional update of the subject parameters θ now incorporates two pieces of information from the previous iteration (both η and β), which makes for much slower-mixing chains.
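To spell that out in generic notation (mine, not the package's): with latent data $z_{ij} = b_i\theta_j - a_i + \varepsilon_{ij}$, $\varepsilon_{ij} \sim N(0,1)$, item parameters $(a_i, b_i)$, and a hierarchical prior $\theta_j \sim N(x_j'\gamma, \sigma^2)$ on the subject parameters, the full conditional is

$$
\theta_j \mid z, a, b, \gamma, \sigma^2 \;\sim\; \mathcal{N}\!\left(
\frac{\sum_i b_i\,(z_{ij} + a_i) + x_j'\gamma/\sigma^2}{\sum_i b_i^2 + 1/\sigma^2},\;
\frac{1}{\sum_i b_i^2 + 1/\sigma^2}\right),
$$

so each θ draw depends on the previous sweep's item parameters and on the hierarchical regression term, where the non-hierarchical sampler only has the first piece.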
Parameter expansion (PX-DA) actually improves mixing over the standard data augmentation scheme. Liu and Wu (1999) prove that convergence is always faster, but it still seems like magic. The method is nearly identical to the Marginal Data Augmentation of Meng & van Dyk that Simon Jackman just added to ideal, except that I sample the expansion parameter α (the residual variance of the latent data) from an inverse-Gamma distribution whose prior parameters you can change. MDA fixes the prior value of the expansion parameter at 1 (equivalent to IG(a, b) as a → 0).
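Roughly, the extra step inside each Gibbs sweep looks like the sketch below. This is just my shorthand for the idea, not the package's actual implementation; the function and argument names are made up for illustration.

```r
## Sketch of the expansion step only: after the usual truncated-normal draw of
## the latent data Z, draw alpha^2 from an inverse-Gamma on the residual
## variance and rescale the latents before updating eta, theta, and beta as
## usual. Names here are illustrative, not MCMCpack internals.
px_rescale <- function(Z, mu, a0, b0) {
  ## Z  : latent data (items x subjects); mu : current linear predictor
  ## a0, b0 : the changeable IG prior parameters on alpha^2
  resid  <- Z - mu
  n      <- length(resid)
  ## alpha^2 | Z ~ IG(a0 + n/2, b0 + sum(resid^2)/2)
  alpha2 <- 1 / rgamma(1, shape = a0 + n / 2, rate = b0 + sum(resid^2) / 2)
  ## dividing by alpha maps the expanded model back to unit residual variance
  Z / sqrt(alpha2)
}
```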
I'm not sure when ADM and KQ plan the next MCMCpack release, but the default in MCMCirtHier1d is px=TRUE. The figure below shows the improvement in autocorrelation: for two chains of 10k iterations thinned every 20 samples, after a burn-in of 50k iterations, the autocorrelation (which drops off really, really slowly for some subjects) falls below even that of the naïve model without PX (not shown). The black line is with PX, the red line without.
[Figure: autocorrelation of subject parameters by lag; black = with PX, red = without]
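For anyone who wants to reproduce the comparison once the release is out, a call along these lines should do it. Everything except px=TRUE is an assumption on my part (the data objects, the Xjdata/px.a0/px.b0 argument names, and the column selection are illustrative and may differ in the released version):

```r
library(MCMCpack)  # development version with MCMCirtHier1d
library(coda)

## votes: subjects x items response matrix; Xj: subject-level covariates.
## 50k burn-in, 10k retained draws thinned every 20, as in the figure.
fit.px   <- MCMCirtHier1d(votes, Xjdata = Xj,
                          burnin = 50000, mcmc = 200000, thin = 20,
                          px = TRUE, px.a0 = 10, px.b0 = 10)
fit.nopx <- MCMCirtHier1d(votes, Xjdata = Xj,
                          burnin = 50000, mcmc = 200000, thin = 20,
                          px = FALSE)

## Compare autocorrelation for one slow-mixing subject parameter
## (first stored column; actual column names depend on the data).
autocorr.plot(fit.px[, 1])
autocorr.plot(fit.nopx[, 1])
```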