Uncertainty quantification (UQ) using Markov Chain Monte Carlo (MCMC) methods is a powerful approach in Bayesian inference. However, there are scenarios where MCMC is a poor choice or fails outright. Below is an example illustrating a case where MCMC struggles with uncertainty quantification due to multimodality and poor mixing.
Consider a Bayesian inference problem where the posterior distribution is highly multimodal. If the MCMC sampler does not mix well between the modes, it can give misleading results.
We define a posterior distribution as a mixture of two Gaussians:
$$ p(x) = 0.5 \cdot \mathcal{N}(x \mid -3,\, 1) + 0.5 \cdot \mathcal{N}(x \mid 3,\, 1) $$
This distribution has two well-separated peaks, making it hard for some MCMC algorithms to explore both modes effectively.
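In code, this target can be expressed through its log-density. The following is a minimal sketch assuming NumPy and SciPy; the function name `log_target` is illustrative and is not taken from the linked gist:

```python
import numpy as np
from scipy.stats import norm

def log_target(x):
    """Log-density of the bimodal mixture 0.5*N(-3, 1) + 0.5*N(3, 1)."""
    return np.logaddexp(
        np.log(0.5) + norm.logpdf(x, loc=-3.0, scale=1.0),
        np.log(0.5) + norm.logpdf(x, loc=3.0, scale=1.0),
    )
```

Working with the log-density keeps the acceptance ratio numerically stable, which matters in the tails where both component densities underflow.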
We will use a random-walk Metropolis-Hastings sampler, whose local proposals make it struggle to jump across the region of low density separating the two modes.
The accompanying code is linked here: https://gist.github.com/viadean/b91a1f32ae352cc04322980c37968ca9
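Below is a minimal random-walk Metropolis-Hastings sketch that reuses `log_target` from above. The starting point, step size (`proposal_scale`), and iteration count are illustrative assumptions, not values taken from the gist:

```python
import numpy as np

def metropolis_hastings(log_target, x0=-3.0, n_samples=20_000,
                        proposal_scale=0.5, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x, log_p = x0, log_target(x0)
    for i in range(n_samples):
        # Propose a local move; small steps rarely bridge the wide
        # low-density region between the two modes.
        x_prop = x + rng.normal(scale=proposal_scale)
        log_p_prop = log_target(x_prop)
        # Accept with probability min(1, p(x') / p(x)).
        if np.log(rng.uniform()) < log_p_prop - log_p:
            x, log_p = x_prop, log_p_prop
        samples[i] = x
    return samples

samples = metropolis_hastings(log_target)
# Started near -3, the chain tends to spend long stretches in that mode,
# so the sample mean is pulled toward -3 rather than the true mean of 0,
# and the reported posterior spread understates the real uncertainty.
print(samples.mean(), samples.std())
```

A single run like this can look perfectly well behaved by standard within-chain diagnostics while silently missing the second mode, which is exactly why the resulting uncertainty estimates are misleading.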
To improve uncertainty quantification in such cases, one could: